{"video_id": "hg2Q_O5b9w4", "text": "hi there, today we're going to look at CURL: Contrastive Unsupervised Representations for Reinforcement Learning by Aravind Srinivas, Michael Laskin and Pieter Abbeel. So this is a general framework for unsupervised representation learning for RL. Let's untangle the title a little bit. It is for reinforcement learning; if you don't know what reinforcement learning is, I've done a bunch of videos on RL frameworks. It's for general reinforcement learning, which means it can be paired with almost any RL algorithm out there, so we're not", "start_timestamp": "00:00:00", "end_timestamp": "00:00:42", "start_second": 0, "end_second": 42, "url": "https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=0s", "title": "CURL: Contrastive Unsupervised Representations for Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/hg2Q_O5b9w4/maxresdefault.jpg"} {"video_id": "hg2Q_O5b9w4", "text": "going to, you know, dive into specific RL algorithms today. It is unsupervised, which means it doesn't need any sort of labels, and it also doesn't need a reward signal for RL, which is pretty cool because usually entire RL pipelines rely on some sort of a reward or auxiliary reward signal. Now there is a training objective here, but it doesn't have to do with the RL reward. And then, it is learning representations, which means it learns intermediate representations of the input data that are useful. And in the end,", "start_timestamp": "00:00:42", "end_timestamp": "00:01:23", "start_second": 42, "end_second": 83, "url": "https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=42s", "title": "CURL: Contrastive Unsupervised Representations for Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/hg2Q_O5b9w4/maxresdefault.jpg"} {"video_id": "hg2Q_O5b9w4", "text": "it is contrastive, and that is the kind of secret sauce in here: the training objective is what's called contrastive learning, and that's what we're going to spend most of our
time on today exploring what that means. Alright, so here's the general framework, you can see it down here, sorry about that. You can see that reinforcement learning is just a box, which means we don't care about the RL algorithm you use; that's just, you know, what comes at the end. What comes at the beginning? Here is the observation. So the observation in an RL", "start_timestamp": "00:01:23", "end_timestamp": "00:02:04", "start_second": 83, "end_second": 124, "url": "https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=83s", "title": "CURL: Contrastive Unsupervised Representations for Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/hg2Q_O5b9w4/maxresdefault.jpg"} {"video_id": "hg2Q_O5b9w4", "text": "algorithm is kind of fundamental. Now if someone explains RL, or reinforcement learning, to you, usually what they'll say is: there is some kind of actor and there is some kind of environment, and the environment will give you an observation, observation o, which is, let's say, an image. So in this RL framework specifically, the examples they give are of image-based reinforcement learning. Let's say the Atari game, where you have this little spaceship here and there are meteorites up here and you need to shoot", "start_timestamp": "00:02:04", "end_timestamp": "00:02:48", "start_second": 124, "end_second": 168, "url": "https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=124s", "title": "CURL: Contrastive Unsupervised Representations for Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/hg2Q_O5b9w4/maxresdefault.jpg"} {"video_id": "hg2Q_O5b9w4", "text": "them, so there is a little shot here; you need to shoot those meteorites. So this is the observation o, and then as an actor you have to come up with some sort of action, and the actions here can be something like move to the left, move to the right, press the button that, you know, does the shooting. So you have to come up with an action somehow given this
observation, and then the environment will give you back a reward along with the next observation, like the next frame of the game, and you're gonna have to come up with", "start_timestamp": "00:02:48", "end_timestamp": "00:03:23", "start_second": 168, "end_second": 203, "url": "https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=168s", "title": "CURL: Contrastive Unsupervised Representations for Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/hg2Q_O5b9w4/maxresdefault.jpg"} {"video_id": "hg2Q_O5b9w4", "text": "another action in response to that, and the environment's going to give you back another reward and the next observation, and so on. So what you want to do is find a mapping from observation to action such that your reward is going to be as high as possible. This is the fundamental problem of RL, and usually what people do is take this mapping here from observation to action to be some sort of function that is parameterized, maybe, and nowadays of course it's often a neural network, but", "start_timestamp": "00:03:23", "end_timestamp": "00:04:02", "start_second": 203, "end_second": 242, "url": "https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=203s", "title": "CURL: Contrastive Unsupervised Representations for Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/hg2Q_O5b9w4/maxresdefault.jpg"} {"video_id": "hg2Q_O5b9w4", "text": "you're trying to learn, given the input observation, what output action you need to take. And you can think of the same here: you have this input observation up here, and down here, after the reinforcement learning, the output is going to be an action. So this function we talked about up here is usually implemented as: you put the observation into the RL framework, and then the RL framework learns this f of theta function to give you an action. Now here you can see the pipeline is a bit different: we don't", "start_timestamp": "00:04:02",
"end_timestamp": "00:04:39", "start_second": 242, "end_second": 279, "url": "https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=242s", "title": "CURL: Contrastive Unsupervised Representations for Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/hg2Q_O5b9w4/maxresdefault.jpg"} {"video_id": "hg2Q_O5b9w4", "text": "want to shove the observation in directly; we don't want the observation directly, but what we put into the RL framework is this q thing. Now the q is supposed to be a representation of the observation, and a useful representation. So if we think of this game here, of this Atari game up here, what could be a useful representation? If I had to craft one by hand, how would I construct a useful representation? Keep in mind, the goal is to have a representation of the observation that is more useful to the", "start_timestamp": "00:04:39", "end_timestamp": "00:05:22", "start_second": 279, "end_second": 322, "url": "https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=279s", "title": "CURL: Contrastive Unsupervised Representations for Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/hg2Q_O5b9w4/maxresdefault.jpg"} {"video_id": "hg2Q_O5b9w4", "text": "RL algorithm than just the pure pixels of the image. So if I have to craft a representation, let's say our representations need to be vectors; what I would do is probably take the x and y coordinates of the little spaceship, x and y, and put them in the vector, that's pretty useful. And then I would probably take the x and y coordinates of the meteorites that are around, let's say there are a maximum of two, so x y, x y here. I would probably take the angle, the angle where my spaceship", "start_timestamp": "00:05:22", "end_timestamp": "00:06:07", "start_second": 322, "end_second": 367, "url": "https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=322s", "title": "CURL: Contrastive Unsupervised Representations for
Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/hg2Q_O5b9w4/maxresdefault.jpg"} {"video_id": "hg2Q_O5b9w4", "text": "is pointing to; that should be pretty useful, because if I shoot, I want to know where I shoot. So theta here, and then probably the x and y coordinate of the shot here, of the red shot that I fired, if there is one; I'm also going to put that into my representation, so x and y, and maybe delta x, delta y, something like this. So you can see, if I had to handcraft something, I can pretty much guarantee that if I put this representation right here into the RL algorithm, it would turn out", "start_timestamp": "00:06:07", "end_timestamp": "00:06:52", "start_second": 367, "end_second": 412, "url": "https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=367s", "title": "CURL: Contrastive Unsupervised Representations for Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/hg2Q_O5b9w4/maxresdefault.jpg"} {"video_id": "hg2Q_O5b9w4", "text": "guaranteed, it would turn out to be a better RL agent that learns faster than if I put in the original observation, which is the pixel image of the game. Because of course, in order to play the game correctly, in order to play the game to win, you need to extract this information: there's something like a spaceship, there's something like meteorites. These are all things that an RL agent doesn't know per se and would have to learn from the pixels. But if I already give it the information", "start_timestamp": "00:06:52", "end_timestamp": "00:07:29", "start_second": 412, "end_second": 449, "url": "https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=412s", "title": "CURL: Contrastive Unsupervised Representations for Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/hg2Q_O5b9w4/maxresdefault.jpg"} {"video_id": "hg2Q_O5b9w4", "text": "that is useful, it can learn much faster. All right, so you can see, if I handcraft a good
representation, it's pretty easy for the RL algorithm to improve. Now we want to come up with a framework that automatically comes up with a good representation, so it alleviates the RL algorithm from having to learn a good representation. It is already burdened with learning what a good action is in any given situation; we want to alleviate it of the burden to also", "start_timestamp": "00:07:29", "end_timestamp": "00:08:10", "start_second": 449, "end_second": 490, "url": "https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=449s", "title": "CURL: Contrastive Unsupervised Representations for Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/hg2Q_O5b9w4/maxresdefault.jpg"} {"video_id": "hg2Q_O5b9w4", "text": "extract useful information from the observation space. So how do we do this? This q here is supposed to be exactly that: it's supposed to be a good representation, but not one that we handcrafted, but one obtained with a technique that can be employed pretty much everywhere. The secret sauce here is this contrastive loss thing; this contrastive learning is the kind of magic thing that will give us good representations. So what is contrastive learning? In this case I'm", "start_timestamp": "00:08:10", "end_timestamp": "00:08:55", "start_second": 490, "end_second": 535, "url": "https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=490s", "title": "CURL: Contrastive Unsupervised Representations for Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/hg2Q_O5b9w4/maxresdefault.jpg"} {"video_id": "hg2Q_O5b9w4", "text": "going to explain it for this kind of image-based reinforcement learning, but really just for image-based neural networks: how can we come up with a contrastive loss? So you see there's kind of a two-pipeline thing going on here, there is like this and this, and then one of them is going to be the good encoding. All
right, so let's check it out. Let's say we have this image that we had before, I'll draw it again: this little spaceship, this and this and so on. And we want to do this: what we", "start_timestamp": "00:08:55", "end_timestamp": "00:09:50", "start_second": 535, "end_second": 590, "url": "https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=535s", "title": "CURL: Contrastive Unsupervised Representations for Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/hg2Q_O5b9w4/maxresdefault.jpg"} {"video_id": "hg2Q_O5b9w4", "text": "need to do is produce three different things from it: we need to produce what's called an anchor, we need to produce a positive sample, and we need to produce negative samples; let's just go with one negative sample for now. So the goal is to come up with a task where we produce our own labels. Since we're training an encoder, and the encoder is a neural network that's parameterized, we need some sort of loss function. So the goal is to come up with a method where we can", "start_timestamp": "00:09:50", "end_timestamp": "00:10:31", "start_second": 590, "end_second": 631, "url": "https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=590s", "title": "CURL: Contrastive Unsupervised Representations for Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/hg2Q_O5b9w4/maxresdefault.jpg"} {"video_id": "hg2Q_O5b9w4", "text": "create our own labels for a task, but we construct the task in a way such that the neural network has no choice but to learn something meaningful, even though we made up the task ourselves. All right, I hope this was kind of clear. So how are we gonna do this? Our method of choice here is going to be random cropping. Now random cropping means that I take an image and I crop a piece from it, a smaller piece from the image; I just take a view inside the image. So in the case of the anchor, I'm gonna draw the same picture here, bear with me",
"start_timestamp": "00:10:31", "end_timestamp": "00:11:16", "start_second": 631, "end_second": 676, "url": "https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=631s", "title": "CURL: Contrastive Unsupervised Representations for Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/hg2Q_O5b9w4/maxresdefault.jpg"} {"video_id": "hg2Q_O5b9w4", "text": "I'm gonna draw the same picture here a couple of times; this is all supposed to be the same picture, and for the negative sample I'm just gonna leave it empty for now. There are two meteorites, two meteorites, shot, shot. So for the anchor we're actually going to not random crop but center crop, so we're going to take the center of the image here. The assumption is kind of that if I center crop, I won't lose, you know, too much of the image; I can actually make the crop bigger such that almost everything of", "start_timestamp": "00:11:16", "end_timestamp": "00:11:59", "start_second": 676, "end_second": 719, "url": "https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=676s", "title": "CURL: Contrastive Unsupervised Representations for Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/hg2Q_O5b9w4/maxresdefault.jpg"} {"video_id": "hg2Q_O5b9w4", "text": "the image is somewhat contained in it. All right, so this is going to be my anchor, and then the positive sample is going to be a random crop of the same image, so I'm just randomly going to select a same-size section from that image, let's say this up right here. And the negative sample is going to be a random crop from a different image. So a different image might be from the same game, but maybe there is a meteorite here and there is no shot, I don't shoot, and I'm going", "start_timestamp": "00:11:59", "end_timestamp": "00:12:45", "start_second": 719, "end_second": 765, "url": "https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=719s", "title": "CURL: Contrastive Unsupervised Representations for
Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/hg2Q_O5b9w4/maxresdefault.jpg"} {"video_id": "hg2Q_O5b9w4", "text": "to take a random crop from this, let's say I'm going to take a random crop here; let's put a meteorite here as well, just for fun. All right, so these are going to be our three samples, and now the question is going to be: if I give the anchor to the neural network, I'm going to say, I give you the anchor, but I'm also going to give you this and this thing. And I'm not going to give any of the rest, I'm just going to give whatever I cropped, so just these things. So I ask the neural network: neural network, I give you the", "start_timestamp": "00:12:45", "end_timestamp": "00:13:39", "start_second": 765, "end_second": 819, "url": "https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=765s", "title": "CURL: Contrastive Unsupervised Representations for Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/hg2Q_O5b9w4/maxresdefault.jpg"} {"video_id": "hg2Q_O5b9w4", "text": "anchor; now, which one of these two crops comes from the same image? As a human, you look at this, and if you just see the center crop, you see: oh okay, down here there's the tip of this thing and then there's the shot, and in relation to the shot there is a meteor here. And then you look at the second one and you say: okay, I don't see the spaceship, but there's the same relation here from the shot to the meteor, and I can kind of see the meteor up here, and this also fits with that, and the spaceship", "start_timestamp": "00:13:39", "end_timestamp": "00:14:18", "start_second": 819, "end_second": 858, "url": "https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=819s", "title": "CURL: Contrastive Unsupervised Representations for Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/hg2Q_O5b9w4/maxresdefault.jpg"} {"video_id": "hg2Q_O5b9w4", "text": "must be, you know, down here somewhere. And then I go over here and I try to do the
same thing: okay, here's the meteor, and, you know, in the original image it might be over here somewhere, so that's possible, I don't see it, that's possible. But then there should be a shot, or sorry, further up, there should be a shot somewhere here, I'm pretty sure, because there's one over here, and I don't see it. So I am fairly sure, mister task-asker, that this image here", "start_timestamp": "00:14:18", "end_timestamp": "00:15:03", "start_second": 858, "end_second": 903, "url": "https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=858s", "title": "CURL: Contrastive Unsupervised Representations for Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/hg2Q_O5b9w4/maxresdefault.jpg"} {"video_id": "hg2Q_O5b9w4", "text": "is the positive sample, while this image here is the negative sample. So this is the task that you ask of the neural network: you give it the anchor and you ask which one of these two comes from the same image. This is called contrastive learning. Now it is a bit more complicated, in that of course what you do is encode these things using neural networks; so the anchor, all of these things, you're going to encode using a neural network, and then this is what's going to", "start_timestamp": "00:15:03", "end_timestamp": "00:15:50", "start_second": 903, "end_second": 950, "url": "https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=903s", "title": "CURL: Contrastive Unsupervised Representations for Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/hg2Q_O5b9w4/maxresdefault.jpg"} {"video_id": "hg2Q_O5b9w4", "text": "become the query, and these are becoming the keys, so key 1 and key 2, and then you're going to feed always two of them into a bilinear product. The bilinear product: you can simply think of it as an inner product in a transformed space that you can learn. So you're going to have this
you have these two here, these go into q W k1, and then these two here, sorry, this and this go into q W k2. Now W here is a learnable parameter, so you have some freedom, and then you basically take whichever one of those two is highest. So this might", "start_timestamp": "00:15:50", "end_timestamp": "00:16:39", "start_second": 950, "end_second": 999, "url": "https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=950s", "title": "CURL: Contrastive Unsupervised Representations for Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/hg2Q_O5b9w4/maxresdefault.jpg"} {"video_id": "hg2Q_O5b9w4", "text": "be this high, and this might only be this high, and then you say: aha, cool, this one's higher, so this one must be the positive. And you train the W specifically to make the positive ones higher and the negative ones lower. So this is a supervised learning task, where these things here are going to be the logits, their inner products, and you basically then pick the one that is highest, in a softmax way. And they put this in the paper: if we go down here, the objective that they use to", "start_timestamp": "00:16:39", "end_timestamp": "00:17:23", "start_second": 999, "end_second": 1043, "url": "https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=999s", "title": "CURL: Contrastive Unsupervised Representations for Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/hg2Q_O5b9w4/maxresdefault.jpg"} {"video_id": "hg2Q_O5b9w4", "text": "do the contrastive learning is this one. As you can see, it's a softmax, like in multi-class classification, of the bilinear product with the positive sample over the bilinear product with the positive sample plus the bilinear products with all of the negative samples; so you're going to come up with more than one negative sample. All right, now the only thing left that we don't have here is the encoding: how you're going to come from the image space to this space here
is going to be slightly different, depending on", "start_timestamp": "00:17:23", "end_timestamp": "00:18:09", "start_second": 1043, "end_second": 1089, "url": "https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=1043s", "title": "CURL: Contrastive Unsupervised Representations for Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/hg2Q_O5b9w4/maxresdefault.jpg"} {"video_id": "hg2Q_O5b9w4", "text": "whether you're talking about the anchor or about what are called the keys, the things you compare to. And this is out of a kind of stability criterion; you maybe already know something like double Q-learning or things like this: sometimes, when you train with your own output, so in Q-learning you're kind of trying to come up with an actor and a critic, or, it's not the same thing, but you're kind of using the same neural network twice in your setup, and then you compare the outputs to each other,", "start_timestamp": "00:18:09", "end_timestamp": "00:18:53", "start_second": 1089, "end_second": 1133, "url": "https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=1089s", "title": "CURL: Contrastive Unsupervised Representations for Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/hg2Q_O5b9w4/maxresdefault.jpg"} {"video_id": "hg2Q_O5b9w4", "text": "which, you know, leads to instability. So in our case we have it multiple times; specifically, for this objective here, we have twice something that was encoded by the same neural network on the two sides of this bilinear product. If we were to use the same neural network, that tends to be somewhat unstable, so we have different neural networks: one that will encode the query, which is this f_q, and one which will encode the keys, f_k. Now we don't want to learn two neural networks, and that's why", "start_timestamp": "00:18:53", "end_timestamp": "00:19:36", "start_second": 1133, "end_second": 1176, "url":
"https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=1133s", "title": "CURL: Contrastive Unsupervised Representations for Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/hg2Q_O5b9w4/maxresdefault.jpg"} {"video_id": "hg2Q_O5b9w4", "text": "there's a bit of a compromise where we say it is basically the same neural network, but this one is the one we learn, and then every now and then we transfer over the parameters to that one; in fact, each step we transfer over the parameters and do an exponential moving average with the parameters of this momentum encoder from the step before. So the momentum encoder parameters are a moving average of the parameters of the query encoder, and that way you kind of get the best of both worlds: you don't", "start_timestamp": "00:19:36", "end_timestamp": "00:20:21", "start_second": 1176, "end_second": 1221, "url": "https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=1176s", "title": "CURL: Contrastive Unsupervised Representations for Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/hg2Q_O5b9w4/maxresdefault.jpg"} {"video_id": "hg2Q_O5b9w4", "text": "have to learn a second neural network, but your second neural network is not the same as your first neural network; it kind of lags behind, but it is also performing almost as well. I don't know if that makes sense, but it is the best I can do to explain it. So to recap: you take your observation, you center crop for your anchor, that gets your query, and then you random crop for your keys, into positive and negative samples. So you random crop from the same observation or from", "start_timestamp": "00:20:21", "end_timestamp": "00:21:13", "start_second": 1221, "end_second": 1273, "url": "https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=1221s", "title": "CURL: Contrastive Unsupervised Representations for Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/hg2Q_O5b9w4/maxresdefault.jpg"}
{"video_id": "hg2Q_O5b9w4", "text": "different observations; these become your positive and negative samples. Then you push these through your encoders for the query and for the keys respectively; you end up with the q, which is the encoded anchor, and the k's, which are the encoded positive and negative samples. And then you update this encoder here using the contrastive loss, and at the same time you feed the q here into the reinforcement learning algorithm, and you learn your reinforcement learning algorithm. Instead", "start_timestamp": "00:21:13", "end_timestamp": "00:22:01", "start_second": 1273, "end_second": 1321, "url": "https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=1273s", "title": "CURL: Contrastive Unsupervised Representations for Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/hg2Q_O5b9w4/maxresdefault.jpg"} {"video_id": "hg2Q_O5b9w4", "text": "of having the observation directly as an input here, you now have the q as an input. That is it: the reinforcement learning works exactly the same, except instead of the input o you now have the representation q as input, and you don't have to worry about anything else; in terms of the reinforcement learning algorithm, it stays exactly the same. This whole thing here can actually run in parallel, or you can think of it off-policy, on-policy; it is sort of", "start_timestamp": "00:22:01", "end_timestamp": "00:22:41", "start_second": 1321, "end_second": 1361, "url": "https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=1321s", "title": "CURL: Contrastive Unsupervised Representations for Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/hg2Q_O5b9w4/maxresdefault.jpg"} {"video_id": "hg2Q_O5b9w4", "text": "modular in how you fit this in; it simply comes up with good representations. So that is basically the deal here, and you hope that the whole procedure of this contrastive learning
then gives you a good representation of this anchor thing here: if you encode that to the q, you hope that this representation is now a good representation as a basis for the RL algorithm, and it turns out, at least in their experiments, it is. So here you see the same thing; they actually do something more, where in RL you usually deal", "start_timestamp": "00:22:41", "end_timestamp": "00:23:21", "start_second": 1361, "end_second": 1401, "url": "https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=1361s", "title": "CURL: Contrastive Unsupervised Representations for Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/hg2Q_O5b9w4/maxresdefault.jpg"} {"video_id": "hg2Q_O5b9w4", "text": "with a stack of observations, not just a single observation; for example, in Atari, people always concatenate something like the four last frames. And their point is: okay, if we have this stack here, if we do this data augmentation, you know, these crops, we kind of need to do them consistently. We need to crop every single image at the same point for the query, and also, if we do a random crop, let's say a random crop down here, we need to do the same random crop for all of the stack of images here.", "start_timestamp": "00:23:21", "end_timestamp": "00:23:59", "start_second": 1401, "end_second": 1439, "url": "https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=1401s", "title": "CURL: Contrastive Unsupervised Representations for Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/hg2Q_O5b9w4/maxresdefault.jpg"} {"video_id": "hg2Q_O5b9w4", "text": "So that is kind of the additional thing they introduced with respect to RL that deals with stacked time frames, but it's kind of the same diagram as above. So they explain the RL algorithms they use, and exactly their method, and here you can see that the anchor is a crop and the positive sample is a random crop from the same image; this would be up here
somewhere; the anchor is cropped from the middle, and then the negative would be a random crop from a different", "start_timestamp": "00:23:59", "end_timestamp": "00:24:42", "start_second": 1439, "end_second": 1482, "url": "https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=1439s", "title": "CURL: Contrastive Unsupervised Representations for Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/hg2Q_O5b9w4/maxresdefault.jpg"} {"video_id": "hg2Q_O5b9w4", "text": "image or a different stack of images. They have pseudocode here that is pretty simple; we'll just go through it quickly. You start off with f_q and f_k, these are the encoders for the query and keys; you start them off the same. Then you go through your data loader, you do this random augmentation of your query and your keys, and I'm not even sure if the random augmentation actually needs to be a central crop for the anchor, or just two different crops from the same image; that might be as", "start_timestamp": "00:24:42", "end_timestamp": "00:25:22", "start_second": 1482, "end_second": 1522, "url": "https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=1482s", "title": "CURL: Contrastive Unsupervised Representations for Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/hg2Q_O5b9w4/maxresdefault.jpg"} {"video_id": "hg2Q_O5b9w4", "text": "well; I guess it's a thing you could choose, I don't know what exactly is the best thing. Alright, then I forward the query through f_q and I forward the keys through f_k; then, importantly, I detach this, so I don't want to train f_k, I only want to train f_q. Then I do the bilinear products here with the W, and then I put all of this into a cross-entropy loss. In the end I update my f_q and my W, and I do this exponential", "start_timestamp": "00:25:22", "end_timestamp": "00:26:13", "start_second": 1522, "end_second": 1573, "url":
"https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=1522s", "title": "CURL: Contrastive Unsupervised Representations for Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/hg2Q_O5b9w4/maxresdefault.jpg"} {"video_id": "hg2Q_O5b9w4", "text": "moving average for my key encoder. And they test on two different things: they test on the DeepMind control tasks, and they always test at 100k time steps. So their big point is data efficiency; they claim they can learn useful representations with not much data. So the task here is: how good are you at 100k time steps? You don't optimize until the end, you just get 100k time steps, and then the question is how good are you. And CURL here outperforms all of the baselines handily in the DeepMind", "start_timestamp": "00:26:13", "end_timestamp": "00:27:01", "start_second": 1573, "end_second": 1621, "url": "https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=1573s", "title": "CURL: Contrastive Unsupervised Representations for Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/hg2Q_O5b9w4/maxresdefault.jpg"} {"video_id": "hg2Q_O5b9w4", "text": "control tasks, and it also outperforms a lot of the baselines in the Atari tasks. Actually, if you look at the results, it doesn't outperform everything; for example, here the red is CURL and the dashed gray is State SAC. Now with State SAC, the important thing to note here is that it has access to the state, whereas CURL only works from pixels. So, like what I said before, if I had to craft a useful representation, basically State SAC has access to that, and you see that in many of the tasks CURL comes close", "start_timestamp": "00:27:01", "end_timestamp": "00:27:47", "start_second": 1621, "end_second": 1667, "url": "https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=1621s", "title": "CURL: Contrastive Unsupervised Representations for Reinforcement Learning", "thumbnail":
"https://i.ytimg.com/vi/hg2Q_O5b9w4/maxresdefault.jpg"} {"video_id": "hg2Q_O5b9w4", "text": "or performs equally well to the state si si right so that's pretty impressive especially if you've took at pixel si si sorry which is the same algorithm but does not have access to the state just the pixels it often fails terribly right so um that is pretty interesting to see and even to me it's pretty interesting to see that this kind of this kind of algorithm this kind of self labeled algorithm comes up with such useful representations all right so I hope I have explained this satisfactorily and check out the paper for more experiments", "start_timestamp": "00:27:47", "end_timestamp": "00:28:35", "start_second": 1667, "end_second": 1715, "url": "https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=1667s", "title": "CURL: Contrastive Unsupervised Representations for Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/hg2Q_O5b9w4/maxresdefault.jpg"} {"video_id": "hDQNCWR3HLQ", "text": "on April 21st jurgen schmidhuber tweeted out stop crediting the wrong people for inventions made by others at least in science the facts will always win at the end as long as the facts have not yet won it is not yet the end no fancy award can ever change that hashtag it self-correcting science hashtag plagiarism and links to an article of his own website where he wrote critique of Honda Prize for dr. Hinton so this is on Schmidt Hoover's own website and it's by himself and don't you love this how to pronounce his", "start_timestamp": "00:00:00", "end_timestamp": "00:00:41", "start_second": 0, "end_second": 41, "url": "https://www.youtube.com/watch?v=hDQNCWR3HLQ&t=0s", "title": "[Drama] Schmidhuber: Critique of Honda Prize for Dr. 
Hinton", "thumbnail": "https://i.ytimg.com/vi/hDQNCWR3HLQ/hqdefault.jpg"} {"video_id": "hDQNCWR3HLQ", "text": "name jurgen schmidhuber you again sorry this is this is absolutely great so both actually Schmid over and Hinton are on Twitter you can tweet at them and follow them this article here is a basically a critique of the press release of Honda when they awarded geoff hinton for his achievements and it goes through it step by step and we won't look at the whole thing but just two for you to get the flavor so here honda says dr. Hinton has created a number of technologies that have enabled the broader application of", "start_timestamp": "00:00:41", "end_timestamp": "00:01:21", "start_second": 41, "end_second": 81, "url": "https://www.youtube.com/watch?v=hDQNCWR3HLQ&t=41s", "title": "[Drama] Schmidhuber: Critique of Honda Prize for Dr. Hinton", "thumbnail": "https://i.ytimg.com/vi/hDQNCWR3HLQ/hqdefault.jpg"} {"video_id": "hDQNCWR3HLQ", "text": "AI including the backpropagation algorithm that forms the basis of deep learning approach to AI and schmidhuber just goes off its he basically claims him while Hinton and his co-workers have made certain significant contributions to deep learning he claimed above is plain wrong right he did not invent back propagation the person who invented back propagation was settled in linear MA and the many papers he says basically many papers failed to cite linin MA and this who was the original inventor of back prop and so on and he go kind of goes", "start_timestamp": "00:01:21", "end_timestamp": "00:02:05", "start_second": 81, "end_second": 125, "url": "https://www.youtube.com/watch?v=hDQNCWR3HLQ&t=81s", "title": "[Drama] Schmidhuber: Critique of Honda Prize for Dr. 
Hinton", "thumbnail": "https://i.ytimg.com/vi/hDQNCWR3HLQ/hqdefault.jpg"} {"video_id": "hDQNCWR3HLQ", "text": "through a history of this and how it's even earlier I always have a bit of a trouble with claims like who invented what because when it is an algo them really the same thing right and when he when is it a variation on another algorithm and when is it something completely new it's never entirely clear but the the points here made that the things the backpropagation algorithm existed before Hinton and also that some of the papers some of the seminal papers did not cite the correct origin statement to in 2002 he", "start_timestamp": "00:02:05", "end_timestamp": "00:02:42", "start_second": 125, "end_second": 162, "url": "https://www.youtube.com/watch?v=hDQNCWR3HLQ&t=125s", "title": "[Drama] Schmidhuber: Critique of Honda Prize for Dr. Hinton", "thumbnail": "https://i.ytimg.com/vi/hDQNCWR3HLQ/hqdefault.jpg"} {"video_id": "hDQNCWR3HLQ", "text": "introduced the a fast learning algorithm for restricted Boltzmann machines that allowed them to learn a single layer of distributor representation without requiring any labeled data these methods allow deep learning to work better and they led to the current deep learning revolution and he is no dr. Hinton's interesting unsupervised pre training for deep neural networks was irrelevant for the current latif learning revolution in 2010 our team showed that the feed-forward networks can be trained by plain backprop do not at all require", "start_timestamp": "00:02:42", "end_timestamp": "00:03:16", "start_second": 162, "end_second": 196, "url": "https://www.youtube.com/watch?v=hDQNCWR3HLQ&t=162s", "title": "[Drama] Schmidhuber: Critique of Honda Prize for Dr. 
Hinton", "thumbnail": "https://i.ytimg.com/vi/hDQNCWR3HLQ/hqdefault.jpg"} {"video_id": "hDQNCWR3HLQ", "text": "pre training and he basically again says apart from this Hinton's unsupervised pretending was conceptually a rehash of my unsupervised pre training for deep recurrent neural networks so he you know as you know she made Ober has done a lot of work in recurrent neural networks and he basically says it it was just a rehash of his algorithm now I I have to say I have so first look first of all he he makes a point here right that we don't really do unsupervised pre-training him or until now of course but you like for to train an amnesty law", "start_timestamp": "00:03:16", "end_timestamp": "00:03:55", "start_second": 196, "end_second": 235, "url": "https://www.youtube.com/watch?v=hDQNCWR3HLQ&t=196s", "title": "[Drama] Schmidhuber: Critique of Honda Prize for Dr. Hinton", "thumbnail": "https://i.ytimg.com/vi/hDQNCWR3HLQ/hqdefault.jpg"} {"video_id": "hDQNCWR3HLQ", "text": "fighter you don't have to do that but it's also doubtful that this this was a step even though even if it wasn't on the exact path to the current situation it was a thing that got people excited maybe and so the critique is like half valid and also it doesn't help me to burn that he always compares it to his own things like it just it just like either criticized them for you know in general things but then avoid bringing your own things in because it just sounds like I did this before and also I read some papers", "start_timestamp": "00:03:55", "end_timestamp": "00:04:34", "start_second": 235, "end_second": 274, "url": "https://www.youtube.com/watch?v=hDQNCWR3HLQ&t=235s", "title": "[Drama] Schmidhuber: Critique of Honda Prize for Dr. 
Hinton", "thumbnail": "https://i.ytimg.com/vi/hDQNCWR3HLQ/hqdefault.jpg"} {"video_id": "hDQNCWR3HLQ", "text": "from from these times people just wrote papers sometimes I haven't read this specific one but sometimes people just wrote papers writing down their ideas like one could do this and this and this never doing any experiments or actually specifying exactly what they mean they just kind of wrote down a bunch of ideas and that got published especially like there's some some reinforcement learning papers where people are just like oh one I imagine agents doing this and learning from that so it is again it is never", "start_timestamp": "00:04:34", "end_timestamp": "00:05:11", "start_second": 274, "end_second": 311, "url": "https://www.youtube.com/watch?v=hDQNCWR3HLQ&t=274s", "title": "[Drama] Schmidhuber: Critique of Honda Prize for Dr. Hinton", "thumbnail": "https://i.ytimg.com/vi/hDQNCWR3HLQ/hqdefault.jpg"} {"video_id": "hDQNCWR3HLQ", "text": "really clear in ideas or just had by everyone I think people people mistake this that think that the ideas are unique it's not ideas that are unique many people have the same ideas but some there's also execution and exact formalization and so on and exact level of specificity this all of this is really hard and then the honda says in 2009 dr. Hinton and two of his students used multi-layer neural nets to make major breakthrough and speech recognition that led directly to greatly improved and this of course Schrader who goes off by", "start_timestamp": "00:05:11", "end_timestamp": "00:05:48", "start_second": 311, "end_second": 348, "url": "https://www.youtube.com/watch?v=hDQNCWR3HLQ&t=311s", "title": "[Drama] Schmidhuber: Critique of Honda Prize for Dr. 
Hinton", "thumbnail": "https://i.ytimg.com/vi/hDQNCWR3HLQ/hqdefault.jpg"} {"video_id": "hDQNCWR3HLQ", "text": "this because speech recognition is of course prime LS TM territory so you don't want to go near this and the Honda further says revolutionized computer vision by showing that deep learning worked far better than existing state of the art and again he says the basic ingredients were already there and so on and the our team in Switzerland already used his first superior award-winning GPU based CNN and so on that's what it's called dan net was produced by his group and again this seems correct right this seems when he lays it out like this but", "start_timestamp": "00:05:48", "end_timestamp": "00:06:32", "start_second": 348, "end_second": 392, "url": "https://www.youtube.com/watch?v=hDQNCWR3HLQ&t=348s", "title": "[Drama] Schmidhuber: Critique of Honda Prize for Dr. Hinton", "thumbnail": "https://i.ytimg.com/vi/hDQNCWR3HLQ/hqdefault.jpg"} {"video_id": "hDQNCWR3HLQ", "text": "it doesn't change the fact that Alex net1 imagenet in 2012 and that was like the start of the deep learning revolution it was like wow you can cut the learn like the error rate by something like 30% simply by doing this deep learning stuff so again even if Dan that he says it blew away the competition it just seems it it always seems like Schmidt Hooper's kinda right but then also he's not he's like a cadet exact academic work and and the idea being there on a paper isn't the only thing that drives progress and says to achieve", "start_timestamp": "00:06:32", "end_timestamp": "00:07:22", "start_second": 392, "end_second": 442, "url": "https://www.youtube.com/watch?v=hDQNCWR3HLQ&t=392s", "title": "[Drama] Schmidhuber: Critique of Honda Prize for Dr. Hinton", "thumbnail": "https://i.ytimg.com/vi/hDQNCWR3HLQ/hqdefault.jpg"} {"video_id": "hDQNCWR3HLQ", "text": "their dramatic results dr. 
Hinton also invented a widely used new method called dropout, which reduces overfitting. No. Like, no. Just no. Randomly dropping parts in order to make something more robust, that is surely not a new thing, and he also says it's much earlier, there's this stochastic delta rule and so on, and he also critiques that this paper did not cite it; they just gave it the name. This is an idea that is kind of so simple that you wouldn't even necessarily think about researching whether it has existed", "start_timestamp": "00:07:22", "end_timestamp": "00:08:08", "start_second": 442, "end_second": 488, "url": "https://www.youtube.com/watch?v=hDQNCWR3HLQ&t=442s", "title": "[Drama] Schmidhuber: Critique of Honda Prize for Dr. Hinton", "thumbnail": "https://i.ytimg.com/vi/hDQNCWR3HLQ/hqdefault.jpg"} {"video_id": "hDQNCWR3HLQ", "text": "already; I think they just did it because it's a natural idea, and then they gave it a name and the name stuck. It's not about the idea itself. And then lastly they say: of the countless AI-based technological services across the world, it is no exaggeration to say that few would have been possible without the results Dr. Hinton created. I love this: name one that would not have been possible. And he just gives a list of things from his own group that are basically possible without Hinton's contributions, and this is just a", "start_timestamp": "00:08:08", "end_timestamp": "00:08:47", "start_second": 488, "end_second": 527, "url": "https://www.youtube.com/watch?v=hDQNCWR3HLQ&t=488s", "title": "[Drama] Schmidhuber: Critique of Honda Prize for Dr. 
Hinton", "thumbnail": "https://i.ytimg.com/vi/hDQNCWR3HLQ/hqdefault.jpg"} {"video_id": "hDQNCWR3HLQ", "text": "bit of a cheap shot right clearly honda if they're not saying it would have been you know physically him possible without his contributions its but certainly Hinton has has if even if he hadn't invented any of those things he certainly has created like a spark and his these things created a splash got people excited people thinking about new ways of applying things even you know if this is all true so right and but but I would like you to I'd like you to notice this is a critique of what Honda says about Hinton and if I read", "start_timestamp": "00:08:47", "end_timestamp": "00:09:35", "start_second": 527, "end_second": 575, "url": "https://www.youtube.com/watch?v=hDQNCWR3HLQ&t=527s", "title": "[Drama] Schmidhuber: Critique of Honda Prize for Dr. Hinton", "thumbnail": "https://i.ytimg.com/vi/hDQNCWR3HLQ/hqdefault.jpg"} {"video_id": "hDQNCWR3HLQ", "text": "through the statements of Schmidt who were most of them are technically correct right and you know that so that was that and then I thought okay cool but then someone posted II didn't read it and then Hinton replies and this is okay don't you love this so Hinton says having a public debate with schmidhuber about academic credit is not at advisable because it just encourages him and there is no limit to the time and effort that he is willing to put into trying to discredit his perceived Arrivals he is even escorted to tricks", "start_timestamp": "00:09:35", "end_timestamp": "00:10:15", "start_second": 575, "end_second": 615, "url": "https://www.youtube.com/watch?v=hDQNCWR3HLQ&t=575s", "title": "[Drama] Schmidhuber: Critique of Honda Prize for Dr. 
Hinton", "thumbnail": "https://i.ytimg.com/vi/hDQNCWR3HLQ/hqdefault.jpg"} {"video_id": "hDQNCWR3HLQ", "text": "like having multiple aliases in Wikipedia to make it look as if other people agree the patient on his website about Alan Turing is a nice example of how he goes on trying to these are like these are shots fired and he says I'm going to respond once and only once I have never claimed that I invented backpropagation David Romo hard invented it independently after other after other people in other fields had invented it it's true when you first published we did not know the history so he basically says okay we did forget decided when we", "start_timestamp": "00:10:15", "end_timestamp": "00:10:56", "start_second": 615, "end_second": 656, "url": "https://www.youtube.com/watch?v=hDQNCWR3HLQ&t=615s", "title": "[Drama] Schmidhuber: Critique of Honda Prize for Dr. Hinton", "thumbnail": "https://i.ytimg.com/vi/hDQNCWR3HLQ/hqdefault.jpg"} {"video_id": "hDQNCWR3HLQ", "text": "first published about rock crop but he doesn't say he invented it what I've claimed is that I was the person to clearly demonstrate that back prop could learn interesting in turn represent and that that this is what made it popular right so this goes into into the direction schmidhuber is very much on academic contributions idea was there before and hint and basically says no what we did is kind of we showed that it works in this particular way and we can have got people excited about it I did is by forcing that blah blah blah and it", "start_timestamp": "00:10:56", "end_timestamp": "00:11:35", "start_second": 656, "end_second": 695, "url": "https://www.youtube.com/watch?v=hDQNCWR3HLQ&t=656s", "title": "[Drama] Schmidhuber: Critique of Honda Prize for Dr. 
Hinton", "thumbnail": "https://i.ytimg.com/vi/hDQNCWR3HLQ/hqdefault.jpg"} {"video_id": "hDQNCWR3HLQ", "text": "is he says it is true that many people in the press have said I invented back prop and I've spent a lot of time correcting them here's an excerpt from 2018 where this is I guess a quote from this book that quotes Hinton where he says lots of people invented different versions of back prop before day with normal heart they were mainly independent inventions something I feel I've got too much credit for it's one of these rare cases where an academic feels he has got too much credit for something my main contribution was to sure you can", "start_timestamp": "00:11:35", "end_timestamp": "00:12:08", "start_second": 695, "end_second": 728, "url": "https://www.youtube.com/watch?v=hDQNCWR3HLQ&t=695s", "title": "[Drama] Schmidhuber: Critique of Honda Prize for Dr. Hinton", "thumbnail": "https://i.ytimg.com/vi/hDQNCWR3HLQ/hqdefault.jpg"} {"video_id": "hDQNCWR3HLQ", "text": "use it for learning distributed representations so I'd like to set the record straight on that and then he said maybe Jurgen would like to set the record straight on who invented LST M's boom boom crazy shot shots fired by Hinton here this is I mean this is just great but again look at what Hinton says Hinton basically says yes I have not invented that I have corrected this on public record in the past and yeah so so that's what Hinton says and I mean the the the comments here are just gold I really invite you to read it and then", "start_timestamp": "00:12:08", "end_timestamp": "00:12:56", "start_second": 728, "end_second": 776, "url": "https://www.youtube.com/watch?v=hDQNCWR3HLQ&t=728s", "title": "[Drama] Schmidhuber: Critique of Honda Prize for Dr. 
Hinton", "thumbnail": "https://i.ytimg.com/vi/hDQNCWR3HLQ/hqdefault.jpg"} {"video_id": "hDQNCWR3HLQ", "text": "schmidhuber of course being Schmidt who replies again down here he has a a response to the reply and I don't expect Hinton to reply again so I waited for a bit but but I I believe him when he says he does it only once so he goes into this summary the facts presented in sections 1 2 3 4 5 are still valid so he goes what kind of statement by statements is having a public debate blah blah blah and he says this is an ad hominem attack which is true right this is true and he says he even has multiple aliases in Wikipedia", "start_timestamp": "00:12:56", "end_timestamp": "00:13:40", "start_second": 776, "end_second": 820, "url": "https://www.youtube.com/watch?v=hDQNCWR3HLQ&t=776s", "title": "[Drama] Schmidhuber: Critique of Honda Prize for Dr. Hinton", "thumbnail": "https://i.ytimg.com/vi/hDQNCWR3HLQ/hqdefault.jpg"} {"video_id": "hDQNCWR3HLQ", "text": "and he just says another ad hominem attack and then he goes into that schmidhuber tries to discredit Alan Turing and then shmita goes into this big long big long basically claim that Alan Turing wasn't as important as people made him out to be and people invented this kind of Turing machine equivalents before that again it's kind of showing tubers take that the idea basically was already there and these people don't get the correct credit and also he's correct that this is a this is a true it's an ad hominem attack right so you know be it", "start_timestamp": "00:13:40", "end_timestamp": "00:14:28", "start_second": 820, "end_second": 868, "url": "https://www.youtube.com/watch?v=hDQNCWR3HLQ&t=820s", "title": "[Drama] Schmidhuber: Critique of Honda Prize for Dr. 
Hinton", "thumbnail": "https://i.ytimg.com/vi/hDQNCWR3HLQ/hqdefault.jpg"} {"video_id": "hDQNCWR3HLQ", "text": "as it may this is correct and then when when Hinton goes that he doesn't stay and invent backdrop and me to persist this is finally response related to my post which is true right however he does not at all contradict what I wrote and it is true that he credited his co-author Rommel Hart with the invention but but neither cited alanine MA and also the statement lots of people he says it wasn't created by lots of different people but exactly one person so this I find como like can you really say now this is the", "start_timestamp": "00:14:28", "end_timestamp": "00:15:05", "start_second": 868, "end_second": 905, "url": "https://www.youtube.com/watch?v=hDQNCWR3HLQ&t=868s", "title": "[Drama] Schmidhuber: Critique of Honda Prize for Dr. Hinton", "thumbnail": "https://i.ytimg.com/vi/hDQNCWR3HLQ/hqdefault.jpg"} {"video_id": "hDQNCWR3HLQ", "text": "exact time when backprop was invented even though it probably wasn't in the current exact current formulation and it probably existed someone like this so but again and he his main claim is dr. Hinton except the Honda Prize although he apparently agrees that Honda's claims are false he should ask Honda to correct their statements and like in the end maybe you're going would like to set the record straight who invented LST M's and you know as we as you may know seppo writer it kind of invented LST ms under jurgen schmidhuber", "start_timestamp": "00:15:05", "end_timestamp": "00:15:48", "start_second": 905, "end_second": 948, "url": "https://www.youtube.com/watch?v=hDQNCWR3HLQ&t=905s", "title": "[Drama] Schmidhuber: Critique of Honda Prize for Dr. Hinton", "thumbnail": "https://i.ytimg.com/vi/hDQNCWR3HLQ/hqdefault.jpg"} {"video_id": "hDQNCWR3HLQ", "text": "as a as a PhD advisor but the to summarize dr. 
Hinton's comments and ad hominem arguments diverged from the contents of my post and do not challenge the facts, and so on. And I have to say, after reading this, this is correct. Hinton basically replies with: hey, I never claimed I invented backprop, other people invented it. And Schmidhuber doesn't criticize Hinton in this particular post (he may otherwise); Schmidhuber doesn't criticize Hinton for claiming that, he criticizes Honda for claiming that", "start_timestamp": "00:15:48", "end_timestamp": "00:16:30", "start_second": 948, "end_second": 990, "url": "https://www.youtube.com/watch?v=hDQNCWR3HLQ&t=948s", "title": "[Drama] Schmidhuber: Critique of Honda Prize for Dr. Hinton", "thumbnail": "https://i.ytimg.com/vi/hDQNCWR3HLQ/hqdefault.jpg"} {"video_id": "hDQNCWR3HLQ", "text": "Hinton did, and Hinton basically agrees with him. And also Schmidhuber says: Dr. Hinton accepted the Honda Prize although he apparently agrees that the claims are false; he should ask Honda to correct their statements. And it is true that Hinton accepted this prize under this press release. Now you might be able to say Hinton also says he's on the record basically saying he didn't do this, and I guess if you're Hinton and you've had a successful career and so on and you have previously", "start_timestamp": "00:16:30", "end_timestamp": "00:17:02", "start_second": 990, "end_second": 1022, "url": "https://www.youtube.com/watch?v=hDQNCWR3HLQ&t=990s", "title": "[Drama] Schmidhuber: Critique of Honda Prize for Dr. 
Hinton", "thumbnail": "https://i.ytimg.com/vi/hDQNCWR3HLQ/hqdefault.jpg"} {"video_id": "hDQNCWR3HLQ", "text": "really publicly stated that you didn't invent these things and you know made it clear and then you get a prize and they write this thing maybe you just don't want to go after every single press statement and correcting that but you know in essence basically Hinton and understood this as an attack on himself that he claims he invented back prop and schmidhuber says Honda claims he invented back rub and Hinton accepted the price so agrees with it and he basically agrees with it but doesn't say Honda should have corrected at which I", "start_timestamp": "00:17:02", "end_timestamp": "00:17:40", "start_second": 1022, "end_second": 1060, "url": "https://www.youtube.com/watch?v=hDQNCWR3HLQ&t=1022s", "title": "[Drama] Schmidhuber: Critique of Honda Prize for Dr. Hinton", "thumbnail": "https://i.ytimg.com/vi/hDQNCWR3HLQ/hqdefault.jpg"} {"video_id": "hDQNCWR3HLQ", "text": "can understand so this is my take on this issue it's kind of both or correct and they just kind of talk past each other and schmidhuber is always on the the idea existed before and Hinton is correct when he says it's not always just about the idea progress is also made by people being excited people actually getting something to work people you know doing something at the right time in the right place which is also correct but it is fun it is fun so so I just I enjoyed I enjoy this honestly like because ultimately this is", "start_timestamp": "00:17:40", "end_timestamp": "00:18:28", "start_second": 1060, "end_second": 1108, "url": "https://www.youtube.com/watch?v=hDQNCWR3HLQ&t=1060s", "title": "[Drama] Schmidhuber: Critique of Honda Prize for Dr. Hinton", "thumbnail": "https://i.ytimg.com/vi/hDQNCWR3HLQ/hqdefault.jpg"} {"video_id": "_X0mgOOSpLU", "text": "The power of yet. 
I heard about a high school in Chicago where students had to pass a certain number of courses to graduate, and if they didn't pass a course, they got the grade \"Not Yet.\" And I thought that was fantastic, because if you get a failing grade, you think, I'm nothing, I'm nowhere. But if you get the grade \"Not Yet\", you understand that you're on a learning curve. It gives you a path into the future. \"Not Yet\" also gave me insight into a critical event early in my career, a real turning point. I wanted to see", "start_timestamp": "00:00:00", "end_timestamp": "00:00:56", "start_second": 0, "end_second": 56, "url": "https://www.youtube.com/watch?v=_X0mgOOSpLU&t=0s", "title": "The power of believing that you can improve | Carol Dweck", "thumbnail": "https://i.ytimg.com/vi/_X0mgOOSpLU/maxresdefault.jpg"} {"video_id": "_X0mgOOSpLU", "text": "how children coped with challenge and difficulty, so I gave 10-year-olds problems that were slightly too hard for them. Some of them reacted in a shockingly positive way. They said things like, \"I love a challenge,\" or, \"You know, I was hoping this would be informative.\" They understood that their abilities could be developed. They had what I call a growth mindset. But other students felt it was tragic, catastrophic. From their more fixed mindset perspective, their intelligence had been up for judgment, and they failed.", "start_timestamp": "00:00:56", "end_timestamp": "00:01:50", "start_second": 56, "end_second": 110, "url": "https://www.youtube.com/watch?v=_X0mgOOSpLU&t=56s", "title": "The power of believing that you can improve | Carol Dweck", "thumbnail": "https://i.ytimg.com/vi/_X0mgOOSpLU/maxresdefault.jpg"} {"video_id": "_X0mgOOSpLU", "text": "Instead of luxuriating in the power of yet, they were gripped in the tyranny of now. So what do they do next? I'll tell you what they do next. In one study, they told us they would probably cheat the next time instead of studying more if they failed a test. 
In another study, after a failure, they looked for someone who did worse than they did so they could feel really good about themselves. And in study after study, they have run from difficulty. Scientists measured the electrical activity from the brain as students confronted an error.", "start_timestamp": "00:01:50", "end_timestamp": "00:02:40", "start_second": 110, "end_second": 160, "url": "https://www.youtube.com/watch?v=_X0mgOOSpLU&t=110s", "title": "The power of believing that you can improve | Carol Dweck", "thumbnail": "https://i.ytimg.com/vi/_X0mgOOSpLU/maxresdefault.jpg"} {"video_id": "_X0mgOOSpLU", "text": "On the left, you see the fixed-mindset students. There's hardly any activity. They run from the error. They don't engage with it. But on the right, you have the students with the growth mindset, the idea that abilities can be developed. They engage deeply. Their brain is on fire with yet. They engage deeply. They process the error. They learn from it and they correct it. How are we raising our children? Are we raising them for now instead of yet? Are we raising kids who are obsessed with getting As? Are we raising kids who don't know how to dream big dreams?", "start_timestamp": "00:02:40", "end_timestamp": "00:03:32", "start_second": 160, "end_second": 212, "url": "https://www.youtube.com/watch?v=_X0mgOOSpLU&t=160s", "title": "The power of believing that you can improve | Carol Dweck", "thumbnail": "https://i.ytimg.com/vi/_X0mgOOSpLU/maxresdefault.jpg"} {"video_id": "_X0mgOOSpLU", "text": "Their biggest goal is getting the next A, or the next test score? And are they carrying this need for constant validation with them into their future lives? Maybe, because employers are coming to me and saying, \"We have already raised a generation of young workers who can't get through the day without an award.\" So what can we do? How can we build that bridge to yet? Here are some things we can do. 
First of all, we can praise wisely, not praising intelligence or talent. That has failed. Don't do that anymore.", "start_timestamp": "00:03:32", "end_timestamp": "00:04:22", "start_second": 212, "end_second": 262, "url": "https://www.youtube.com/watch?v=_X0mgOOSpLU&t=212s", "title": "The power of believing that you can improve | Carol Dweck", "thumbnail": "https://i.ytimg.com/vi/_X0mgOOSpLU/maxresdefault.jpg"} {"video_id": "_X0mgOOSpLU", "text": "But praising the process that kids engage in, their effort, their strategies, their focus, their perseverance, their improvement. This process praise creates kids who are hardy and resilient. There are other ways to reward yet. We recently teamed up with game scientists from the University of Washington to create a new online math game that rewarded yet. In this game, students were rewarded for effort, strategy and progress. The usual math game rewards you for getting answers right, right now, but this game rewarded process.", "start_timestamp": "00:04:22", "end_timestamp": "00:05:10", "start_second": 262, "end_second": 310, "url": "https://www.youtube.com/watch?v=_X0mgOOSpLU&t=262s", "title": "The power of believing that you can improve | Carol Dweck", "thumbnail": "https://i.ytimg.com/vi/_X0mgOOSpLU/maxresdefault.jpg"} {"video_id": "_X0mgOOSpLU", "text": "And we got more effort, more strategies, more engagement over longer periods of time, and more perseverance when they hit really, really hard problems. Just the words \"yet\" or \"not yet,\" we're finding, give kids greater confidence, give them a path into the future that creates greater persistence. And we can actually change students' mindsets. 
In one study, we taught them that every time they push out of their comfort zone to learn something new and difficult, the neurons in their brain can form new, stronger connections,", "start_timestamp": "00:05:10", "end_timestamp": "00:06:00", "start_second": 310, "end_second": 360, "url": "https://www.youtube.com/watch?v=_X0mgOOSpLU&t=310s", "title": "The power of believing that you can improve | Carol Dweck", "thumbnail": "https://i.ytimg.com/vi/_X0mgOOSpLU/maxresdefault.jpg"} {"video_id": "_X0mgOOSpLU", "text": "and over time, they can get smarter. Look what happened: In this study, students who were not taught this growth mindset continued to show declining grades over this difficult school transition, but those who were taught this lesson showed a sharp rebound in their grades. We have shown this now, this kind of improvement, with thousands and thousands of kids, especially struggling students. So let's talk about equality. In our country, there are groups of students who chronically underperform, for example, children in inner cities,", "start_timestamp": "00:06:00", "end_timestamp": "00:06:50", "start_second": 360, "end_second": 410, "url": "https://www.youtube.com/watch?v=_X0mgOOSpLU&t=360s", "title": "The power of believing that you can improve | Carol Dweck", "thumbnail": "https://i.ytimg.com/vi/_X0mgOOSpLU/maxresdefault.jpg"} {"video_id": "_X0mgOOSpLU", "text": "or children on Native American reservations. And they've done so poorly for so long that many people think it's inevitable. But when educators create growth mindset classrooms steeped in yet, equality happens. And here are just a few examples. In one year, a kindergarten class in Harlem, New York scored in the 95th percentile on the national achievement test. Many of those kids could not hold a pencil when they arrived at school. 
In one year, fourth-grade students in the South Bronx, way behind, became the number one fourth-grade class in the state of New York", "start_timestamp": "00:06:50", "end_timestamp": "00:07:51", "start_second": 410, "end_second": 471, "url": "https://www.youtube.com/watch?v=_X0mgOOSpLU&t=410s", "title": "The power of believing that you can improve | Carol Dweck", "thumbnail": "https://i.ytimg.com/vi/_X0mgOOSpLU/maxresdefault.jpg"} {"video_id": "_X0mgOOSpLU", "text": "on the state math test. In a year, to a year and a half, Native American students in a school on a reservation went from the bottom of their district to the top, and that district included affluent sections of Seattle. So the Native kids outdid the Microsoft kids. This happened because the meaning of effort and difficulty were transformed. Before, effort and difficulty made them feel dumb, made them feel like giving up, but now, effort and difficulty, that's when their neurons are making new connections, stronger connections.", "start_timestamp": "00:07:51", "end_timestamp": "00:08:50", "start_second": 471, "end_second": 530, "url": "https://www.youtube.com/watch?v=_X0mgOOSpLU&t=471s", "title": "The power of believing that you can improve | Carol Dweck", "thumbnail": "https://i.ytimg.com/vi/_X0mgOOSpLU/maxresdefault.jpg"} {"video_id": "_X0mgOOSpLU", "text": "That's when they're getting smarter. I received a letter recently from a 13-year-old boy. He said, \"Dear Professor Dweck, I appreciate that your writing is based on solid scientific research, and that's why I decided to put it into practice. I put more effort into my schoolwork, into my relationship with my family, and into my relationship with kids at school, and I experienced great improvement in all of those areas. 
I now realize I've wasted most of my life.\" Let's not waste any more lives, because once we know", "start_timestamp": "00:08:50", "end_timestamp": "00:09:50", "start_second": 530, "end_second": 590, "url": "https://www.youtube.com/watch?v=_X0mgOOSpLU&t=530s", "title": "The power of believing that you can improve | Carol Dweck", "thumbnail": "https://i.ytimg.com/vi/_X0mgOOSpLU/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "so then let's get started for today welcome to lecture 10 of CS294-158 deep unsupervised learning now this lecture will be on compression before we dive into that a couple of logistical things the main logistical things ahead of you are your project milestone which is a three-page intermediate report that is due on Monday we look forward to reading those and giving you feedback in the days after the deadline to make sure you're maximally on track for your final project the", "start_timestamp": "00:00:00", "end_timestamp": "00:00:40", "start_second": 0, "end_second": 40, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=0s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "other thing that's coming up in two weeks is our midterm which we'll figure out how to do remotely under the current circumstances but the main thing we'll do later this week is release a set of study materials for you that capture the core of the things covered in the class the very core compressed a little bit in terms of how much we're going to have you study because of course it's a more difficult semester than most due to outside circumstances so a relatively short study guide and it'll be a PDF with the questions and the", "start_timestamp": "00:00:40", "end_timestamp": "00:01:15", "start_second": 40, "end_second": 75, "url":
"https://www.youtube.com/watch?v=pPyOlGvWoXA&t=40s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "answers and so you'll know exactly what the questions can be and what the answers are that we expect you to get so that will come out later today or tomorrow for you to study I'll pause here and see if there's any questions about logistics oh and by the way this lecture is recorded so if for some reason you you know don't like your voice to be heard just like with the in-class lectures that were recorded then please be aware of that alright then let's get started with the content for today so compression what is", "start_timestamp": "00:01:15", "end_timestamp": "00:01:55", "start_second": 75, "end_second": 115, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=75s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "it and why would we care in general and why would we care in this class so what is it given data you might want to reduce the number of bits for encoding a message a message could be an image you want to send or a piece of speech or maybe some music you want to send across a communication line and in its original format it might take up a very large number of bits and you might want to be able to get that same information across by sending fewer bits over the communication channel so what does it", "start_timestamp": "00:01:55", "end_timestamp": "00:02:34", "start_second": 115, "end_second": 154, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=115s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA",
"text": "look like you have some bit stream B on the left here so that's what you start out with then what happens next is you want to compress it and end up with a compressed version of that bit stream and the hope is that that compressed version has a lot less bits than the original so when you send the compressed stream over a channel or store it on a hard drive or whatever you want to do with those bits in a more compressed way then it's ideally a lot less bits but then when you want to use it later you should be able to expand it back out undo the", "start_timestamp": "00:02:34", "end_timestamp": "00:03:08", "start_second": 154, "end_second": 188, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=154s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "compression back into the original alright so why do we care well you could save time you could save bandwidth over a communications channel you could save space when you're storing it so many reasons you might care about this from the AI point of view part of why it's interesting for this class is that often the ability to compress data reflects understanding of the data by the system that compressed the data so if a system is really good at compressing data that means that system somehow has absorbed an understanding of", "start_timestamp": "00:03:08", "end_timestamp": "00:03:44", "start_second": 188, "end_second": 224, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=188s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "the data so now there's two types of compression lossy versus lossless compression in this lecture we'll be fully focused on lossless compression where the original bits can be completely
reconstructed on the output now sometimes in practice you might care about lossy compression you say well I don't need all the details back as long as I can save more bits I'm happy to lose some detail that would be lossy compression not the topic for this class but also a topic you might be interested in at some point so I want to make sure", "start_timestamp": "00:03:44", "end_timestamp": "00:04:15", "start_second": 224, "end_second": 255, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=224s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "you know it exists now one of the very interesting things with compression there are some prizes associated with it so recently Hutter actually increased the prize it used to be a 50,000 euro prize for compressing human knowledge and recently it went up by a factor of 10 it's now a five hundred thousand euro prize if you can compress human knowledge what does it mean more concretely so there's a one gigabyte file of I believe text this file here enwik9 and if you can compress that to less than one hundred sixteen megabytes you win the prize you", "start_timestamp": "00:04:15", "end_timestamp": "00:04:57", "start_second": 255, "end_second": 297, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=255s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "won this thing you cracked it the reason Hutter put out this prize is not so much because he specifically wants that one gigabyte compressed into one sixteen megabytes but because he believes that one gigabyte has so much interesting information that any system that can represent it as compactly as one sixteen megabytes must have made hopefully that's what he thinks some AI advances
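The lossless pipeline just described (compress, then decompress back to the exact original bits) can be sketched in a few lines. The lecture does not use code here; this is my own illustration using Python's standard `zlib` module as the compressor:

```python
import zlib

# Minimal sketch of lossless compression: the compressed stream should be
# smaller, and decompression must reproduce the original bits exactly.
original = b"compression " * 1000  # highly repetitive, so very compressible

compressed = zlib.compress(original)
restored = zlib.decompress(compressed)

assert restored == original             # lossless: exact reconstruction
assert len(compressed) < len(original)  # fewer bits sent over the channel
```

Lossy compression, by contrast, would drop the first assertion: the reconstruction only needs to be close enough, not exact.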
to be able to do that it's pretty interesting here because unlike most things we've covered in this class and you'll see in any kind of machine learning there's no train and te", "start_timestamp": "00:04:57", "end_timestamp": "00:05:29", "start_second": 297, "end_second": 329, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=297s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "st split it's not that he asks you to send in a compressor and has a secret test set he's gonna test your compressor on to see how it works no it's literally there's a 1 gigabyte file and if you can make it smaller small enough you win the prize but you've got to be able to decompress it so you gotta be able to effectively send him something that's 116 megabytes or less and includes the code for decoding back into the one gigabyte so you'd be sending effectively both the decoder program and some encoding of this one gigabyte file together that would be able to", "start_timestamp": "00:05:29", "end_timestamp": "00:06:05", "start_second": 329, "end_second": 365, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=329s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "reconstruct the original 1 gigabyte file so very very specific problem there's no test set just that one training example but nobody's gotten close to actually making this work so interesting challenge maybe something you want to think about at some point and see if you can make some progress then there's another compression challenge on images so this is often held at CVPR the main conference for computer vision and so there's a workshop there that looks at how well you can compress and there it's really about a compressor that you send",
"start_timestamp": "00:06:05", "end_timestamp": "00:06:38", "start_second": 365, "end_second": 398, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=365s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "in a compressor of course and they have a secret test set on which they test how well you can compress and decompress the test examples so two very different challenges but both very much at the core of what we're going to be thinking about in today's lecture all right so why cover this in this course it turns out that we've studied a lot of generative models in this course and it also turns out that compression utilizes generative models so the better the generative model the better the compression can be and in fact Jonathan who will cover the second", "start_timestamp": "00:06:38", "end_timestamp": "00:07:19", "start_second": 398, "end_second": 439, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=398s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "half of this lecture has made several breakthroughs in his PhD research showing how some of the state-of-the-art generative models can be converted into compression algorithms with those generative models under the hood such that you can get better compression now you might wonder how and we'll cover that later but so there's a very close connection between better generative models and better compression the material we would recommend for this lecture is this PDF overview a nice write-up that", "start_timestamp": "00:07:19", "end_timestamp": "00:07:55", "start_second": 439, "end_second": 475, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=439s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158
Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "covers the background on essentially the information theory side of compression that we'll be covering in this lecture at least the first half in the second half we'll dive a lot more into the deep learning aspects and how they tie into this so some applications you might have seen generic file compression gzip 7z zip file systems various multimedia formats you might have seen a JPEG file GIF file mp3 mp4 communications that maybe you don't see in use anymore now but where compression played a big role in the", "start_timestamp": "00:07:55", "end_timestamp": "00:08:29", "start_second": 475, "end_second": 509, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=475s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "past fax modem Skype and so forth and all of these are examples where the original information might have been represented with many many bits too large for you to store in that format and because you can reduce them and later get back out the original you can now store it more efficiently or send it more efficiently over a communication line when you send it over a communication line it can reduce both the amount of data you need to send and in the process also reduce the latency because it might be less", "start_timestamp": "00:08:29", "end_timestamp": "00:09:02", "start_second": 509, "end_second": 542, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=509s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "delay assuming you can decode quickly on the other side now maybe you might have followed
this TV show called Silicon Valley it's uh well pretty fun-ish I would say with many things that are maybe a little too close to home and too close to true but still pretty funny and if you watch that show on HBO you might have noticed that the central company Pied Piper what they put forward as their product is well a middle-out compression algorithm nobody knows what middle-out is but they put forward a compression algorithm", "start_timestamp": "00:09:02", "end_timestamp": "00:09:41", "start_second": 542, "end_second": 581, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=542s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "and that's the secret sauce of their company turns out that some people really do this for their actual company so there's various startups out there that don't disclose exactly what's under the hood but invent new compression algorithms using machine learning under the hood most likely to improve upon past state-of-the-art compression now this specific startup actually named itself after the Silicon Valley show where the company's called Pied Piper and this one is called pact pie so there's actually", "start_timestamp": "00:09:41", "end_timestamp": "00:10:14", "start_second": 581, "end_second": 614, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=581s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "a real thing they presented at TechCrunch in 2015 now the first question you might ask is can we have universal data compression that's a fundamental question you'll see in this lecture a lot of the questions we ask will be very fundamental where we can give actually very very strong theoretical
answers sometimes negative answers so can we come up with universal data compression what would that mean that would be can we come up with something that no matter what let's say file you give it it can make it smaller and later decompress it", "start_timestamp": "00:10:14", "end_timestamp": "00:10:51", "start_second": 614, "end_second": 651, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=614s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "back out to the original well is that possible okay let's see imagine you want it to compress every possible bitstream it ever encounters it turns out that's not possible there's no way we can do this what's the intuition it should be simple we'll do a proof by contradiction suppose you have a universal data compression algorithm that can compress every bit stream no matter what you feed it it's gonna make it less bits and then can decompress it back out later to the original okay now given", "start_timestamp": "00:10:51", "end_timestamp": "00:11:33", "start_second": 651, "end_second": 693, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=651s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "a bit string B0 you can compress it to get a smaller bit string B1 with strictly less bits otherwise it's not a universal compressor now B1 you can feed into it again it'll turn that into B2 which is yet smaller you keep doing this you do this sufficiently many times at some point you'll have a bit string of size 0 at that point it's obvious you cannot recover what the original was because it could have been anything everything gets turned into 0 bits so there's no way you can get back out
what went in so what this shows then is assuming", "start_timestamp": "00:11:33", "end_timestamp": "00:12:09", "start_second": 693, "end_second": 729, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=693s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "somebody tells you I have a universal data compressor it can compress everything no problem here's a proof that this is actually not possible there's also another way to prove it which is to do it by counting you can say okay suppose your algorithm can compress all thousand-bit strings okay how many thousand-bit strings are there there are two to the one thousand possible bit strings now if we can compress all of them that means we can take every one of them and turn them into something smaller and distinct otherwise we cannot get the", "start_timestamp": "00:12:09", "end_timestamp": "00:12:47", "start_second": 729, "end_second": 767, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=729s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "original back out but if we look at what's possible with all possible shorter bit strings we actually cannot encode all two to the 1000 possible thousand-bit strings so since we can't encode all two to the 1000 bit strings it means we cannot compress all of them so we have two different proofs here to show that universal data compression is just not possible why is compression possible in practice though even if you cannot universally compress everything well there are statistical patterns that you can exploit for", "start_timestamp": "00:12:47", "end_timestamp": "00:13:26", "start_second": 767, "end_second": 806, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=767s",
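The counting argument above can be checked directly for a small string length: there are 2^n bit strings of length n, but only 2^n - 1 strings of strictly smaller length, so no injective "compress everything" mapping can exist. This is my own sketch, not code from the lecture:

```python
# Counting/pigeonhole argument against universal compression:
# count the strings of length n versus the strings of length < n.
n = 10  # small enough to reason about exhaustively

num_length_n = 2 ** n
num_strictly_shorter = sum(2 ** k for k in range(n))  # lengths 0 .. n-1

# One target short of what an injective compressor would need:
assert num_strictly_shorter == num_length_n - 1
assert num_strictly_shorter < num_length_n  # pigeonhole: some string can't shrink
```

The same inequality holds for every n, which is exactly why at least one input must fail to get smaller.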
"title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "example here's a piece of text and I'll give you all a minute to read this text so as you're reading this text you'll you'll notice well likely you'll notice that there's something fun about the text and that the words are mostly misspelled but despite these words being misspelled it's actually still very feasible to read this and effectively what it says is that most people have no problem reading a piece of text if for every word you keep the first two letters you keep the last two letters but then everything in between you can", "start_timestamp": "00:13:26", "end_timestamp": "00:14:12", "start_second": 806, "end_second": 852, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=806s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "randomly permute which means that the ordering of the letters in between has not much real information in it it could be any ordering so you don't need to stick to the original ordering as here they do this scrambling and we can still understand it so it means there's some redundancy it means that certain sequences are just not very likely and when you read this it's close to a sequence that you're familiar with and so you can easily map it onto that and still understand the words that were there originally there's another example from", "start_timestamp": "00:14:12", "end_timestamp": "00:14:44", "start_second": 852, "end_second": 884, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=852s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text":
"images so on the Left we see a bunch of real-world images of flowers in this case on the right we see random data the data on the left if your dataset looks like that it's very compressible because there are a lot of regularities at an intuitive level for example often neighboring pixels have roughly the same value whereas for the images on the right which are completely random there is no correlation between neighboring pixels that you can exploit to maybe compress how you represent the data on the right so two very different", "start_timestamp": "00:14:44", "end_timestamp": "00:15:20", "start_second": 884, "end_second": 920, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=884s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "distributions for the completely random distribution it's not clear how to compress for the real-world type data you can already intuitively see that there are opportunities to compress for example you could just keep every other pixel it wouldn't be perfectly lossless but we could probably reconstruct most of the image from that alright so what we've covered so far is what is compression what's the goal in compression and why might we care both from a practical point of view and from an AI point of view we also looked at the", "start_timestamp": "00:15:20", "end_timestamp": "00:15:57", "start_second": 920, "end_second": 957, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=920s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "fact that a universal lossless compressor is just not possible we looked at some intuition as there being redundancy in most of the data that we encounter in the real world and because there is redundancy informally
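The regular-versus-random intuition above is easy to demonstrate. This sketch (my own, using Python's `zlib` and a synthetic repetitive byte pattern standing in for a natural image) shows that data with local regularities shrinks dramatically while random bytes barely shrink at all:

```python
import os
import zlib

# Structured data with strong local regularity, versus incompressible noise.
structured = bytes(i % 16 for i in range(100_000))  # repeating 16-byte pattern
random_data = os.urandom(100_000)                   # no pattern to exploit

# The repetitive stream compresses to a tiny fraction of its size...
assert len(zlib.compress(structured)) < 5_000
# ...while random bytes stay essentially the same size (plus small overhead).
assert len(zlib.compress(random_data)) > 90_000
```

This mirrors the flowers-versus-noise comparison: only data with statistical regularities leaves anything for a compressor to exploit.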
speaking there should be a way to exploit that because it's only the data that really occurs in the real world that you need to have good compression for and the data that doesn't really occur in the real world even though it can also in principle be represented as bit strings you might not care much", "start_timestamp": "00:15:57", "end_timestamp": "00:16:24", "start_second": 957, "end_second": 984, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=957s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "about how much that kind of non-real-world data gets compressed so for the remainder of this lecture what we'll want to look at is a couple of things first thing we want to look at is coding of symbols so we'll start looking at okay what does it mean to actually have a compression system and this will culminate with a method called Huffman coding that is actually used in many many of today's systems and it's actually quite intuitive a very simple way to understand how compression can effectively work then we're going to look at", "start_timestamp": "00:16:24", "end_timestamp": "00:16:54", "start_second": 984, "end_second": 1014, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=984s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "some theoretical limits and from there we'll look at some additional considerations for coding that will help us a bit more than what we get from the simplest version we cover first from there we'll tie this into things that we've covered in this class we'll look at autoregressive models we'll look at VAEs we'll look at flow models and try to understand how these models can be leveraged to do better compression all right let's get this
started so here's one way of coding information alright and just to be clear there's", "start_timestamp": "00:16:54", "end_timestamp": "00:17:38", "start_second": 1014, "end_second": 1058, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=1014s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "no compression in this way of coding so ASCII is a system that for everything that's on your keyboard will assign seven bits so every character you can type can be represented with seven bits so two to the seven possible characters could be represented this way what's nice about this it's very easy to encode and decode there's a very simple one-to-one mapping always going from a character to the seven bits for that character and back out to the character but if you encode this way you're not exploiting any statistical patterns it's not compressing your", "start_timestamp": "00:17:38", "end_timestamp": "00:18:15", "start_second": 1058, "end_second": 1095, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=1058s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "information maybe some keystrokes are far less likely than others so maybe the ones that are less likely you could allow to use more bits and the ones that are very likely you should try to represent with a very small number of bits and overall you might have a win that's the intuition behind a lot of compression schemes but obviously here with everything seven bits that's not going to happen but it's at least a reference as a starting point so we'll need variable-length codes codes that assign different lengths depending on how likely a symbol", "start_timestamp": "00:18:15", "end_timestamp": "00:18:47", "start_second": 1095,
"end_second": 1127, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=1095s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "is how do we avoid ambiguity a simple way to avoid ambiguity when you have variable lengths well when it's fixed length it's very easy first seven bits first character next seven bits next character and so forth but if it's variable length how do you know a character has been fully transmitted and now the next one is starting one way to do this is to ensure that no code word is a prefix of another code word so as you see bits come across the line at some point you'll have seen all the bits for some letter let's say and because no", "start_timestamp": "00:18:47", "end_timestamp": "00:19:22", "start_second": 1127, "end_second": 1162, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=1127s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "code word is a prefix of another one at that point you know there's nothing else that can continue from this this is the complete thing sent across and the corresponding character can be decoded another thing you can do but that might consume more bandwidth or space you could also append a stop character to each codeword Morse coding does this but this might be a little wasteful we can have a general prefix-free code and we'll look at that very soon so let's look at Morse first so in Morse code what happens is this", "start_timestamp": "00:19:22", "end_timestamp": "00:20:03", "start_second": 1162, "end_second": 1203, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=1162s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail":
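The prefix-free property just described (no codeword is a prefix of another, so decoding never needs a separator) is easy to check mechanically. A small sketch of my own, with a made-up code table for illustration:

```python
# A code is prefix-free (and thus decodable on the fly, with no stop
# characters) when no codeword is a prefix of any other codeword.
def is_prefix_free(codewords):
    for a in codewords:
        for b in codewords:
            if a != b and b.startswith(a):
                return False  # 'a' is a prefix of 'b': ambiguous mid-stream
    return True

assert is_prefix_free(["0", "10", "110", "111"])  # valid prefix-free code
assert not is_prefix_free(["0", "01", "11"])      # "0" is a prefix of "01"
```

Fixed-length codes like 7-bit ASCII are trivially prefix-free, which is why they were so easy to decode in the first place.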
"https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "very old coding scheme this is from back when you want to send let's say you effectively just have a communication line over which all you could send was let's say voltage going up and back down you can make it go up briefly or go up for longer for three times as long so a dot is a brief spike up in your voltage let's say and then a dash is three times as long and then the spaces in between the dots and dashes are also encoding it's quiet there's one unit of quiet time in between", "start_timestamp": "00:20:03", "end_timestamp": "00:20:39", "start_second": 1203, "end_second": 1239, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=1203s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "and then between characters there will be a total of three units and then between words there will be seven units of quiet time so that way you can encode every character of the alphabet all numbers and all you need to be able to do is send dots and dashes or short and longer signals and pauses to be able to get everything across and people used that back before telephones in the time of the telegraph where they could send information across this way so some of the things you can already see here the letter A has a relatively short encoding", "start_timestamp": "00:20:39", "end_timestamp": "00:21:17", "start_second": 1239, "end_second": 1277, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=1239s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "same for E same for I and that's because those are frequently used letters and then things that are
less frequent like maybe a Z has a longer encoding what else is less frequent there's J longer encoding essentially more or less the letters that you get a lot of points for in Scrabble have the long encodings Q here X over here and the letters that don't give you many points in Scrabble have the shorter encodings because there's many more words that use them okay that's that's a very specific thing the more", "start_timestamp": "00:21:17", "end_timestamp": "00:21:57", "start_second": 1277, "end_second": 1317, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=1277s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "general thing that people tend to use is so-called prefix-free codes which can be represented as binary tries or binary trees so what is a binary trie or tree it's a tree where when things split you hard-code ahead of time that the left will be a zero the right will be a one and so you can build a tree and you don't even have to put the zeros and the ones on it that I'm putting on here because you always know the left side is a zero the right side is a one so it's a specific type of data structure the reason it's spelled with ie is", "start_timestamp": "00:21:57", "end_timestamp": "00:22:36", "start_second": 1317, "end_second": 1356, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=1317s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "because it comes from retrieval it's a data structure for easy retrieval of certain information so that's also why it's often pronounced binary trees because it comes from retrieval at the same time there's also trees spelled the usual way that are also a data structure and so it can be a little confusing if they're
pronounced the same way; some people will still pronounce this as 'try' to distinguish it from trees. So that's a binary trie. The way we're going to use it is that the symbols will always live in the leaves, and a code word is a path from root to leaf", "start_timestamp": "00:22:36", "end_timestamp": "00:23:08", "start_second": 1356, "end_second": 1388, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=1356s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "so let's look at an example. Here's an example: we have a code word table with six characters, and each character has an encoding as a sequence of bits, sometimes only one bit. And you can see that this corresponds exactly to a binary tree in which all the characters are sitting in leaves of the tree. Because every character sits in a leaf of the tree, here is what it means: let's say I'm getting some message across, say I'm receiving this message over here. I receive a zero. What will I do? I will", "start_timestamp": "00:23:08", "end_timestamp": "00:23:54", "start_second": 1388, "end_second": 1434, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=1388s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "go down this path, and I'll say: oh, I hit a leaf, that means I'm at the end, nothing left to go, I decode an A. Then I get this one over here, which means we restart: I go this way, get another one, go this way, get another one, and so on, until I hit a leaf, and then I know I'm ready to decode, and it's a B. So because all the symbols live in the leaves, I always know, when I hit a leaf, which symbol I need to decode, and then I come back to the top to start decoding the rest of the message that's coming in. Now", "start_timestamp": "00:23:54", "end_timestamp": "00:24:30", "start_second": 1434, "end_second": 1470, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=1434s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "you can of course ask yourself the question: for a given set of symbols that you want to send, are there multiple binary trees? And in fact there are: there are many, many trees you could put forward to come up with an encoding for these six symbols. Here is another example, where the tree is set up a little differently, and we see the same string being compressed twice: on the left it requires 30 bits, on the right it requires 29 bits. So the name of the game here is: can we find a binary tree such that, as I try to encode my message", "start_timestamp": "00:24:30", "end_timestamp": "00:25:06", "start_second": 1470, "end_second": 1506, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=1470s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "into a bitstream, as I try to put my original symbol message into a bitstream, that bitstream is as short as possible? A naive way would be to search over all possible binary trees, but there would be many, many, many binary trees, and then decide which one is most efficient: just try all possible binary trees that have the symbols at the leaves, see for each one of them how long the bitstream is, and take the best one. But we'll see better schemes than that: we'll see an efficient method", "start_timestamp": "00:25:06", "end_timestamp": "00:25:38", "start_second": 1506, "end_second": 1538, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=1506s",
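The decoding procedure just described (walk down the binary tree bit by bit, emit a symbol whenever you hit a leaf, then restart at the root) can be sketched in a few lines of Python. The code table below is an illustrative prefix code, not the exact one from the slide.

```python
def build_trie(code):
    """Build a binary trie from {symbol: bitstring}; symbols sit at the leaves."""
    root = {}
    for symbol, bits in code.items():
        node = root
        for b in bits[:-1]:
            node = node.setdefault(b, {})
        node[bits[-1]] = symbol          # last bit points at the leaf/symbol
    return root

def decode(bitstream, code):
    """Walk the trie; on hitting a leaf, emit the symbol and restart at the root."""
    root = build_trie(code)
    node, out = root, []
    for b in bitstream:
        node = node[b]
        if isinstance(node, str):        # hit a leaf: one full symbol decoded
            out.append(node)
            node = root                  # back to the top for the next symbol
    return "".join(out)

# Hypothetical prefix code over four symbols:
code = {"A": "0", "B": "10", "C": "110", "D": "111"}
print(decode("0101100111", code))  # ABCAD
```

Because the symbols live only at leaves, no code word is a prefix of another, which is exactly why the decoder never needs to look ahead.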
"title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "to get very close to that I should we get to the optimal one okay so the efficient method to find the optimal one without needing to do that exhaustive search I just described is something called Huffman codes and right now but we'll cover how Huffman codes work procedurally and then later once we've seen a bit more foundation on information theory we will also prove the fact that they are optimal but for now we're not going to yet prove that there are more you're going to look at the procedure okay so how does it work", "start_timestamp": "00:25:38", "end_timestamp": "00:26:18", "start_second": 1538, "end_second": 1578, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=1538s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "Huffman algorithm said they're very simple consider the probability P I of each symbol I that's in your input so you have maybe a belong text file if you're encoded characters you would for each character do a count and then see what's the probability for each character to appear once you've done that you can start with one node corresponding to each symbol so for each of these symbols you have a node so starts as a disconnected tree just a bunch of separate Leafs really but not really connected up to anything yet and", "start_timestamp": "00:26:18", "end_timestamp": "00:26:52", "start_second": 1578, "end_second": 1612, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=1578s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "you associate with 
it a weight P I which is the probability of that symbol from there you repeat the same process over and over until it's all connected together in a single tree what is this process you selected two trees with min probabilities P kmpl initially when each symbol is its own thing what that means is you find the two symbols with the lowest probability later on once you've done some merges it'll be the trees that at the root have the lowest problem then you merge those two into a single tree with associated probability as the sum", "start_timestamp": "00:26:52", "end_timestamp": "00:27:32", "start_second": 1612, "end_second": 1652, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=1612s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "of the original probabilities and that's it that's all I need to do so let's take a look at an example of how this works on some example data here we have six symbols each symbol has its own probability associate with it and so let's step through what Huffman coding does we have six symbols each your own probability we have a with a probability of zero point two we have B with probability zero point one C per building 0.05 T with probability zero point two one e with probability zero point three six and F with probability", "start_timestamp": "00:27:32", "end_timestamp": "00:28:15", "start_second": 1652, "end_second": 1695, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=1652s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "zero point zero eight let's follow the algorithm what a to lowest probability thinks it's C and F so what do we do we connect them up CNF get connected up and together they have the sum of the probabilities 
which is 0.13. What's lowest in probability now? It's the 0.1 here and the 0.13 over here, so we'll connect those up, and the top here now has probability 0.23. What's lowest now? We have a 0.2 and a 0.21; the 0.21 is somewhat inconveniently", "start_timestamp": "00:28:15", "end_timestamp": "00:28:53", "start_second": 1695, "end_second": 1733, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=1695s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "located, so I'm going to move it over here: D, 0.21, moved off to the side. D and A connect together for 0.41. What are the two lowest now? The 0.23 here and the 0.36 over here; 0.36 is inconveniently located, so I'm going to relocate E over here. All right, then connecting these, together they have 0.59. The two lowest ones are the only two left, 0.41 and 0.59, and here is our Huffman encoding. And then", "start_timestamp": "00:28:53", "end_timestamp": "00:29:45", "start_second": 1733, "end_second": 1785, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=1733s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "what we do is label each left split 0 and each right split 1, and there we go, now we have an encoding. We want to know: what is D? D is 00. What is A? A is 01. What is B? B is 100. What is C? C is 1010. E is 11, and F is 1011. And this is a uniquely decodable code: for every symbol, once the bits have been sent across and you hit a leaf of the decoding tree, you know you've got an entire symbol, and then you start again at the top of the tree to decode the next symbol.
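The merging procedure just walked through can be sketched in Python. This is a minimal illustration, not the lecture's own code: it tracks only the depth of each symbol (which is its code length) and reproduces the lengths from the worked example.

```python
import heapq

def huffman_lengths(probs):
    """Repeatedly merge the two trees with the smallest root probabilities;
    each merge pushes every symbol in the merged trees one level deeper."""
    heap = [(p, i, [(s, 0)]) for i, (s, p) in enumerate(sorted(probs.items()))]
    heapq.heapify(heap)
    tiebreak = len(heap)                  # avoids comparing the symbol lists
    while len(heap) > 1:
        p1, _, t1 = heapq.heappop(heap)   # lowest-probability tree
        p2, _, t2 = heapq.heappop(heap)   # second-lowest tree
        merged = [(s, d + 1) for s, d in t1 + t2]
        heapq.heappush(heap, (p1 + p2, tiebreak, merged))
        tiebreak += 1
    return dict(heap[0][2])               # {symbol: code length}

probs = {"A": 0.20, "B": 0.10, "C": 0.05, "D": 0.21, "E": 0.36, "F": 0.08}
print(huffman_lengths(probs))
# lengths 2, 3, 4, 2, 2, 4 for A..F, matching D=00, A=01, B=100, C=1010, E=11, F=1011
```

The actual bit labels (left = 0, right = 1) are assigned afterwards, exactly as in the example; only the lengths are determined by the merges.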
So we haven't covered why this is optimal, but hopefully the procedure is clear: it is", "start_timestamp": "00:29:45", "end_timestamp": "00:30:41", "start_second": 1785, "end_second": 1841, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=1785s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "a relatively simple procedure that you can do for any symbol table you have, and it relies on these probabilities. And you might already start foreshadowing here: of course, this is where generative models might be handy, because very good generative models might allow us to build good probability estimates that we can then use to find a really good encoding; of course, if these probabilities are wrong, then this tree will not be a very good tree for encoding the data. So here's another example that you can work", "start_timestamp": "00:30:41", "end_timestamp": "00:31:17", "start_second": 1841, "end_second": 1877, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=1841s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "through on your own. And indeed, in many of the applications used on the internet, Huffman codes are used to compress data. All right, so maybe let me pause here for a moment to see if there are any questions; feel free to type them into the chat window or to just speak up or raise your hand. Oh, hi Drew. Yeah, I had a question: something that I noticed about Huffman codes is that the number of symbols or number of values that you have is fixed", "start_timestamp": "00:31:17", "end_timestamp": "00:32:13", "start_second":
1877, "end_second": 1933, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=1877s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "but if you're trying to let's say encode more complex data structure so if you have something like you know maybe maybe images have like a fixed dimensions to audio can be multiple dimensions for example so is there a way other than discretizing or is the notion just to make chunks and then compress them effect sized chunks which take discrete values yeah very good questions so chunking is between an option and then just send over in chunks which by the way often can be desirable for other reasons also even if you had a fixed", "start_timestamp": "00:32:13", "end_timestamp": "00:32:51", "start_second": 1933, "end_second": 1971, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=1933s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "size thing but you wanted this let's say you had a video you wanted to watch on home if somebody first encodes the entire video sensitive crosses one file and only then you can decode and play it's not great you want to be able to stream it across so you have there are reasons to chunk where you're actually for bill optimality of compression but you reduce latency and getting things across we will look at some other codes a little later and it's a really good question that actually fits very well with her describing so the coding systems we'll", "start_timestamp": "00:32:51", "end_timestamp": "00:33:23", "start_second": 1971, "end_second": 2003, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=1971s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", 
"thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "look at later are thematic coding and asymmetric numeral systems are able to encode streams in effectively a continuous way such that if the stream is longer it can keep encoding and it just on-the-fly continues to encode as you go along now in practice often people will still chant and stop at some point because otherwise you might have to wait too long before you can decode sometimes but in principle they can work with arbitrary length and not knowing the length ahead of time so we'll cover that but you're", "start_timestamp": "00:33:23", "end_timestamp": "00:33:57", "start_second": 2003, "end_second": 2037, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=2003s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "absolutely right for Huffman codes do is that strong assumption that you have an alphabet of symbols and you build it encoding for a doubt alphabetical and you don't encode that specific symbols make sense thank you well compression usually be done in terms of bits like the encoding will be the output of the encoder will be like a lookup table and it won't you say yes the way we think of compression and the way it's ultimately done on computers is that only what comes out is the sequence of bits you can think of a single bit as", "start_timestamp": "00:33:57", "end_timestamp": "00:34:39", "start_second": 2037, "end_second": 2079, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=2037s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "the sense of the minimal unit of information like a single bit can either be 0 or 1 and sometimes the minimum 
that you can send across in terms of information is just an outcome, a 0 or a 1: because there are two options, you can send information across; if you have only one option, there's nothing you can do, there's no information to be transmitted. So in fact, as we'll see, the way the amount of information in your system gets measured is by bits: the minimal number of bits required to", "start_timestamp": "00:34:39", "end_timestamp": "00:35:11", "start_second": 2079, "end_second": 2111, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=2079s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "represent the original is the amount of information in that piece of data. One quick thing to add: it may be the case that when you actually transmit over certain lines, which are not, let's say, a computer storing zeros and ones, there are transmission schemes where you send maybe two bits in one go, using potentially something closer to a continuous channel that you then discretize on the other side to get several bits out in one go. That can also happen under the hood, but in terms of the information-theoretic", "start_timestamp": "00:35:11", "end_timestamp": "00:35:49", "start_second": 2111, "end_second": 2149, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=2111s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "properties, we tend to think of it as turning everything into a sequence of bits. All right, great questions, thank you. Let's move to the next part, which is theoretical limits. What we're going to cover here is, to me, some of the most beautiful math any discipline has to offer: somehow what we're going to cover, we can cover in just a few slides, quite comprehensively, and get very deep, profound insights and guarantees across, so I'm very excited about getting to talk about that", "start_timestamp": "00:35:49", "end_timestamp": "00:36:38", "start_second": 2149, "end_second": 2198, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=2149s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "today. So, one thing you might have heard of in the context of information theory is this thing called entropy, due to Shannon, as sort of a measure of information. So what is entropy? By definition (so this is just a mathematical definition, not talking about properties yet) we take the entropy of X. What is X? X is a random variable, and so X really just comes with some distribution p(x); we measure the entropy of the random variable, not of a specific instantiation of that variable. Anyway, the entropy of the distribution, or entropy of the random", "start_timestamp": "00:36:38", "end_timestamp": "00:37:11", "start_second": 2198, "end_second": 2231, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=2198s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "variable as a whole, is by definition a sum over all possible values the random variable can take on: a weighted sum, weighted by the probability of taking on that value, of log2 of 1 over p(x_i); that is, H(X) = sum_i p(x_i) log2(1/p(x_i)). Okay, this might look like it comes a little out of nowhere, but let's get a bit of intuition for why this might be a meaningful way to measure entropy, which is the amount of uncertainty you have in a distribution: if there's a lot of uncertainty about the random variable, you need more bits to send across if we", "start_timestamp": "00:37:11", "end_timestamp": "00:37:49", "start_second": 2231, "end_second": 2269, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=2231s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "hope to tell the other person what the outcome was. So I have a random variable, I run an experiment, I see the outcome of that random variable, and I want to communicate the outcome to you: how many bits do I need to send on average? And this definition doesn't just quantify that; it also hints at an encoding scheme: it says the number of bits to use for an outcome x_i is going to be log2(1/p(x_i)). So let's look at an example distribution. Here's an example distribution: the random variable can take on", "start_timestamp": "00:37:49", "end_timestamp": "00:38:24", "start_second": 2269, "end_second": 2304, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=2269s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "five values, and we'll compute the entropy of this thing. So it's 1/4, 1/4, 1/4, 1/8, 1/8, and we can compute the entropy: the entropy is 2.25. Then let's look at another distribution, much more peaked: the probabilities are three quarters, and then 1/16 for everything else. Compute the entropy: it's about 1.3. So, 1.3 versus 2.25: the entropy is a lot larger on the left than on the right. Why is that? Because if I run an experiment on the random variable on the left and then want to communicate the outcome to you, there are actually many possible outcomes that are all pretty likely.", "start_timestamp": "00:38:24", "end_timestamp": "00:39:03", "start_second": 2304, "end_second": 2343, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=2304s",
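The two entropies just quoted can be checked directly from the definition H(X) = sum_i p(x_i) log2(1/p(x_i)); a quick sketch:

```python
from math import log2

def entropy(probs):
    """H(X) = sum_i p(x_i) * log2(1 / p(x_i))."""
    return sum(p * log2(1 / p) for p in probs if p > 0)

flat   = [1/4, 1/4, 1/4, 1/8, 1/8]       # fairly spread-out distribution
peaked = [3/4, 1/16, 1/16, 1/16, 1/16]   # one dominant outcome
print(round(entropy(flat), 2))    # 2.25
print(round(entropy(peaked), 2))  # 1.31, the ~1.3 from the slide
```

The peaked distribution carries less uncertainty, hence fewer bits per outcome on average.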
"title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "it's not like you can come over to very efficient encoding scheme because you need to encode everything pretty much with some reasonable probability you're gonna have to send it across whereas here on the right what happens is this first outcome over here scroll ugly so if you encode that first outcome with a very small number of bits then most of the time you have to send almost nothing and then yet sometimes you have to send more bits to get the other things across but most of the time is very cheap and so that that's effectively what's going", "start_timestamp": "00:39:03", "end_timestamp": "00:39:34", "start_second": 2343, "end_second": 2374, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=2343s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "on in this equation and for now a building intuition will make this a lot more formal very soon so let's take a look at another example so think back to think back to our binary trees to encode some set of symbols we have symbol a b c and d if these are the probabilities 1/2 1/4 1/8 1/8 then this thing over here is a optimal way of encoding that have the time you sent just one bit for a the other half of the time you got to cover the rest and to say that you're covering the rest you have to send the one then", "start_timestamp": "00:39:34", "end_timestamp": "00:40:20", "start_second": 2374, "end_second": 2420, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=2374s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "in 
the other half of the time half the time you have to send what it say it's be in the other half of the time you have to signal with the one that it's one of the other two and then at the end you decide which one it is when you're down here so what we can this even though we haven't proven this intuitively it should make sense that this is a very good scheme for encoding this kind of distribution over symbols because you can't say anything less than one symbol for a otherwise you have not communicated anything a is the most", "start_timestamp": "00:40:20", "end_timestamp": "00:40:53", "start_second": 2420, "end_second": 2453, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=2420s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "frequent symbol I'll have to do is send one symbol and for being well you know if the first signal it's not a and then you send one more something symbol to communicate it to be similar than for CMD this encoding scheme over here uses a length that is the to log of 1 over P of X I and so you could imagine a world if in the in your world every probability associate with any symbol you have some symbol X I and the probability P of X I is equal to 1 is equal to 1 over 2 to the length of sorry no let's see if ya the paralytics ID can", "start_timestamp": "00:40:53", "end_timestamp": "00:41:33", "start_second": 2453, "end_second": 2493, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=2453s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "be expressed as 2 to the power L I then you can encode using the same scheme that symbol X I with a length L I bitstream into a tree that would be built up the way the tree was built up over here 
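For this dyadic example (probabilities 1/2, 1/4, 1/8, 1/8 with the code A=0, B=10, C=110, D=111), each code length is exactly log2(1/p), and the expected code length comes out equal to the entropy; a quick check:

```python
from math import log2

probs   = {"A": 1/2, "B": 1/4, "C": 1/8, "D": 1/8}
lengths = {"A": 1, "B": 2, "C": 3, "D": 3}   # lengths of A=0, B=10, C=110, D=111
for s, p in probs.items():
    assert lengths[s] == log2(1 / p)         # l_i = log2(1/p_i), exactly
avg_len = sum(p * lengths[s] for s, p in probs.items())
H = sum(p * log2(1 / p) for p in probs.values())
print(avg_len, H)  # 1.75 1.75
```

When every probability is a power of 1/2 the scheme is tight; the generalizations mentioned next handle probabilities that are not powers of 1/2.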
We haven't proven this, but that's kind of the rough intuition; and we'll of course see things that generalize this to symbols where p(x) is not necessarily one over two to some power, where it could be any probability different from a power of 1/2. Okay, so that's some high-level intuition; let's now take a look at some of the theory that we can", "start_timestamp": "00:41:33", "end_timestamp": "00:42:09", "start_second": 2493, "end_second": 2529, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=2493s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "put down. The first main theorem is the Kraft-McMillan inequality. What it says is that for any uniquely decodable code C (okay, so this is: somebody tells you they have a code and it's uniquely decodable; and if it's not uniquely decodable, you can't really use it for lossless compression, so codes do need to be uniquely decodable, otherwise we're not going to consider them for lossless compression), so someone comes up with a uniquely decodable code C. What does that mean? It means a mapping from symbols to bit strings, a bit string corresponding to each symbol. If it's", "start_timestamp": "00:42:09", "end_timestamp": "00:42:52", "start_second": 2529, "end_second": 2572, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=2529s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "indeed uniquely decodable, then this property holds true: sum_i 2^(-l_i) <= 1. What is this saying? For each symbol and its corresponding encoding, the bit word, we can look at the length l_i of the encoding, and there is some property satisfied by these lengths. So if somebody gives you a table of symbols and bit strings, it's going to be A and some bit
string here, then B and some other bit string here, and so forth; if the code is uniquely decodable, then the lengths that you encounter here will satisfy this", "start_timestamp": "00:42:52", "end_timestamp": "00:43:30", "start_second": 2572, "end_second": 2610, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=2572s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "property, which says the sum has to be smaller than one. What does that mean? These are negative powers here, so it's effectively saying that the lengths have to be large enough: they are always going to be of at least a certain length, otherwise this would not be satisfied. So, summing up, this thing is saying: if someone has a uniquely decodable code, I can guarantee you that the encodings have to be relatively long; they cannot be shorter than a certain amount, because otherwise they would not satisfy this property. It", "start_timestamp": "00:43:30", "end_timestamp": "00:44:05", "start_second": 2610, "end_second": 2645, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=2610s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "actually also holds in the opposite direction. The opposite direction says: if you have a set of lengths l_i that satisfy that same inequality, then there is a code you can build, in fact a prefix code, which is very convenient to deal with, which is uniquely decodable with these same lengths. So it's a back-and-forth kind of mapping: if something is uniquely decodable, this is satisfied; and if lengths satisfy this, you can build a uniquely decodable code, in fact a prefix tree, that allows you to encode symbols with these word lengths. What does", "start_timestamp": "00:44:05", "end_timestamp": "00:44:44", "start_second": 2645, "end_second": 2684, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=2645s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "this mean? It means that since the property holds true for any uniquely decodable code, someone can give you a uniquely decodable code, this property will be true; and when this property is true, there is also a prefix code with the same lengths. So it means that we never need to resort to anything but prefix codes: if somebody says they have a very clever scheme to make the bitstream uniquely decodable, where you might have to look ahead and look at many places to decode, you can say: no need, I can use the same encoding lengths and build a prefix", "start_timestamp": "00:44:44", "end_timestamp": "00:45:22", "start_second": 2684, "end_second": 2722, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=2684s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "code that will have the same efficiency as your other uniquely decodable code, which would be more troublesome to decode. So we're going to restrict attention to prefix codes. All right, so what's under the hood here? Let me give you a quick proof sketch. One direction: for any prefix code C (and that's kind of a subset of what's on the previous slide, since prefix codes are uniquely decodable), this inequality is satisfied. What's the sketch? Order all the lengths of your code words.", "start_timestamp": "00:45:22", "end_timestamp": "00:46:06", "start_second": 2722, "end_second": 2766, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=2722s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning",
"thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "lengths favorite prefix code who a previous code we can build a tree look at a tree we can look at the tree will actually end initially over here at those red dots because that's where the code words are but we expand the tree to be of equal depth everywhere so even though maybe your symbol a would be encoded here you continue because you want to make it all equal death then after you've done that you can do a simple count you can say each code word for example this one over here how many leaves are covered by it well the whole", "start_timestamp": "00:46:06", "end_timestamp": "00:46:45", "start_second": 2766, "end_second": 2805, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=2766s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "trees of depth 4 so the whole thing is depth for it is at depth - so what's under here is due to the four bonds to leaf nodes under here we have 2 to the 4 minus 1 as 8 leaf nodes living here and so forth so every Congress there's any leave of the expanded tree since it's a pretty code let me know overlapped it's a clean tree so the total number of leaves will be in this case 2 to the 4 or in general 2 to the L + 4 n Ln the maximum length code that you're considering so the opposite equality here that the number of leaves", "start_timestamp": "00:46:45", "end_timestamp": "00:47:35", "start_second": 2805, "end_second": 2855, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=2805s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "covered is smaller than the total number of leaves you could have in a tree you just divide both 
sides by 2 to the L n and you get this thing over here so not too hard to prove the details of the proof don't matter too much but it can be done in one slide that's the first part how about the second part the second part says for any set of lengths if this is satisfied then we can build a prefix code tree with those lengths how is this done you consider a full tree of depth L n which is the longest length and for each i you pick any node of depth l i", "start_timestamp": "00:47:35", "end_timestamp": "00:48:19", "start_second": 2855, "end_second": 2899, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=2855s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "still available so you go down the tree to depth l i and ask is anything still available okay I pick this one once you pick that you consider everything below it used up so at this point nothing below there is available anymore this will consume 2 to the L n minus l i leaves of the expanded tree and as you count up how many leaves you're going to cover in this process it's going to be this many on the left here we're told that the condition on top holds true which means that this is smaller than 2 to the L n which means that", "start_timestamp": "00:48:19", "end_timestamp": "00:48:59", "start_second": 2899, "end_second": 2939, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=2899s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "we can fit this inside a tree so we are able to fit all the code words inside a tree okay so two quick proofs we don't need to know them going forward but I wanted to get across that these are actually relatively simple to prove a consequence
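The two directions of the Kraft-McMillan argument just sketched can be checked in code: verify the inequality for a set of lengths, and when it holds, assign codewords greedily the way the proof picks still-available nodes of the expanded tree. This is our own illustrative sketch, not the lecture's code; the function names are ours.

```python
from fractions import Fraction

def kraft_sum(lengths):
    """Sum of 2^(-l_i) over all codeword lengths, in exact arithmetic."""
    return sum(Fraction(1, 2 ** l) for l in lengths)

def prefix_code_from_lengths(lengths):
    """Build a prefix code for lengths satisfying the Kraft inequality.

    Mirrors the proof: process lengths in sorted order, give each codeword
    the next free node at its depth, then mark its whole subtree as used
    (here: by advancing a binary counter at the current depth).
    """
    assert kraft_sum(lengths) <= 1, "Kraft inequality violated"
    code, next_free, depth = [], 0, 0
    for l in sorted(lengths):
        next_free <<= (l - depth)          # descend to depth l
        depth = l
        code.append(format(next_free, f"0{l}b"))
        next_free += 1                     # consume this node's subtree
    return code

def is_prefix_free(code):
    return not any(a != b and b.startswith(a) for a in code for b in code)
```

For example, lengths (1, 2, 3, 3) have Kraft sum exactly 1 and yield the codewords 0, 10, 110, 111.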
from this is probably something you've heard many many times and that we will now be able to prove very easily for any message distribution P of X that is some distribution over symbols and any associated uniquely decodable code C the average encoding length so", "start_timestamp": "00:48:59", "end_timestamp": "00:49:37", "start_second": 2939, "end_second": 2977, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=2939s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "the expected length of your code when we encode a symbol will always be at least the entropy of the distribution that's Shannon's theorem from 1948 entropy is a lower bound on how many bits you need to encode symbols coming from a certain distribution so let's step through the key things to get there this is just what we're starting from the difference between entropy and expected code length entropy is this thing here for the expected code length you look at all possible symbols look at the length and take the weighted sum we have p x i", "start_timestamp": "00:49:37", "end_timestamp": "00:50:15", "start_second": 2977, "end_second": 3015, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=2977s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "here P of X I over here we can bring this together and we have this over here then to bring it closer together we're going to say well L I equals log of 2 to the L I we have a difference of two logs which we can bring together the things behind the logs get multiplied together or divided by each other when there's a negative sign which is why the negative exponent appears here then what is this thing over here what are we doing we're replacing let me expand it we're
essentially replacing it by bringing this thing over here", "start_timestamp": "00:50:15", "end_timestamp": "00:50:57", "start_second": 3015, "end_second": 3057, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3015s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "turning the expected value of the log of something into the log of the expected value that's Jensen's inequality we've seen that in variational auto-encoders and you see it in many many places in machine learning we just apply Jensen's inequality the expected value of the log is smaller than the log of the expected value so the expected value of the log is over here whereas the log of the expected value is above so we have just an inequality applied here how about the next step this is going to be", "start_timestamp": "00:50:57", "end_timestamp": "00:51:36", "start_second": 3057, "end_second": 3096, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3057s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "the Kraft-McMillan inequality it says if we have a uniquely decodable code then this thing over here has to be smaller than or equal to one and then log of one is zero and we're done so to prove Shannon's theorem all we needed is Jensen and the Kraft-McMillan inequality and we're good to go we have the full proof let me maybe pause here since this is a pretty big result and see if there are any questions all right so at this point we've proven that for any uniquely decodable code anybody can come up with with certain lengths for the code words you can use a prefix code", "start_timestamp": "00:51:36", "end_timestamp": "00:52:30", "start_second": 3096, "end_second": 3150, "url":
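The bound just proved, that entropy lower-bounds the expected length of any uniquely decodable code, is easy to check numerically. A small sketch of our own, with an illustrative distribution and prefix-code lengths:

```python
import math

def entropy(p):
    """H(X) = sum_i p_i * log2(1/p_i), in bits."""
    return sum(pi * math.log2(1 / pi) for pi in p if pi > 0)

def expected_length(p, lengths):
    """Expected codeword length E[l] = sum_i p_i * l_i."""
    return sum(pi * li for pi, li in zip(p, lengths))

# A prefix code over 4 symbols with lengths 1, 2, 3, 3 (Kraft sum = 1).
p = [0.5, 0.25, 0.125, 0.125]
assert entropy(p) <= expected_length(p, [1, 2, 3, 3])  # Shannon's bound
```

For this dyadic distribution the bound is tight: both sides equal 1.75 bits.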
"https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3096s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "if you want to so that makes it very convenient and also that it's never going to be better than the entropy and expected encoding length for she might have next n as well how close can we get to entropy can we find the code that achieves that achieves H of X or it close to it because if we can and we know we're doing optimal okay so here's one way to think of it expected code length would be entropy if we take the lengths of all of them exactly this thing over here on the inside the coding is log to 1 over P of", "start_timestamp": "00:52:30", "end_timestamp": "00:53:08", "start_second": 3150, "end_second": 3188, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3150s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "X I we're good to go now I'm practice I might not be a natural number so you might have to round it up to the nearest natural number to actually make it a bit sequence so this is essentially n2 between Shannon coding so how about we proposed this we're going to try to encode with this thing over here the first question you should have is that even possible is this a valid set of lengths or would this be lengths that actually will not correspond to a curve well I'm gonna think craft of billing allows us to check for a given set of", "start_timestamp": "00:53:08", "end_timestamp": "00:53:46", "start_second": 3188, "end_second": 3226, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3188s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": 
"https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "lengths is there a code that corresponds to it so let's check Kim can we find a code that matches up with this well the this thing over here is the thing that we have on the Left sent hand side into credibility and equality and we want to hopefully prove that this is smaller than one so but trying to prove it's smaller than one we have to make the steps to get there if we can prove this then it means that code exists and we're good to go we can actually means looking to enter the coding so this thing is equal to the code lengths are given by", "start_timestamp": "00:53:46", "end_timestamp": "00:54:25", "start_second": 3226, "end_second": 3265, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3226s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "this quantity over here so just then this is more than or equal is because we're running up here and we're getting rid of the rounding up but the rounding up is happening in a negative exponent so by getting rid of the rounding up we end up with something bigger then this thing is easy to simplify to to the log to of something is just something that's what we have here now some of the probabilities is equal to one and we are good to go we have them 2 to the minus L I sum over I saw you go to 1 which we know from", "start_timestamp": "00:54:25", "end_timestamp": "00:55:04", "start_second": 3265, "end_second": 3304, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3265s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "Kraft McMillan implies exist a prefix code dad worked with the links so we now know that we can do entropy 
coding this would be an alternative scheme to Huffman coding the encoding procedure here would be you look at the probabilities of all your symbols then you assign the lengths and then you still have to find code words that match up with them but assuming you can run some search or some other algorithm to find those code words you know they exist so you just need to find them and then you're good to go", "start_timestamp": "00:55:04", "end_timestamp": "00:55:43", "start_second": 3304, "end_second": 3343, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3304s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "how good is this well there's a little derivation we can do showing that this is very close to achieving entropy so look at this over here the expected length is the weighted sum of the lengths fill in the lengths then what is this thing over here well this length over here is rounded up so it could go up by one relative to the real number that's on the inside that's the one plus over here once you have that and simplify it the one comes out of the sum over all p x i and then in the back here we have entropy and so we have 1 plus entropy so", "start_timestamp": "00:55:43", "end_timestamp": "00:56:25", "start_second": 3343, "end_second": 3385, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3343s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "the expected length is at most entropy plus 1 so this is pretty good we have discovered that not only is entropy the best you can do in terms of the expected number of bits but also you can directly use the ceiling of log 2 of 1 / p x i as the designated code lengths and if you do that you're only one away on
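The Shannon-coding recipe just described, set l_i to the ceiling of log2(1/p_i), confirm the Kraft inequality holds, and conclude the expected length is below H + 1, can be checked directly. A sketch under our own naming:

```python
import math

def shannon_lengths(p):
    """l_i = ceil(log2(1/p_i)); valid lengths by the Kraft-McMillan inequality."""
    return [math.ceil(math.log2(1 / pi)) for pi in p]

def check_shannon_code(p):
    """Verify Kraft holds for the rounded lengths and H <= E[l] < H + 1."""
    lengths = shannon_lengths(p)
    kraft = sum(2.0 ** -l for l in lengths)
    H = sum(pi * math.log2(1 / pi) for pi in p)
    avg = sum(pi * li for pi, li in zip(p, lengths))
    return kraft <= 1 and H <= avg < H + 1

assert check_shannon_code([0.4, 0.3, 0.2, 0.1])
```

For a dyadic distribution like (1/2, 1/4, 1/8, 1/8) no rounding occurs and the expected length equals the entropy exactly.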
average from the optimal encoding now the one thing we haven't covered yet in this whole scheme is how do you find that encoding we now know that we could do entropy coding and we know that it will be close to optimal", "start_timestamp": "00:56:25", "end_timestamp": "00:57:05", "start_second": 3385, "end_second": 3425, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3385s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "but running a massive search over a combinatorial space might not be that practical it turns out Huffman codes can achieve the same optimality and we'll show that now by induction on the number of symbols in our code book so the number of symbols n by induction meaning that in the proof we'll assume that if we had to encode only n minus 1 symbols and we used the Huffman encoding scheme we would end up with an optimal prefix code for those n minus 1 symbols and now I'm going to show that under that assumption it's also true for n and of", "start_timestamp": "00:57:05", "end_timestamp": "00:57:43", "start_second": 3425, "end_second": 3463, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3425s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "course with only two symbols or one symbol wherever you want to start it's clear Huffman codes are optimal so we're good to go okay this is actually a little intricate but it's not too long Huffman coding always looks at the two lowest probability symbols so we'll start there we look at the two lowest probability symbols X and Y there are always going to be two lowest probability symbols maybe there is a tie but that's fine you arbitrarily break ties let X and Y be the two lowest probability symbols in your original code book
optimal prefix codes", "start_timestamp": "00:57:43", "end_timestamp": "00:58:20", "start_second": 3463, "end_second": 3500, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3463s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "will have two leaves in the lowest level branch why is that say you have a prefix code with a symbol here maybe a symbol here some more symbols here in the lowest level branch which is this one over here there are two leaves in higher levels that's not always true here that's not true here that's not true but at the lowest level it's always true why is this always true imagine you didn't have two symbols left anymore you only had one symbol say you didn't have this one here what would you do you would actually get rid of the last split here you put C up here and now", "start_timestamp": "00:58:20", "end_timestamp": "00:58:59", "start_second": 3500, "end_second": 3539, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3500s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "it's going to be true again if there are not two symbols at the bottom but only one you can always shorten the code a bit more okay so at the bottom there are always going to be two leaves at that lowest branch then without loss of generality we can assume that symbols x and y have the same parent they don't have to immediately but imagine your tree looks like this it could be that x is here y is here and there's a z here and a w here so they don't have the same parent but because they're the lowest probability symbols they will always sit at the lowest level", "start_timestamp": "00:58:59", "end_timestamp": "00:59:38", "start_second": 3539, "end_second": 3578, "url":
"https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3539s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "and at the lowest level you can interchange where they live and you can always make x and y appear together and put W over here and so there now have the same errand it's effectively the same code you just move things around at the bottom so x and y have the same current then every optimal prefix 3 will have x + y together at the lowest level with the same parent so that's what we're out now the steps we've made allow us to include this line over here no matter the tree structure the additional cost of having x and y rather than just", "start_timestamp": "00:59:38", "end_timestamp": "01:00:22", "start_second": 3578, "end_second": 3622, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3578s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "a simple parent symbol Xena needed sending that with my n minus 1 symbols now this M resembled x and y the extra cost will be px plus py y is down the number of times you have to go down that extra level in a tree to reach x and y is P of X plus P of y if you only have to go to the parent of x + y you wouldn't have to go that extra level and whenever you have to go to extra level it cost you one extra bit it happens P of X plus P of y fraction of the time now the end symbol Huffman code tree adds this minimal cost to the", "start_timestamp": "01:00:22", "end_timestamp": "01:00:58", "start_second": 3622, "end_second": 3658, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3622s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": 
"https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "optimal n minus 1 symbol Huffman code tree which is optimal by induction so this here it's a final part approve it's saying no matter what tree you build you'll always pay a prize of P of X plus P of Y when you need to split on x and y you can't just get away with apparent Z that's unavoidable the Hoffman code tree will have them appear that way together so the Hoffman code tree is incurring the minimum possible cost for being a n symbol tree prison minus one symbol tree it's having that minimal cost to when", "start_timestamp": "01:00:58", "end_timestamp": "01:01:51", "start_second": 3658, "end_second": 3711, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3658s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "it's built so far which is a 10-1 symbol tree which we know is optimal by induction and we're good to go alright so click recap of everything we covered entropy is the expected encoding length when encoding each symbol with this length and so there's the equation for entropy atmosphere for 1948 so that every data source P of X is an order 0 Markov model which means there's no dependencies between symbols like that you are accounted for or able to come for then a compression scheme that independent codes each symbol in your", "start_timestamp": "01:01:51", "end_timestamp": "01:02:31", "start_second": 3711, "end_second": 3751, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3711s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "sequence must use at least entropy bits per symbol an average Huffman code is able to do that with an overhead about most one how 
do we know that because entropy coding has an overhead of at most one and we proved that Huffman codes are optimal so given that entropy coding has an overhead of at most one Huffman codes provide a constructive way of achieving something that also has an overhead of at most one beyond the entropy cost any questions I have a question so for the competition you mentioned in the", "start_timestamp": "01:02:31", "end_timestamp": "01:03:25", "start_second": 3751, "end_second": 3805, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3751s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "beginning the 500,000 euro competition if we just take the file and compute the entropy of that file that's provided that will provide like the minimum number of bits right can we just compute that to see if it'll be like 116 megabytes like would that give a lower bound on what can be achieved so yeah that's a really good question and what you're getting at is in some sense exactly this thing over here so far we've assumed an order 0 Markov model so what that", "start_timestamp": "01:03:25", "end_timestamp": "01:04:07", "start_second": 3805, "end_second": 3847, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3805s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "assumes is let's say the file is a sequence of symbols let's say there's only 26 letters and nothing else in that file of course there are other symbols too but you could just look at the frequencies of each of those letters you could then look at the entropy and say okay this is the entropy and now if I want to compress
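The bottom-up merging that the induction argument describes, repeatedly joining the two lowest-probability symbols under one parent, is exactly how Huffman's algorithm is implemented. A minimal sketch of our own using Python's heapq; the insertion counter used to break ties is an implementation detail, not from the lecture:

```python
import heapq

def huffman_code(probs):
    """Build a Huffman code for a dict {symbol: probability}.

    Repeatedly merges the two lowest-probability nodes, mirroring the
    optimality proof: the two rarest symbols end up as siblings at the
    lowest level of the tree.
    """
    heap = [(p, i, {sym: ""}) for i, (sym, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)  # lowest probability node
        p2, _, c2 = heapq.heappop(heap)  # second lowest
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (p1 + p2, count, merged))
        count += 1
    return heap[0][2]
```

On the dyadic distribution (1/2, 1/4, 1/8, 1/8) this recovers lengths (1, 2, 3, 3), matching the entropy exactly.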
this by giving each of my 26 letters a bit sequence as its encoding what's the best I can possibly do I can actually find that number and you'll find that it is going to be more than that", "start_timestamp": "01:04:07", "end_timestamp": "01:04:41", "start_second": 3847, "end_second": 3881, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3847s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "116 megabytes because otherwise somebody would have long done it the reason there is hope to be able to do better and we'll get into how to do this soon is that in reality the letters in that file are not independent when you see the first three letters you might have an easy time predicting the fourth letter because there are only so many reasonable completions of that word once you've already seen the first three letters and so then the calculus becomes a little different we'll get to that in a moment", "start_timestamp": "01:04:41", "end_timestamp": "01:05:11", "start_second": 3881, "end_second": 3911, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3881s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "and that's where things get complicated then all of a sudden it's not as simple as counting frequencies of each of the symbols you really need effectively a generative model that can predict the next symbol from previous symbols and you start measuring the entropy under that and then the question is how good a model you can build and yes if you can build the world's best generative model to predict the next character in that sequence and you look at the entropy of that then you might have a lower bound", "start_timestamp": "01:05:11",
"end_timestamp": "01:05:40", "start_second": 3911, "end_second": 3940, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3911s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "roughly speaking on I mean - I think of a few details to be sure it's exactly true but they don't give you a pretty good estimate of what the optimal encoding might be and we'll look at a few examples soon like three slides from now we'll get to a few more things that touch upon exactly what you're asking about really good question other questions okay let's move on then so a couple of coding considerations we want to look at here what happens when your frequent accounts or maybe some more complicated estimate the distribution over symbols", "start_timestamp": "01:05:40", "end_timestamp": "01:06:36", "start_second": 3940, "end_second": 3996, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3940s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "is not precise you have an estimate P hat but really the tradition is P what's gonna happen with your performance of the compression scheme higher order models we pick the next symbol from previous Emal's how can that help you and what about that plus one didn't ask innocent as it seems or is actually very bad sometimes and what can we do about it so the expected code length when using p hat to construct the code is going to be expected code length but in reality our expectations with p is the way we encounter symbols is governed by", "start_timestamp": "01:06:36", "end_timestamp": "01:07:14", "start_second": 3996, "end_second": 4034, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3996s", "title": "L10 Compression -- UC Berkeley, Spring 
2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "p the probability of your symbol is p i but the code length we assign is based on p hat i so then this is our expected code length let's ignore the need to round up to natural numbers for the encoding so a simple calculation we add and subtract the same quantity now the quantity over here in the front we recognize as a KL divergence and the thing in the back we recognize as entropy so we see the expected code length when we use a distribution estimate P hat is going to be the entropy plus something we know is always nonnegative any", "start_timestamp": "01:07:14", "end_timestamp": "01:07:59", "start_second": 4034, "end_second": 4079, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=4034s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "encoding is going to cost you at least entropy maybe more it's going to cost you an additional KL divergence between P and P hat so the price you pay is a KL divergence we know that the log likelihood objective when we learn a generative model effectively comes down to minimizing the KL divergence between the data distribution and the model that you learn so when we're maximizing log likelihood we're minimizing this KL divergence effectively trying to find a distribution that will incur minimal overhead if we use it for an encoding", "start_timestamp": "01:07:59", "end_timestamp": "01:08:31", "start_second": 4079, "end_second": 4111, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=4079s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "to encode our data note there are two ways you
can prove KL is nonnegative we can prove it because we know every encoding has to cost at least entropy which we already derived which means that this thing is nonnegative because we know that already or you can prove it from first principles using Jensen's inequality which is shown at the bottom here so we also will pay a price corresponding to the KL divergence so the better our generative model is the better our encoding scheme can be and so when we think about encoding with generative models there are", "start_timestamp": "01:08:31", "end_timestamp": "01:09:04", "start_second": 4111, "end_second": 4144, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=4111s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "really two things going on you want to somehow figure out a good encoding scheme but the other part is you want to do really well at this part over here which is maximum likelihood estimation because that's going to help ensure your encoding scheme is actually good on the data now what if P of X is high entropy if P of X is high entropy that would give a very long code length which you might not like you might be able to decrease the entropy by considering conditional entropies if you condition X on context let's say what has come before", "start_timestamp": "01:09:04", "end_timestamp": "01:09:41", "start_second": 4144, "end_second": 4181, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=4144s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "then you may be able to reduce the entropy in fact it's easy to prove that the conditional entropy of X given context C is always smaller than or equal to the unconditional entropy H of X in fact autoregressive models do exactly this
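The decomposition being derived here, that the expected code length under a misspecified model p-hat equals H(p) + KL(p || p-hat), can be verified numerically. A small sketch of our own, ignoring the rounding to integer lengths just as the lecture does:

```python
import math

def entropy(p):
    """H(p) = sum_i p_i * log2(1/p_i)."""
    return sum(pi * math.log2(1 / pi) for pi in p if pi > 0)

def kl(p, q):
    """KL(p || q) = sum_i p_i * log2(p_i / q_i); the overhead of coding with q."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def cross_entropy(p, q):
    """E_p[log2 1/q]: avg code length when lengths come from q but data from p."""
    return sum(pi * math.log2(1 / qi) for pi, qi in zip(p, q) if pi > 0)

p_true = [0.7, 0.2, 0.1]   # true distribution p
p_hat  = [0.5, 0.3, 0.2]   # model estimate p-hat
assert abs(cross_entropy(p_true, p_hat) - (entropy(p_true) + kl(p_true, p_hat))) < 1e-12
```

Minimizing this cross-entropy over p-hat is exactly the maximum likelihood objective the lecture connects it to.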
in an autoregressive model you predict the next symbol based on everything you've seen so far and often the next symbol or the next pixel is going to be a lot easier to predict so a lot lower entropy than just independently predicting each pixel and so going back to the pricing that we were talking", "start_timestamp": "01:09:41", "end_timestamp": "01:10:14", "start_second": 4181, "end_second": 4214, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=4181s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "about effectively this is saying that if you don't encode each symbol independently but you train a conditional distribution then you should not do worse and likely you should do better than when each symbol is encoded independently and separately all right how about the +1 it might seem pretty innocent entropy is optimal and we pay entropy +1 why worry about a price of 1 let's look at an example where it might actually be pretty bad and it's not going to be
in fact we send us a lot of time each thing is gonna cost us at least one because sending a bit sending anything across", "start_timestamp": "01:10:53", "end_timestamp": "01:11:37", "start_second": 4253, "end_second": 4297, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=4253s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "will be at least one well I should pay a price that's pretty high so here's the optimal code for this we could use just year over 0.9 and then 1 1 1 0 expected code length will be 1.1 so we're going to pay a price here that's actually pretty big that almost twice the length of the code compared to what entropy cover what entropy predicts has the lower bound so this first one gets expensive you send their long sequence of symbols essentially sent twice the sequence length compared to what in principle you wish you could be getting", "start_timestamp": "01:11:37", "end_timestamp": "01:12:14", "start_second": 4297, "end_second": 4334, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=4297s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "how can we get around this let's take a moment to think about that anybody any suggestions could you use larger trunks can you use larger chunks exactly why would you care about larger chunks the reason your this price is expensive the plus one is expensive is because when you only have three symbols where you send one symbol you still need use at least one bit but one symbol doesn't have much information in it in this case very little information it is deceiving the first symbol if we send multiple symbols in", "start_timestamp": "01:12:14", "end_timestamp": "01:13:05", "start_second": 4334, 
"end_second": 4385, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=4334s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "one go let's say we turn this into a distribution over we have three symbols ABC chosen distribution over there could be a a a there could be a a B there could be a a C and so forth now we have 3 to the 3 is 27 possible combined symbols that we're trying to send not which friend this will work out a lot more nicely and the overhead will become a lot less than if when we try to send just one symbol of time so let's take a look at this in action one way people do this in actually sending faxes well Tom there keep you have used faxes but essentially", "start_timestamp": "01:13:05", "end_timestamp": "01:13:51", "start_second": 4385, "end_second": 4431, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=4385s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "before there was emailed with something called faxes where you could send documents over a phone line and the way it was encoded was by Senshi he's sending he naively would be uttered is fix less wide or block and as you step through the entire page naively you have to send one bit per pixel white or black very expensive because usually actually it's going to be a lot of light in a row or a lot of blacks in a row so you can instead encode it as the number of whites then the number of blocks number of lights it's called a thin coating and", "start_timestamp": "01:13:51", "end_timestamp": "01:14:24", "start_second": 4431, "end_second": 4464, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=4431s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised 
Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "that's what they came up with so what are your symbols now your symbols are I run off I run off let's say one one I run off two whites and run up three wise one of four wives Sam for black you list out old possible run wings that you might want to care about encoding and then you can look at the probabilities of each of those run lines and then build a Huffman code and then you get the encoding so you're going to be using and you get a very compressed representation of that page that you're trying to send across even then also has", "start_timestamp": "01:14:24", "end_timestamp": "01:15:03", "start_second": 4464, "end_second": 4503, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=4464s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "a question about the English language how much entropy is there in English language and people have done this experiment so question here is was the entropy conditional entropy of X let's say X a position and given X 0 through X and minus 1 how predictable is the next character so Shannon ran his experiment and he concluded that the English language is only one bit per character so if you train a conditional model that predicts the next care could give it everything before you can get an entropy of 1 bit how do you even figure that out", "start_timestamp": "01:15:03", "end_timestamp": "01:15:37", "start_second": 4503, "end_second": 4537, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=4503s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "the way you did it is you actually ask people to do completions so you 
You show someone the beginning of some text and ask them to predict the next character. The person guesses, and Shannon would say right or wrong; if right, you're done, if wrong, you guess again. 79% of the time people get it correct on the first guess, 8% of the time it takes two guesses, 3% of the time three guesses, and so forth. Every time you're told whether your guess was right or wrong, effectively one bit of information is communicated about the underlying character. If you take the weighted sum, it lands at roughly 1, which means you need about one bit of information per character on average. By the way, compression schemes have not reached that yet — no automatic text compression scheme has gotten to that bound — but things are getting closer and closer over time.

Looking at practical schemes: if you use a fixed 7-bit encoding, you spend seven bits per character — 2⁷ is 128, so 128 possible characters. If you use entropy coding of individual characters, you need about 4.5 bits per character. That's the bound the entropy gives; you can't achieve it exactly because you have to round to a whole number of bits, so a Huffman code, which is optimal, achieves about 4.7. If you look at the entropy of groups of eight characters and take the average entropy per character, you land at about 2.4, and asymptotically this goes to about 1.3. So instead of encoding one character at a time you could encode eight at a time, and a Huffman code over those blocks would achieve something probably slightly above 2.4 bits per character.

I propose we take a five-minute break here, and when we restart we'll look at how some of these ideas tie into the generative models we've been studying. Alright, let's restart — any questions before we go on? Then let's look at how we can combine autoregressive models, which we covered in one of the first weeks of this class, with coding. The key motivation is that we want a flexible scheme that groups multiple symbols, to avoid the potential +1 overhead on every symbol, without deciding ahead of time how long the sequence is going to be — we want to encode on the fly. The question is: how many symbols, and which symbols, do we group? In a naive system that's what you'd have to decide.
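Here is a sketch of the block-coding effect on a toy skewed source (my own illustration — a standard Huffman construction, with a hypothetical two-symbol source rather than English text):

```python
import heapq, itertools, math

def huffman_lengths(probs):
    """Code lengths of a Huffman code for a {symbol: prob} dict
    (the standard merge-two-smallest construction)."""
    counter = itertools.count()
    heap = [(p, next(counter), (s,)) for s, p in probs.items()]
    heapq.heapify(heap)
    lengths = {s: 0 for s in probs}
    while len(heap) > 1:
        p1, _, syms1 = heapq.heappop(heap)
        p2, _, syms2 = heapq.heappop(heap)
        for s in syms1 + syms2:
            lengths[s] += 1          # each merge adds one bit to its members
        heapq.heappush(heap, (p1 + p2, next(counter), syms1 + syms2))
    return lengths

# Skewed binary source; Huffman-coding blocks of k symbols amortizes the
# per-symbol overhead, just like coding groups of eight characters.
p = {"a": 0.9, "b": 0.1}
rate = {}
for k in (1, 2, 4):
    blocks = {"".join(c): math.prod(p[s] for s in c)
              for c in itertools.product(p, repeat=k)}
    L = huffman_lengths(blocks)
    rate[k] = sum(blocks[b] * L[b] for b in blocks) / k
    print(k, round(rate[k], 3))      # approaches H(X) ≈ 0.469 as k grows
```

With k = 1 we pay a full bit per symbol; larger blocks push the rate down toward the entropy.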
You'd have to decide: am I grouping in threes, or tens, or whatever, and which symbols go together. We're going to do something quite different: we won't decide how many symbols or which symbols to group. Instead we'll encode every possible symbol sequence by mapping it into a distribution — I'll show an example very soon. This works on the fly and is extremely compatible with autoregressive models.

Let's take a look at an example. We have an alphabet with two symbols, a and b, with P(a) = 0.8 and P(b) = 0.2. If we encode individually, we have to send, say, a 0 for a and a 1 for b, and there's a lot of overhead, because a costs just as much as b even though it's far more likely — there should be a way to make it cheaper. The second most naive thing is what we talked about earlier: decide ahead of time what the groups are — three a's, two a's and a b, and so forth. We're going to do something quite different. Say the sequence a a b a comes in. First we encode the first symbol, a. We have a distribution available to us that models the probability of the symbols: 80% chance it's a, 20% chance it's b. So we map the fact that we got an a to the interval [0, 0.8): think of all the possible random events that could happen in the world as points on the interval from 0 to 1; the event that actually happened lies in the first 80% of it. When the next a comes in, we take the interval we're now working with, [0, 0.8), and say: it's again an a, so it must have fallen within the first 80% of that new interval. Then it's a b, which means we fall in the last 20% of that interval. Then it's an a again, which means the first 80%. What we end up with is that the string a a b a gets mapped to a very specific subinterval of [0, 1), and by construction that interval is unique for every string.
"https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "string will have a unique interval he end up in and so that a different sequence we end up with a different interval the idea behind arithmetic code is that what we're going to communicate is the interval so way to communicate the this thing over here now we still have to decide how we're going to commute but that's the idea and you don't need to know I had I'm how long your bit string is or your symbol string is going to be because this interval one boom maps to whatever simple sequence you receive so just need to encode this and", "start_timestamp": "01:27:17", "end_timestamp": "01:27:55", "start_second": 5237, "end_second": 5275, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5237s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "we're good to go if there were more symbols coming in if there was another B after this they would have split this thing again I would have had the small end of all was another a after this with him split this up a bit more and your the smaller interval and so forth so one-to-one mapping between simple sequences of arbitrary length and intervals okay how do we code an interval let's start with a naive attempt naive attempt of encoding interval so K represent each interval by selecting the number within the interval", "start_timestamp": "01:27:55", "end_timestamp": "01:28:31", "start_second": 5275, "end_second": 5311, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5275s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "which is diffused bits and binary fractional notation and you set up the code so for example if we had these 
intervals we could resent those with point zero one for the first interval point one for the second one and point one one for the third one because those are binary numbers that fall into each of those respective intervals it's not too hard to show you for interval size s so the width of the interval asked we have the most negative log two of us bits rounded up to represent such a number which is great because that means", "start_timestamp": "01:28:31", "end_timestamp": "01:29:09", "start_second": 5311, "end_second": 5349, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5311s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "that because the width here s s is really probability of the civil sequence so that's achieving entropy coding up to the rounding out the problem here is done these codes are not a set of prefix codes for example we have one here that we would send for a second symbol but after we receive one we wouldn't know did they send us a second symbol or was it the third symbol so the second symbol sent twice or the third symbol sent once there's no disambiguation and so this scheme while we might seem reasonable at first and it's efficient it actually", "start_timestamp": "01:29:09", "end_timestamp": "01:29:47", "start_second": 5349, "end_second": 5387, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5349s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "does allow you to decompress correctly so what else can we do we have each binary number correspond to the interval of all possible completions so for the above example when we say point zero zero it means the interval from zero to 0.25 will say point one zero zero it means the interval from 
0.5 to 0.625 will see a point 1 1 that means interval from point 7 5 to 1 so we're gonna we're gonna want it to be the case that remember on the previous page any simple sequence that I want to send will result in an interval we want to send we're", "start_timestamp": "01:29:47", "end_timestamp": "01:30:26", "start_second": 5387, "end_second": 5426, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5387s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "gonna find a bit sequence such that when you look at the corresponding interval that we map to it which is given here by example that entire interval should fall inside end of all we're trying to encode leaving no I'm bagheera T which in the ball it belongs to and will not be a prefixed or anything else to work out the details of this it turns out you get an overhead of to possibly instead of 1 but that's actually pretty good because when we do this there's kind of arithmetic coding we can code arbitrary many symbols so the overhead of plus two", "start_timestamp": "01:30:26", "end_timestamp": "01:31:08", "start_second": 5426, "end_second": 5468, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5426s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "is only in count incurred once for the entire sequence instead of incurred for every symbol we send the Cross so it's a one-time overhead for the entire sequence that we encode this way obviously we'd like to avoid the plus two but it's not that bad any remaining challenges well sometimes when you file this scheme what'll happen is that the interval that you're finding as you go to your a be a be a and so forth sequence and you start from that interval from zero 
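A small search makes the prefix-free interval code concrete (my own sketch; the interval for "aaba" is the one from the running example):

```python
import math

def code_for_interval(low, high):
    """Shortest bit string whose completion interval [k/2^n, (k+1)/2^n)
    lies entirely inside [low, high) — the prefix-free code above."""
    n = 1
    while True:
        k = math.ceil(low * 2**n)        # leftmost dyadic point >= low
        if (k + 1) / 2**n <= high:       # whole completion interval fits
            return format(k, f"0{n}b")
        n += 1

# Interval for "aaba" from the example; width s = 0.1024,
# so we expect at most ceil(log2(1/s)) + 1 = 5 bits here (bounded by +2).
print(code_for_interval(0.512, 0.6144))  # '10001'
```

Because the emitted interval sits wholly inside the target, no other codeword can be a prefix of it, which is what restores decodability.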
to one you might find that you know at some point you have", "start_timestamp": "01:31:08", "end_timestamp": "01:31:44", "start_second": 5468, "end_second": 5504, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5468s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "this interval like this but zero point five is here just marking this next thing you realize oh you're actually here next thing you realize they be you're here the next thing that the way it works out is that you always end up with that interval that's centered around 0.5 if the case you're never able to send that first bit till your entire sequence is complete and so the solution to that is to even though in principle to minimize the number of bits you need to send you need to go to the end of your simple sequence and code the whole thing and", "start_timestamp": "01:31:44", "end_timestamp": "01:32:19", "start_second": 5504, "end_second": 5539, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5504s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "then send all your bits if you want to minimize latency not wait till the end of the whole thing before you can send anything at all you'll split into smaller blocks such tab he keeps traveling 0.5 it's something to say okay I'm done there's a bigger block I'm sending it across another thing down this scheme as I described it assumes this infinite precision it assumes that you can actually compute these intervals precisely and this interval becomes always small small over time and so you could imagine that you're starting under", "start_timestamp": "01:32:19", "end_timestamp": "01:32:49", "start_second": 5539, "end_second": 5569, "url": 
"https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5539s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "flow if you just do standard floating-point calculations to compute those intervals and then of course you would start losing information because the floating point system couldn't like encode the information you need to encode there is a solution to that you can actually convert this all into a scheme where you're only computed with integers and the blow-up compression survey then I linked at the one of the very first slides explains how I can turn this into an integer implementation rather than relying on real numbers now that we know how to", "start_timestamp": "01:32:49", "end_timestamp": "01:33:23", "start_second": 5569, "end_second": 5603, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5569s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "encode let's think about how all the rest of models can play well in this so far we said we have P of a P of B but actually no need in this entire scheme that we described at the distribution used for P of X one has to be the same decision as we use for it P of X 2 for X 3 and X 4 we can instead use conditionals that are more precise and more predictive of the next symbol and some lower entropy and a more effective encoding scheme and so this arithmetic coding scheme is perfectly compatible in all aggressive models you can just work", "start_timestamp": "01:33:23", "end_timestamp": "01:33:59", "start_second": 5603, "end_second": 5639, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5603s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": 
"https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "your way let's say put a pixel after pixel or get the distribution for next pixel next pixel next pixel and encode with arithmetic encoding accordingly working your way through an image the better the log probability the further compression will be so better likelihood off your recive model will mean better compression of that data and these two schemes couldn't be any more compatible perfectly lined up predict one symbol at a time and encoded one at a time and keep going so let me pause you're gonna see our questions about arithmetic", "start_timestamp": "01:33:59", "end_timestamp": "01:34:37", "start_second": 5639, "end_second": 5677, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5639s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "coding and then we'll switch to a very different kind of coding scheme okay now I'll switch to think about how I can use a variational auto encoder something called bits back coding and a symmetric numeral systems to encode information this at least to me is one of the most was one of the most mind-boggling thinks how does this even possible they're confusing at first but I hope that you know - the way we laid it out in the slice at all again be clear how does exactly works but there's this notion somehow you get bits back and hence you", "start_timestamp": "01:34:37", "end_timestamp": "01:35:28", "start_second": 5677, "end_second": 5728, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5677s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "send bits but it's actually not as expensive you thought it was because he got bits back 
and we'll make it more precise soon the references for this part of lecture are listed here with the initial bits back paper by Freund Hinton from 97 actually that's the same one the first one is here which was on using it in the context of man description length for the way to know I'm at work but then start looking at source coding then there was this paper here the bits back ans paper so law to refer it do it that way bits back 10s was the session let me", "start_timestamp": "01:35:28", "end_timestamp": "01:36:10", "start_second": 5728, "end_second": 5770, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5728s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "let me restart the slide for a moment so first first thing that happened with bits back is the thing at the bottom here this wasn't a comics of pure vision learning was not in the comics of coding next thing that happened was in the comics of Ashley making this practical as a coding scheme this idea and but this used arithmetic coding it turns out that the scheme we're going to look at is not very compatible with arithmetic coding unlike autographs of models which are almost designed to do arithmetic coding the when you have a via it's not very", "start_timestamp": "01:36:10", "end_timestamp": "01:36:52", "start_second": 5770, "end_second": 5812, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5770s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "compatible with that not in the same way so this result into lots of overhead lots of extra bits you need to be communicated chunking has to happen a lot when you usually don't want to chunk as you lose efficiency this isn't 97 then in 2019 this 
beautiful paper came out by concentrated barber who's sure they can do this with NS rather than automatic coding so the underlying information theoretic scheme used in their approaches NS rather than arithmetic coding we haven't covered in us yet but higher-level thing is that I", "start_timestamp": "01:36:52", "end_timestamp": "01:37:30", "start_second": 5812, "end_second": 5850, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5812s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "think the coding looks at your data as a stream you go literally through it ans doesn't including that acts more like a stack and putting things popping things from stack is the way things get encoded and that matches much better with the ideas we're going to step through here and next is actually possibly practical in fact the NS used in many place but physically here very well matched with BAE type coding schemes then in our work Berkeley Jonathan let a lot of this work today with Freesat kima and myself we", "start_timestamp": "01:37:30", "end_timestamp": "01:38:03", "start_second": 5850, "end_second": 5883, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5850s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "looked at essentially this paper here and made it more made it more efficient in by looking at hierarchical latent variable random just single it in variable auto-encoders all of this builds on this ANS TIG invented by chaired Duda in 2007 which is using many coding schemes but is very just encase a lot of information here was invented in authorities in the 50s 1940s 1950s right Shannon's theorem 1948 Hoffman code 1952 ans third aqua coding scheme today invented in 
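We'll define ANS properly later, but its stack-like behavior can already be shown with a toy range-ANS sketch (my own illustration, not from the slides — hypothetical frequencies 3 and 1 over a denominator of 4, with Python's unbounded integers standing in for the renormalization a real codec performs):

```python
# Toy range-ANS (rANS): state x is a single integer that absorbs symbols.
freqs = {"a": 3, "b": 1}
starts = {"a": 0, "b": 3}     # cumulative starts; denominator M = 4
M = 4

def encode(symbols, x=1):
    for s in symbols:          # pushes onto the state like a stack
        x = (x // freqs[s]) * M + starts[s] + x % freqs[s]
    return x

def decode(x, n):
    out = []
    for _ in range(n):         # pops symbols in reverse order (LIFO)
        r = x % M
        s = "a" if r < starts["b"] else "b"
        x = freqs[s] * (x // M) + r - starts[s]
        out.append(s)
    return x, "".join(reversed(out))

x = encode("aabab")
print(decode(x, 5))            # (1, 'aabab')
```

Likely symbols barely grow the state while unlikely ones grow it a lot, and the decoder recovers symbols last-in-first-out — the stack behavior that makes ANS mesh with bits-back coding.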
That's remarkable: so much of information theory was invented in the 1940s and 1950s — Shannon's theorem in 1948, the Huffman code in 1952 — and yet ANS arrived in 2007, at a time when nobody thought you could still invent something this groundbreaking that would be this widely used in compression. And yet there it is.

A quick refresher on what we've covered. Entropy coding assigns a length of log₂(1/P(xᵢ)) to the encoding of symbol xᵢ, and entropy is a lower bound on expected code length — Shannon's theorem says we can't do better. Huffman's scheme gets you to within entropy + 1. Arithmetic coding encodes arbitrarily many symbols in one go and pays a +2, but that +2 is for the entire symbol sequence, not per symbol, so it's more efficient than running Huffman on each symbol separately.

There were some key assumptions, though. We assumed a model P(x) for which we can do the following. For Huffman, we have to enumerate all x to build the tree — but you can't enumerate all possible images, so you can't build a Huffman tree over images. Arithmetic coding gets around that: you only need to be able to assign probabilities to the next symbol in your sequence. But even that tends to require a relatively small number of symbol values; if your symbol can take on infinitely many values, it's not clear how to do arithmetic coding. So what if x is continuous? That turns out to be quite fixable — we'll look at it on the next slide. And what if x is high-dimensional? That's the main challenge we'll be looking at. The observation is that some high-dimensional distributions still allow for convenient coding, and what we want to do is leverage that to efficiently code mixture models of these easy high-dimensional distributions. The key result from this part of the lecture: as long as the single, non-mixture model can be encoded efficiently, there is a scheme that also lets us efficiently encode data coming from the mixture model. And of course mixture models are often a lot more expressive than their individual components.
"https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "their individual components which means that we can now have a coding scheme that is designed around a much more expressive distribution class that you can fit to your data oh that seems to be out of order okay well so a real number X has infinite information so we cannot really expect to send a real number across a line in a finite amount of time because the infant that's keeps going forever new information every bit so what we're gonna do we're gonna assume if we have to be able to continuous variables X that we can discretize and", "start_timestamp": "01:41:04", "end_timestamp": "01:41:44", "start_second": 6064, "end_second": 6104, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6064s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "that we're happy with the discretization so this guy's up to some position T you can discretize in two ways imagine you have a Gaussian excellents on the horizontal axis is a Gaussian distribution you can discretize on the x axis or alternatively and often more convenient you can discretize in the cumulative distribution so it still acts the cumulative distribution will run something like this and then this goes from 0 to 1 you can discretize here first of all lets you deal with the notion a just relies on X what were you gonna do with", "start_timestamp": "01:41:44", "end_timestamp": "01:42:33", "start_second": 6104, "end_second": 6153, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6104s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "the tails you probably make one again you make one big interval to go to infinity but still 
somewhat inconvenient. Maybe also, if you discretize on X, well, this piece has a lot of probability mass and this one doesn't have much probability mass. If instead you discretize based on the cumulative, this is just saying every interval has the same probability mass; that's how I'm going to discretize. If you're located here, you go here; from there, then here; for this interval you go here, for this interval you go there; it's not perfectly drawn, but you", "start_timestamp": "01:42:33", "end_timestamp": "01:43:07", "start_second": 6153, "end_second": 6187, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6153s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "get the interval here, an interval here, an interval there, and so forth. So that's a way you can discretize continuous variables with equal probability mass, versus doing it directly on the x axis. We can look at something called the discretized variable X, discretized with interval width t, and then, so this is the discretized version of it, okay, look at the entropy of that variable, which will be the probability of being in interval i, which is t, the width, times the height P of X i, and so this is an approximation of an integral, and then", "start_timestamp": "01:43:07", "end_timestamp": "01:43:49", "start_second": 6187, "end_second": 6229, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6187s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "the value of the function here. Okay, now when we work this out, the log of the product is the sum of the logs, and then we see that this looks like an approximation of an integral, so we say okay, it's almost the same as the integral, and what we get here is what is called the differential entropy, and
then this is some factor that ties into the discretization level. So this means we can actually use the differential entropy: if we have a functional representation of our distribution and we can compute the integral for it, we can", "start_timestamp": "01:43:49", "end_timestamp": "01:44:32", "start_second": 6229, "end_second": 6272, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6229s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "understand what the differential entropy is, and then the log of our discretization level will determine the overall entropy that would go into representing it as a discrete variable. Okay, so that's a bit of background on how to deal with entropy of continuous variables; the extra cost will be determined by our discretization. Now let's go to the actual challenge that we wanted to solve; we'll mostly think about discrete variables now, but it also works for continuous ones if you choose. So, key assumption: some high-dimensional", "start_timestamp": "01:44:32", "end_timestamp": "01:45:25", "start_second": 6272, "end_second": 6325, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6272s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "P of X that allows for easy coding exists. Examples: when X is Gaussian you can treat it as independent random variables along each axis, and each random variable you can encode efficiently as we've said on the previous slide; or maybe for X we can use an autoregressive model, and we know how to do autoregressive encoding with arithmetic schemes and so forth. These are examples of high dimensional situations where we can encode things efficiently; there might be more, but for
now let's just go", "start_timestamp": "01:45:25", "end_timestamp": "01:46:03", "start_second": 6325, "end_second": 6363, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6325s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "mostly think about this one over here. Mixture models allow a much wider range of distributions than their components; for example a mixture of Gaussians is much richer than a single Gaussian. A single Gaussian, all it can do is look like this, but a mixture model could have many of these bumps mixed together, and then the overall thing would look something like that, which is a much more complex representation that you can capture with this five component mixture but not with a single Gaussian. The key question we want to answer is: if P", "start_timestamp": "01:46:03", "end_timestamp": "01:46:43", "start_second": 6363, "end_second": 6403, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6363s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "of X is a mixture model of easily encodable distributions, does that mean we can also efficiently encode P of X? Here we'll look at 1-D illustrations to get the point across, because that's easy to draw on slides, but keep in mind that we're covering a method that generalizes to higher dimensions. If all you want to do is encode 1-D variables you can use many, many methods; it's not about 1-D, that's just a way to draw things on the slide. Also, we will not allow ourselves to rely on the domain of X being small, because if the", "start_timestamp": "01:46:43", "end_timestamp": "01:47:23", "start_second": 6403, "end_second": 6443, "url":
"https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6403s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "main effects is being small we could rely on that we can do many other things so we imagined higher dimensional X take on many many values but somehow an efficiently incurring a single component and mixture that we're using to represent P of X ok let's see what we can do now our running example is going to be a mixture model P of X has a weighted sum this is choosing the mode there's different modes index by I there is a distribution of x given I so supposed to I think is down maybe when we sample X we first sample a mode and", "start_timestamp": "01:47:23", "end_timestamp": "01:48:05", "start_second": 6443, "end_second": 6485, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6443s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "once you sample the mode from sample mode one this is the distribution we sample mode to be this is a distribution we sample mode three maybe this is a distribution sample mode or maybe this is a distribution and so forth assumptions of each of these modes themselves easy to encode East to encode means that we have a scheme that will give us close to this because that's what a good encoding would do it would cost you a number of is equal to log 1 over P of x given I is that's one week more than oh it's mode I our", "start_timestamp": "01:48:05", "end_timestamp": "01:48:44", "start_second": 6485, "end_second": 6524, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6485s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": 
"https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "distribution is P of X given mine ok first scheme we might consider is max mode encoding so max mode encoding what do we do we say ok we have a mixture distribution in this method - code X well we don't know how to correctly from P of X but we know how to code from P of x given AI so what if we could get D on that was used to generate X then we can encode X there efficiently so we find an item maximize B of AI given X so imagine we're back to this mixture model thing our X falls here then we might sit home this mode over here is the one and this", "start_timestamp": "01:48:44", "end_timestamp": "01:49:34", "start_second": 6524, "end_second": 6574, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6524s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "is mug one two three and say okay mine is three that's the most likely mode to have generated this X but of course if we know how to encode x given I we still to send I across otherwise the other person cannot decode with that scheme because I don't know of what we're coding relative to so if the semi which will cost us log of one over P of I then we have to send X which will cost as log of one over P of x given I and so the expected third length shown on the right here is well there's an expectation for possible X's we need to send when we", "start_timestamp": "01:49:34", "end_timestamp": "01:50:12", "start_second": 6574, "end_second": 6612, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6574s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "send an X we look at the I that minimizes we're coming here is both I and X 
so we're really paying log 1 over P of i comma X, where we're willing to choose our i, we get to choose, and we're picking the one that minimizes that quantity. Another way to write it is the second equation here, same thing. Okay, so the scheme is straightforward and we know how much it's going to cost us. Is it optimal? It's not optimal, because effectively we're using a different distribution Q of X, which will have a cost H of X plus the KL between P and Q. What do I mean with that? When we use", "start_timestamp": "01:50:12", "end_timestamp": "01:50:56", "start_second": 6612, "end_second": 6656, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6612s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "this encoding scheme, imagine we have two modes; this is P, and you see P drawn over here with those two modes. When we use the scheme above, effectively what we're doing is fitting the distribution Q to our original distribution and encoding based on Q, because everything that falls on this side will use mode one and everything that falls on that side will use mode two, and this is not the same as P, it's different, and we'll pay the price: we'll pay the KL between the two in extra bits. Now you might say, do we care? Do I", "start_timestamp": "01:50:56", "end_timestamp": "01:51:41", "start_second": 6656, "end_second": 6701, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6656s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "care about paying this KL divergence? For the densities in this drawing here, yeah, you probably care, it's a pretty big KL. If your distribution was such that your modes are completely separated from each other, well, then the KL between P and Q will be almost
zero and you might not care. Let's think about what we often care about in our scenarios, which is that we might have a variational autoencoder with a latent code, latent variable Z. So instead of the i it would be Z, and that Z can take on a continuum of values, so there'll be a continuum of modes, and if", "start_timestamp": "01:51:41", "end_timestamp": "01:52:15", "start_second": 6701, "end_second": 6735, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6701s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "I pick only one of them instead of somehow using the continuum, we're losing a lot, because, since it's a continuum, they're all going to be very close together, and so we are going to lose a lot by using Q instead of P in this situation. So we have a scheme, we can do coding, but we're paying a price. The question is, can we somehow get it done without paying that KL? Well, let's think about it some more. So we looked at max mode; what if we do posterior sampling? In posterior sampling we say, well, we still have the same situation as before,", "start_timestamp": "01:52:15", "end_timestamp": "01:52:53", "start_second": 6735, "end_second": 6773, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6735s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "but the change is that instead of taking the i that maximizes P of i given X, remember, before it was the maximizing i, here we sample i. That might not sound smart at first, and in fact when we're done with this slide you'll see that the coding scheme we're covering on this slide is worse than the one we covered on the previous slide, but in the process of covering this scheme we'll build up some new concepts that allow us, on the next slide, to get a scheme better
than the previous one, this one. So bear with me for a moment here. So we sample i from P of i given X", "start_timestamp": "01:52:53", "end_timestamp": "01:53:34", "start_second": 6773, "end_second": 6814, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6773s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "we send i, same cost as before, using encoding based on the prior P of i. Why not P of i given X, you might say, isn't that more peaked, can't we just send using P of i given X? Well, the recipient doesn't have X, so they cannot decode against i given X; they have nothing else when we send i as the first thing. So they have to decode it based on the prior, and hence we have to encode it based on the prior. Then we send X using the same encoding scheme as before. This is reasonably efficient, but not necessarily as efficient as using the best i. Remember, imagine we have", "start_timestamp": "01:53:34", "end_timestamp": "01:54:13", "start_second": 6814, "end_second": 6853, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6814s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "these distributions here and let's say our X landed over here; let's say there's mode 1, mode 2, and we're unlucky, and when we sample i from P of i given X we end up with i equal to 2 somehow. Well, encoding X from P of x given i equal 2 is going to be very expensive, because there's a low probability here; that code is not going to be very efficient at getting X across. So it makes it less efficient than what's on the previous slide; in fact the difference is that here we have log 1 over P of i comma X, whereas in the previous", "start_timestamp": "01:54:13", "end_timestamp": "01:54:53", "start_second": 6853,
"end_second": 6893, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6853s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "one we had a min over I sitting in front of it ok so we lost some things here but it's all for a good reason so now what we're going to now be able to do is earn bits back which is the key concept we want to get to so it's an optimal yes and no it's yes it's optimal if we like to send I and X but we don't care about sending I we just want to send X is something we made up X is the real thing is the symbol I is just a mode then you know we have an ordered distribution we're fitting so optimal the same both but it's a waste to send I and so how", "start_timestamp": "01:54:53", "end_timestamp": "01:55:41", "start_second": 6893, "end_second": 6941, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6893s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "much the bottle is well it's about more according to the entropy are are given acts effectively because that's where we send that's that's wasted so what can we do what can we do to avoid this overhead the very interesting idea that the base back idea is that somehow we we send too many bits what we can earn them back and so the higher level that's what's gonna happen I say all things gonna happen we say we acknowledge we said too much we're gonna somehow earn them back and not have to pay for them so let's take a", "start_timestamp": "01:55:41", "end_timestamp": "01:56:21", "start_second": 6941, "end_second": 6981, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6941s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": 
"https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "look at that legs back coding solves a scheme on the previous slide we sample I from PR given X the cost descendant is log of 1 over PI then we send X cost is log 1 over P of X given are all the same as in the previous line base back idea with exact difference between approximate inference later what will it do the recipient decodes I and X but knows the distribution for a given X because they have the corresponding model on their side so what that means is that there see piant actually can recover the random seed that you used to", "start_timestamp": "01:56:21", "end_timestamp": "01:57:08", "start_second": 6981, "end_second": 7028, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6981s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "sample i from p i given X they can go to the reverse profit you do the sampling here you use random seed what is it random seed that is really use sequence of random bits that was used since the recipient knows the distribution knows I know sex they can back out the sequence of random bits that caused you to sample I so can reconstruct the random bits use a sample PI given x those findings were also sent those are log 1 over PI given X random bits which we now don't have to count what do I mean with that imagine", "start_timestamp": "01:57:08", "end_timestamp": "01:57:56", "start_second": 7028, "end_second": 7076, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7028s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "you're trying to send X and somehow we have a friend is also trying to send Brandon bits you can take your friends 
random bits, use them for this sampling, send them across through this process, and they'll be able to be decoded on the other side; and those are your friend's bits, so you don't have to pay the price for them. They're their bits, they happen to come out on the other side, that's their cost to pay. So one way to think of it: all you have to pay for is x given i, and that's it. And we'll make that more concrete; it works even if", "start_timestamp": "01:57:56", "end_timestamp": "01:58:29", "start_second": 7076, "end_second": 7109, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7076s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "they're your own bits. So, bits-back coding cost: you pay a cost of log 1 over P of i to send i, then a cost of log 1 over P of x given i to send x given i, and then you earn bits back, because those bits are actually a bunch of random bits that were sitting there and were sent across, but they're not yours, so you don't have to pay the price for them. And if you do the math you actually have log 1 over P of X, so you get to encode the X you want to send at the entropy rate for X. So we've got optimal encoding; great, we're optimal. Now what does it look like?", "start_timestamp": "01:58:29", "end_timestamp": "01:59:10", "start_second": 7109, "end_second": 7150, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7109s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "You have some symbol data, which is what you want to send, and there's auxiliary data, a random bit sequence. The sender will do lossless compression through the scheme we just described; the receiver will get back out the symbol data and also get back out the auxiliary data, and because you get them back out on this
side, you don't count them against your budget for encoding. Assumptions we make: we can compute P of i given X, which can be a strong assumption, being able to find that posterior distribution in your mixture model; it's a distribution that you don't", "start_timestamp": "01:59:10", "end_timestamp": "01:59:51", "start_second": 7150, "end_second": 7191, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7150s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "always have readily available. And then the assumption that you have auxiliary random data you'd like to transmit, so when we send it across we don't have to pay a price for it; somebody else carries that cost. So what if we do this with approximate inference? In a VAE we don't find the exact posterior for Z given X; we have an inference network, here Q of i given X, and we sample from Q of i given X; otherwise everything is the same, we go through the whole process. What happens is that what we get back is log 1 over Q of i given X, and", "start_timestamp": "01:59:51", "end_timestamp": "02:00:29", "start_second": 7191, "end_second": 7229, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7191s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "what we see here is that the cost of transmitting data is a little higher than log 1 over P of X, because effectively we have the wrong distribution here: we have Q instead of P. This is the evidence lower bound, as applies with the VAE. So if you use a VAE to do bits-back coding, then by optimizing the loss of the VAE you're directly optimizing the compression capability of this bits-back coding approach; so, a perfect match between the VAE objective and compression. So how about that source of random
bits that we would also like to send", "start_timestamp": "02:00:29", "end_timestamp": "02:01:11", "start_second": 7229, "end_second": 7271, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7229s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "where does that come from? In practice it's actually your own bits. So imagine you already have some bits sitting here, some zeros and ones; you know, maybe you've already done some compression of something else; it's a random sequence, sitting there ready to be transmitted. Then the first thing you have to do, and the notation here is slightly different, y corresponds to our i and s corresponds to our X, okay, so keep that in mind, that's the notation they use in this paper here from which we took the figure, so in", "start_timestamp": "02:01:11", "end_timestamp": "02:01:48", "start_second": 7271, "end_second": 7308, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7271s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "decoding the mode y, we do it with the inference distribution of y given the symbol s0; to do that we need to grab random bits to do that sampling, which means we consume these random bits from our string that we want to send across. The next thing that happens is we start encoding: we encode s0, remember s is our X, so our symbol given the mode gets encoded, and this grows the number of bits we want to send; then we encode the mode from its prior, and this grows again. And so what happened here is that in the process", "start_timestamp": "02:01:48", "end_timestamp": "02:02:34", "start_second": 7308, "end_second": 7354, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7308s", "title": "L10 Compression -- UC
Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "of coding one symbol, we have first consumed some bits from the stack of things to be sent, then we've added more bits to encode s given y, the symbol given the mode, and added more bits to code the mode itself. Overall this thing will have grown; typically, not guaranteed, but typically it will have grown. And now we can repeat this process: what we had here as the extra information is now sitting here, we can get our next symbol s1, we'll find what our y1 is, and repeat. And so we see that what actually happens is we are", "start_timestamp": "02:02:34", "end_timestamp": "02:03:14", "start_second": 7354, "end_second": 7394, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7354s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "building up and popping a kind of stack, pushing through the stack the sequence of bits that encodes a sequence of symbols with this mixture model under bits-back coding. So what we really see is that the bits we're getting back are not necessarily bits sitting off to the side; they're bits that came onto our stack from encoding the previous symbol that we encoded this way. And you might wonder, well, if we took them off here but put other things on, have we lost the ability to get those bits back? No, that's the whole idea: in the decoding, as we saw", "start_timestamp": "02:03:14", "end_timestamp": "02:03:53", "start_second": 7394, "end_second": 7433, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7394s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "on the previous, one, two
slides back, sorry: when we decode, we can reconstruct the random bits that were used to sample the mode given the symbol, and so we get them back out at that time. So we still get everything on the other side; this is not lost, it will be decoded, and bits earned back. All right, so the last thing I want to cover, and then I'm going to hand it off to Jonathan, and maybe we'll take a very short break and then hand it off to Jonathan, is how we actually get those bits back. I've been telling you you're", "start_timestamp": "02:03:53", "end_timestamp": "02:04:40", "start_second": 7433, "end_second": 7480, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7433s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "going to get these bits back: you're going to have sampled your mode from Q of i given X and then later you're going to get them back; how does this work? So let's say I have an X and I'm going to draw the distribution of Q of i given X; it's going to be discrete, for what I'm doing here it's going to be discrete, and so I'm going to look at the cumulative distribution. So let's say i lives here, and i could be maybe one, two, three, or four. For the cumulative distribution we'll say, okay, maybe one has a probability of, let's say, 0.2 or", "start_timestamp": "02:04:40", "end_timestamp": "02:05:29", "start_second": 7480, "end_second": 7529, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7480s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "something; then once I hit two, maybe two has a probability of zero point one, so we hit level zero point three over here; and three might have a probability of maybe 0.5, taking us up to zero point eight; and then four has the probability of zero
point two, all the way to one. What does it mean to sample i given X? I have this bit stream; so I have a bit stream sitting there, and I'm going to start from the end here and work my way in. So the first thing I see is a zero. I have a zero-to-one interval here, and the zero tells me that I am in the interval from zero to zero", "start_timestamp": "02:05:29", "end_timestamp": "02:06:20", "start_second": 7529, "end_second": 7580, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7529s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "point five, but in that interval I can still be several values; it could be one, two, or three, I don't know yet what I'm going to be. So I consume the next zero, at which point I'm in the zero to 0.25 interval, and I still don't know what I'm going to be. I've consumed this zero, I've consumed this zero, now I'm going to consume this one; as I consume this one, it means I'm going to be in the top half of the 0 to 0.25 interval, maybe here. I still don't know what I'm going to be, I could be a 1 or a 2, I don't know, and I'm going to have to", "start_timestamp": "02:06:20", "end_timestamp": "02:07:17", "start_second": 7580, "end_second": 7637, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7580s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "consume this zero next; now I'm in the bottom half of this, and now I actually know: once I've sampled those four bits, I know i is one. Now I can go sample from P of x given i equal 1 and encode my X, right, and I also have my prior P of i that I use to encode i equal... hold on, let me clear this for a moment. So I need to send X, and I need to send i; how am I going to send i? Well, you could say: I have a distribution here over four possible values, and I
could encode i by maybe building a Huffman code or something over those four possible values, but", "start_timestamp": "02:07:17", "end_timestamp": "02:08:20", "start_second": 7637, "end_second": 7700, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7637s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "you can do something much simpler. To get across the point that i equals 1, well, I achieved that by this sequence, the 0 1 0 0 sequence, so I can actually just send 0 1 0 0 across, and that signals what i is. That way I'm also trivially getting those bits back, because the person who receives this gets to read off the bits just like that: oh, here are the bits, I can just read them off, and then I can also use them to decode X. All right, so let's see, I think that's it for me;", "start_timestamp": "02:08:20", "end_timestamp": "02:09:07", "start_second": 7700, "end_second": 7747, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7700s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "let's take maybe a two, three minute break, as I know Jonathan has a lot to cover, and let's maybe restart around 7:12, 7:13 for the last part of lecture. Jonathan, do you want to try to take control of the screen here? Um, yeah, sure, okay. Um, let's see, can you hear me okay? Yeah. I might turn off my camera too so that my internet connection is more reliable, but we'll see; just let me know if it's not working well. Okay, um, I guess I can just jump in and talk more about bits back. Is it possible to address a question on chat first? Oh yeah, questions on", "start_timestamp": "02:09:07", "end_timestamp": "02:11:16", "start_second": 7747,
"end_second": 7876, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7747s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "chat I think as part of lecture and then we'll dive in with that later so the first question is if we have P of X and P is a mixture of Gaussians why can't we sample from P of X to begin with yeah it's a very good observation it's not exactly our assumption the assumption more precisely is that we have a mixture model and that for the individual components in the mixture model we know how to encode efficiently but for the mixture model as a whole we might not know how to encode and now we have a scheme to do that especially if", "start_timestamp": "02:11:16", "end_timestamp": "02:11:55", "start_second": 7876, "end_second": 7915, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7876s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "you know how to encode each component bits back gives you a way to encode against a mixture model which likely will better fit your data distribution and as we know the closer you are to the distribution the smaller the KL divergence and the more efficient your coding will be and so it allows us to use the mixture model which might be a better fit which in turn would result in higher efficiency encoding another question there is the new party between Edinburgh Nubia that's a really really good question so one of the big things that I think Jonathan", "start_timestamp": "02:11:55", "end_timestamp": "02:12:30", "start_second": 7915, "end_second": 7950, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7915s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning",
"thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "will be you know Jonathan's covering that paper so the 2019 paper the bits back with ANS paper by Townsend et al investigated exactly that assumption so we'll see more of that but the notion being if you already put bits on your bit stream from encoding the previous symbol and you work with those bits is that really efficient the question is are all those bits really random enough to achieve the efficiency that we declare here and so Jonathan will get to that question maybe five or six slides from", "start_timestamp": "02:12:30", "end_timestamp": "02:13:04", "start_second": 7950, "end_second": 7984, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7950s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "now so hold that for now I think it should be clear in a few slides all right um right okay so I'll just talk some more about bits back and some more modern instantiations of bits back coding in real algorithms that we can actually download and use and also in particular how bits back coding plays with new types of deep generative models like VAEs and hierarchical VAEs and flows instead of say just Gaussian mixture models right so the core algorithm that all these new bits back papers are based on is this thing", "start_timestamp": "02:13:04", "end_timestamp": "02:14:04", "start_second": 7984, "end_second": 8044, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7984s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "called asymmetric numeral systems so this is an alternative to arithmetic coding as
Peter was saying and it's especially appealing because well first of all it's very simple and you can implement it in a very efficient way which makes it actually practically usable and it also has some nice stack like properties that make it compatible with bits back coding so I'll just first take some time to describe what ANS actually is so again ANS just like arithmetic coding is a way of", "start_timestamp": "02:14:04", "end_timestamp": "02:14:44", "start_second": 8044, "end_second": 8084, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8044s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "taking a sequence of data and turning it into a bit stream where the bit stream's length is something like the entropy of the data times the number of symbols and so I'll just jump right in and describe how this thing works and so let's say the source that we're trying to encode is just two symbols a and b each occurring with probability 1/2 and so you might imagine that the naive way to code stuff like this is to just assign a to the number 0 and b to the number 1 and then you just get a string", "start_timestamp": "02:14:44", "end_timestamp": "02:15:29", "start_second": 8084, "end_second": 8129, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8084s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "of a's and b's just turns into a string of zeros and ones and that pretty much is the best that you can do but let's see how ANS does this um so ANS describes a bit stream it doesn't represent it exactly as a sequence of bits but it represents it as a natural
number so ANS stores this thing called a state s and we start at 0 and so ANS defines an encoding operation so there's this encoding operation that takes in the current state and takes in the current symbol that you wish to encode so let's say you start at some", "start_timestamp": "02:15:29", "end_timestamp": "02:16:16", "start_second": 8129, "end_second": 8176, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8129s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "state s and you want to encode the symbol a in this very particular case what ANS will do is produce the number 2s, 2 times s so remember the state s is a natural number and if you wish to encode b it produces the state 2s plus 1 so this is ANS for this very simple source um of course ANS will generalize more but in this case this is all it does and so you can see that really what this is doing is it's appending zeros and ones on the right of a binary representation of the", "start_timestamp": "02:16:16", "end_timestamp": "02:17:00", "start_second": 8176, "end_second": 8220, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8176s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "state s and that's how this algorithm stores data that's how it stores a and b and a very important property of any reasonable coding algorithm like ANS is that you should be able to decode the data that you encoded so given some state s you want to be able to tell what was the last symbol that was encoded and that's very easy to check so if s is even then you know the last symbol was a if it's odd then you know it's b and either way you
can just divide by two and take the floor and then you get the previous state so that's how", "start_timestamp": "02:17:00", "end_timestamp": "02:17:48", "start_second": 8220, "end_second": 8268, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8220s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "this algorithm works and you can already see just based on this very simple example that this algorithm has the stack like property if you encode a sequence of symbols then the next thing that you decode if you wish will be the last thing that you encoded so it's sort of a first in last out type of stack ok can I ask a question here yeah so sorry for this simple example what is the capital P of X in the mixture of Gaussians can you explain it in terms of this example and also I don't see why the stack is being used", "start_timestamp": "02:17:48", "end_timestamp": "02:18:29", "start_second": 8268, "end_second": 8309, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8268s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "here thank you yes so in this case we haven't gotten to the mixture yet we're gonna talk about that soon this is just for this very simple source over here it's just a coin flip there's no latent variables or anything like that the second question was where does the stack come in it comes in the fact that let's say we encode a sequence of symbols say b a b and so if we follow this encoding rule then that's gonna produce a sequence of states it's gonna be like s1 s2 s3 and
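The uniform-source coder just described (encode a as 2s, encode b as 2s + 1, decode by parity) is small enough to write out; a minimal sketch:

```python
# ANS for a uniform two-symbol source: the state s is one natural number
# that holds the whole encoded sequence.
def encode(s, sym):
    # 'a' appends a 0 bit, 'b' appends a 1 bit to the binary form of s
    return 2 * s + (0 if sym == 'a' else 1)

def decode(s):
    # parity reveals the last symbol; floor-dividing by 2 restores the state
    return ('a' if s % 2 == 0 else 'b'), s // 2

s = 1                        # start from a non-zero state
for c in ['b', 'a', 'b']:
    s = encode(s, c)         # states: 3, 6, 13 (binary 1101)
sym, s = decode(s)           # pops the last 'b' first: first-in-last-out
```

Decoding always returns the most recently encoded symbol, which is the stack (LIFO) behavior discussed in the question.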
"end_second": 8354, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8309s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "so s3 is the final state that we have after encoding these three symbols and then what ANS lets us do is decode from that state and when we decode from that state ANS will tell us the last symbol that was encoded and then tell us the previous state that came before that so that's why it's like a stack because if you ask ANS what was the last symbol that was encoded it's gonna be b this b not the first one hopefully this will be more clear as I get to some more examples", "start_timestamp": "02:19:14", "end_timestamp": "02:19:51", "start_second": 8354, "end_second": 8391, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8354s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "okay um right mmm it's not letting me advance [Music] okay so let's see how this generalizes to the setting of not just the binary source or not just the coin flip but something more interesting so here we again have two symbols a and b but now the probabilities aren't one-half anymore instead it's gonna be one-fourth for a and three-fourths for b so b is more likely so we're going to now think about how to generalize ANS to this setting and the way it's done is like this so you take all the natural numbers so
"https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "here here's all the natural numbers and what we do is we partition it into two sets one set for a and one set for b and so I'll just write down what those sets are and then talk about why we chose those sets so we're gonna write down one set for a and this is going to be 0 4 8 and so on and this is a partition so the set for b is just all the other numbers so that's 1 2 3 5 6 7 and so on so just to draw it out here these numbers 0 4 and 8 correspond to a and all the other numbers correspond to b I'm saying", "start_timestamp": "02:20:43", "end_timestamp": "02:21:35", "start_second": 8443, "end_second": 8495, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8443s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "correspond to a meaning correspond to ending in a or correspond to ending in b right um I guess I haven't defined what correspond is supposed to mean yet I just mean that we're defining these two sets S sub a is gonna be all the numbers divisible by 4 and S sub b is gonna be the others and so we've just defined these two sets and then I'll just describe how we encode some string so let's say we want to encode the string b a b so again ANS builds up some big natural number which is a state so we start at", "start_timestamp": "02:21:35", "end_timestamp": "02:22:20", "start_second": 8495, "end_second": 8540, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8495s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "the state s equals zero and what we want to do is encode onto the state zero the symbol b so
the way we do this is we look for the zeroth number in b's set so that might sound a little bit weird maybe I'll just write out the general rule when we encode a state s with say the symbol a we look at the s-th number in S sub a so this is s and this is that number in S sub a okay so let's just go through this so when we encode 0 b we look for the zeroth number in b's set so b's set is this one two", "start_timestamp": "02:22:20", "end_timestamp": "02:23:26", "start_second": 8540, "end_second": 8606, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8540s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "three five six seven all the numbers that are not divisible by four and the zeroth number starting indexing at zero is one that's the first number so that's what we get here so that's just writing it down in this table here okay now the next character we want to encode is a so the new state is one and we want to encode the symbol a so we look for number one in a's set so a's set is 0 4 8 and so on so number 1 is 4 so that was here and then finally the new state is 4 and then", "start_timestamp": "02:23:26", "end_timestamp": "02:24:08", "start_second": 8606, "end_second": 8648, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8606s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "we want to encode b again and so that's 6 so what this says is that ANS has turned the string b a b into this number 6 and this number six stores these three characters which is kind of cool okay so first of all this might seem like a weird set of rules to play by but first let's check that this
is actually decodable otherwise this would be useless so to see that is it possible to take the number six and see which was the last character that was encoded and the answer is yes because", "start_timestamp": "02:24:08", "end_timestamp": "02:24:50", "start_second": 8648, "end_second": 8690, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8648s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "these two sets S sub a and S sub b were defined to partition the natural numbers so for any natural number like 6 you know which set it belongs to so you know that 6 belongs to S sub b and so you know the last character that was encoded was b and then you can also recover the previous state before b was encoded and the way you do that is just by looking at the position of six in S sub b so you see that six is the fourth number in S sub b so that's the previous state and you can just", "start_timestamp": "02:24:50", "end_timestamp": "02:25:32", "start_second": 8690, "end_second": 8732, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8690s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "keep repeating this and you can recover the characters that were encoded so hopefully that convinces you that this is decodable and kind of the point of this is that we actually chose these sets S sub a and S sub b so that their density in the natural numbers is pretty much the probability of the symbols so you know if you take a lot of natural numbers the fraction of the numbers which lie in S sub a is about one-fourth and the fraction of the numbers that lie in S sub b is about three-fourths and so this encoding",
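The worked example (encode b, a, b into the state 6, then decode it back) can be sketched directly from the two sets S sub a = {0, 4, 8, ...} and S sub b = the remaining naturals:

```python
# ANS for p(a) = 1/4, p(b) = 3/4, using the partition from the lecture:
# S_a = naturals divisible by 4 (density 1/4), S_b = the rest (density 3/4).
def encode(s, sym):
    # new state = the s-th element (0-indexed) of the symbol's set
    if sym == 'a':
        return 4 * s                 # s-th element of S_a
    q, r = divmod(s, 3)
    return 4 * q + r + 1             # s-th element of S_b

def decode(s):
    q, r = divmod(s, 4)
    if r == 0:
        return 'a', q                # rank of s within S_a
    return 'b', 3 * q + (r - 1)      # rank of s within S_b

s = 0
for sym in ['b', 'a', 'b']:
    s = encode(s, sym)               # states: 1, 4, then finally 6
recovered = []
for _ in range(3):
    sym, s = decode(s)               # pops symbols in reverse order
    recovered.append(sym)
```

Running this reproduces the table from the lecture: the string b a b becomes the single number 6, and repeated decoding recovers the symbols and lands back at state 0.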
"start_timestamp": "02:25:32", "end_timestamp": "02:26:07", "start_second": 8732, "end_second": 8767, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8732s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "operation here where we look for the s-th number in one of these sets that operation will advance us by a factor of about one over P but that's just what happens because this thing is distributed with density P over the natural numbers so when you index into it you increase by a factor of one over P so that means that every time you encode a symbol onto a state I guess it's called X here you end up multiplying your natural number by about 1 over P that's generally what happens approximately so", "start_timestamp": "02:26:07", "end_timestamp": "02:26:51", "start_second": 8767, "end_second": 8811, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8767s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "here if the S sub a is powers of three will it also work powers of three yeah so for S sub a we just want them to occur about one fourth of the time like zero comma three comma nine etc that'll also work um so that doesn't really occur one-fourth of the time if you pick some long sequence of natural numbers those numbers don't occur one-fourth of the time for that long sequence oh I see so we want the density of these things to be one-fourth any partition that meets the criteria that the first set has density 1/4 is going to work right this is", "start_timestamp": "02:26:51", "end_timestamp": "02:27:41", "start_second": 8811, "end_second": 8861, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8811s", "title": "L10 Compression -- UC
Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "not a neat partition right so there are actually a lot of choices for this so this particular choice is chosen so that it's very easy to implement the encoding and decoding operations so you can just do it with some modular arithmetic um but if you have some crazy choice maybe it'll work but it might be very hard to compute the encode and decode operations well it seems like the set of natural numbers is also chosen like it can be chosen otherwise here like we don't have to we only restrict to natural numbers because the index is zero so", "start_timestamp": "02:27:41", "end_timestamp": "02:28:21", "start_second": 8861, "end_second": 8901, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8861s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "it's convenient is that why well at the end of the day this is something that we want to turn into a binary string so I guess I haven't described that yet but once you've encoded everything you have this big natural number that describes all your symbols and then you turn it into a binary string you take the binary representation and you can ship that off to the receiver they start at the end you just have one number right right and then from this one number you can backward
but here we have this number six and now we want to send six to the receiver and the receiver you know the all our communication protocols work in bits so we have to turn six into a binary string and then send that to the receiver but the point the point is that actually secure here's here's the property of this scheme that we basically keep dividing by P of s every time we encode s so that means that if we encode a bunch of symbols we get some starting symbol divided by the", "start_timestamp": "02:28:53", "end_timestamp": "02:29:34", "start_second": 8933, "end_second": 8974, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8933s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "product of the probabilities of all the symbols that we that we encode it and so if we so this is some natural number and if we code the natural number the number of bits needed is about the log of the number that's the log base 2 of the number that's how many bits we need to code it so we see that this is this is the code length it's the sum over T of log 1 over P for all the symbols and so so if we take this and we divide by the number of symbols so if we take this divided by the number of symbols you see that this goes to the entropy of this of", "start_timestamp": "02:29:34", "end_timestamp": "02:30:21", "start_second": 8974, "end_second": 9021, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8974s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "this source so that so this is like an optimal thing to do I roundabout way of answering the question of why use natural numbers but but I think the stack here is just a conceptual framework right we don't know the actual 
implementation we don't need a stack yeah that's absolutely true we say it's a stack just because it has this property that every time we decode something we just get the last thing that was encoded we don't get the first thing that was encoded so we just call it a stack but yeah you", "start_timestamp": "02:30:21", "end_timestamp": "02:30:55", "start_second": 9021, "end_second": 9055, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9021s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "don't actually need a real stack I mean essentially it's just a partition of a lookup table like before we have a general lookup table but now you're just partitioning the lookup table um sure right I guess maybe the point here is that yeah ANS is really these rules and you can implement them efficiently this is what Duda found and it seems to work in practice and it has this stack like behavior that's the point of this and it's also optimal okay so now returning to more interesting models which are not just", "start_timestamp": "02:30:55", "end_timestamp": "02:31:48", "start_second": 9055, "end_second": 9108, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9055s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "two characters a and b but rather things like distributions over images represented by latent variable models so there's this very nice algorithm introduced in 2019 called bits back with ANS or BB-ANS which is bits back coding using ANS as a back end and the reason to use ANS is because it turns out that the stack like property of ANS where whatever you decode is the last
thing you encoded makes it very compatible with the concept of getting bits back so let's just see how that works so here", "start_timestamp": "02:31:48", "end_timestamp": "02:32:30", "start_second": 9108, "end_second": 9150, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9108s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "we're gonna think about latent variable models so Peter talked about Gaussian mixture models which are one case of this so here Z is the latent variable P of Z is the prior and P of X is the marginal distribution so this is how bits back coding works and we're gonna talk about how it works exactly with ANS so in BB-ANS the first thing you do if you wish to send X so the goal here is to send X the first thing you do is you start off with a non-empty bit stream so we can just call it a bit stream", "start_timestamp": "02:32:30", "end_timestamp": "02:33:26", "start_second": 9150, "end_second": 9206, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9150s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "because that's just how we think about it and so the first thing the encoder does is it decodes Z from the bit stream so the encoder knows X so the encoder can compute Q of Z given X this is just the approximate posterior of this latent variable model and it can use this distribution to decode from the bit stream and we assume that this bit stream was full of random bits and so this is a question that came up and I'll talk about the consequences of that later so that's the first thing you do but the point is that if you decode from
"start_second": 9206, "end_second": 9247, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9206s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "random bits then you get a sample and then the next thing the encoder does is it encodes X using P of X given Z which is actually called the decoder and then it finally encodes Z ok so what actually happened here so if we just visualize this in a bit stream like this so this is what we started off with in the first phase when we decode Z we actually remove a little bit of this bit stream from the right so imagine this is a stack where we keep adding things on the right so in this first phase we remove a little bit and", "start_timestamp": "02:34:07", "end_timestamp": "02:34:52", "start_second": 9247, "end_second": 9292, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9247s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "then we get a little shorter bit stream then we encode X so that increases the length of the bit stream a little bit more but let's say by this much then we encode Z again so that increases it by a little bit so now you can just look at this diagram and see how long did this bit stream get what was the net change in the length of this bit stream well we have to add in these two parts right because the bit stream grew right there but then we also subtract how much well I guess not there but we subtracted
Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "a little bit at the beginning so the netcode links though are the net amount of change to this to the length of this bit stream is well that's a negative log P of X given Z minus log P of Z so that was furred for these two parts two and three but then we had to subtract the amount that we we decoded from the bit stream at the beginning I was so that's plus log excuse me for the next and the first part Z gives you some sample from Q so the actual code length on average is is the average of this of received Ron from the approximate posterior so", "start_timestamp": "02:35:39", "end_timestamp": "02:36:25", "start_second": 9339, "end_second": 9385, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9339s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "you can see that this is the v AE bound this is just the variational bound on negative log likelihood so so I guess this is can I ask sorry if you have a stream of let's say oh just lowercase letters A through Z then would P of Z here just be 1 over 26 and then the P of X given Z would be the number of times it occurs in divided by the total length right so it just depends on what your latent variable model happens to be I'm so the case that I'm actually thinking about it is view is that this is a V a and so P of Z is like standard normal", "start_timestamp": "02:36:25", "end_timestamp": "02:37:17", "start_second": 9385, "end_second": 9437, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9385s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "quite better but my confusion is why would 
it be like a normal distribution like isn't each key represented by a single value in the lookup table it's a constant right so why would it have a distribution um so I'm not really sure what like if you're given a string you just count right and then out of the count you get a constant let's say we just restrict to a through z then for each of the characters you have basically the probability it occurs in this stream and that's a constant value so why would that have a distribution so", "start_timestamp": "02:37:17", "end_timestamp": "02:38:01", "start_second": 9437, "end_second": 9481, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9437s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "maybe let's back up for a bit and just talk about what we're trying to do so what we're trying to do is to turn the latent variable model into a compression algorithm so just starting from square one we have a VAE what's the input of the VAE an image it's a stream right let's say for the 1d case is it a stream I propose we handle questions offline because we've got a lot to cover yes okay yeah happy to talk about this later yes ok so here is a description of the same thing so during the encoding
and then you can re-encode Z what I'm using here should actually be Q and the re-encoding part is this getting bits back here so once the receiver re-encodes Z the receiver now gets a slightly longer bit stream from which it can start to decode the next Z so those are exactly", "start_timestamp": "02:38:54", "end_timestamp": "02:39:45", "start_second": 9534, "end_second": 9585, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9534s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "the bits that were given back right here okay so there are two points that we should talk about when getting BB-ANS working with continuous latent variable models like VAEs which is that these Zs are continuous so Z comes from a standard normal distribution and we can't really code continuous data but what we can do is discretize it to some high precision and so if you take Z and you discretize it to some level Delta Z then you pretty much take a probability density function little p of Z and you turn", "start_timestamp": "02:39:45", "end_timestamp": "02:40:33", "start_second": 9585, "end_second": 9633, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9585s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "it into a probability mass function capital P of Z which is little p of Z times Delta Z so what you get by integrating the density over this small region of volume Delta Z and so you can do that for both the posterior and the prior so you do that for the prior and you do it for the posterior and you see that these Delta Zs cancel out and so what we get is that this bits back code length with the discretization being the same 
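The decode-then-encode recipe described here can be made concrete with a toy coder. This is an illustrative sketch I'm adding, not code from the lecture: the two-symbol probability tables are invented, and the rANS coder keeps the whole state as one Python big integer instead of streaming bits, so there is no renormalization.

```python
# Toy bits-back coding on top of a big-integer rANS coder.
# All distributions are hypothetical 2-symbol tables with denominator M.
M = 16

def ans_encode(state, sym, freq, cdf):
    # Push `sym` onto the state; costs about log2(M / freq[sym]) bits.
    return (state // freq[sym]) * M + cdf[sym] + state % freq[sym]

def ans_decode(state, freq, cdf):
    # Pop a symbol off the state (exact inverse of ans_encode).
    slot = state % M
    sym = max(s for s in range(len(freq)) if cdf[s] <= slot)
    return sym, (state // M) * freq[sym] + slot - cdf[sym]

def table(freq):
    # Build (freq, cdf) for a distribution with frequencies summing to M.
    cdf, total = [], 0
    for f in freq:
        cdf.append(total)
        total += f
    return freq, cdf

prior = table([8, 8])                       # p(z), z in {0, 1}
lik = [table([12, 4]), table([4, 12])]      # p(x | z)
post = [table([10, 6]), table([6, 10])]     # q(z | x)

def bb_encode(state, x):
    z, state = ans_decode(state, *post[x])  # "decode" z: borrows log q(z|x) bits
    state = ans_encode(state, x, *lik[z])   # pay -log p(x|z)
    return ans_encode(state, z, *prior)     # pay -log p(z)

def bb_decode(state):
    z, state = ans_decode(state, *prior)
    x, state = ans_decode(state, *lik[z])
    state = ans_encode(state, z, *post[x])  # repay the borrowed bits
    return x, state
```

Encoding a few symbols and then decoding them in reverse (ANS is a stack) recovers the data and restores the state to exactly its initial value: the bits borrowed to sample z with q(z|x) are repaid when the receiver re-encodes z, which is the net code length being discussed.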
between the prior and the posterior still gives you the same KL divergence term as in the VAE the second point was that somebody", "start_timestamp": "02:40:33", "end_timestamp": "02:41:18", "start_second": 9633, "end_second": 9678, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9633s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "brought up is that we decode Z from the bitstream and that's how we sample Z from the bit stream by decoding from it but in order for that to really give us a good sample the bits that we decode from have to be actually random and that's not necessarily true and so in a VAE at least if you just sort of work out what's going on basically if this KL divergence between the aggregate posterior Q of Z and the prior is small then that means those bits will be random or pretty good and that'll be", "start_timestamp": "02:41:18", "end_timestamp": "02:42:07", "start_second": 9678, "end_second": 9727, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9678s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "good enough to get a good sample but of course in practice for a VAE that's not trained exactly well this is gonna be nonzero but in practice it seems like this doesn't matter too much I think one thing that might actually work to ensure that the bits are random which I haven't seen explored is to just encrypt the bit stream and that'll make the bits look random and then you can decode anything from it so I think in practice it's not a problem and what's nice is that this scheme bits back with ANS seems to work pretty", "start_timestamp": "02:42:07", "end_timestamp": "02:42:44", 
"start_second": 9727, "end_second": 9764, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9727s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "well um so the authors of this paper implemented this bits back ANS algorithm for VAEs trained on MNIST and they found that the numbers they got were very close pretty much the same as the variational bound on the negative log likelihood which is exactly what you want that's what is predicted so this thing works as well as advertised right so in our work what we did was we looked at latent variable models which are not just one layer so we know", "start_timestamp": "02:42:44", "end_timestamp": "02:43:32", "start_second": 9764, "end_second": 9812, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9764s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "that the more powerful the model the better the log likelihoods that we get out of it so we should get better compression so here we're looking at a setting where the model has a Markov chain structure over the latent variables so there are latent variables Z L and Z L minus one down to Z one and then X so this is the graphical model of the sampling path and the inference path the Qs go the other way and they're both Markov chains so this is a particular type of model that we're looking at and so if you had this", "start_timestamp": "02:43:32", "end_timestamp": "02:44:18", "start_second": 9812, "end_second": 9858, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9812s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": 
"https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "particular model which is this sort of chain structure there are two ways to view it you can just view it as a VAE with this block of latent variables as just one latent variable and then you can run BB-ANS on it and that works perfectly fine but another way to view it is as a latent variable model with just let me just draw the layers again so here's X and Z one Z two Z three so you can just view Z one as the one and only latent variable but then you see that its prior is a VAE with the same structure so its prior is P of Z 1", "start_timestamp": "02:44:18", "end_timestamp": "02:45:05", "start_second": 9858, "end_second": 9905, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9858s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "which is a VAE whose prior is P of Z 2 which is another VAE and so on so these are two equivalent ways of looking at the same model so in terms of log likelihood they're the same because if you just write down the variational bounds they're equal but they suggest slightly different compression algorithms with different practical consequences so the idea is that instead of just treating these Zs as one single block one large latent variable you can actually recursively invoke bits back coding into the prior so you can just", "start_timestamp": "02:45:05", "end_timestamp": "02:45:47", "start_second": 9905, "end_second": 9947, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9905s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "code the first latent variable so here this is the algorithm as usual so here this is 
basically um decode Z and then encode X and then encode Z with the prior so this is P of X given Z and here Q of Z given X what you can do instead is code just the first layer and then recursively invoke bits back coding into the subsequent layers right and so the consequence of doing this is that I won't go through the exact steps but the consequence is that you no longer have to decode the entire block of latent", "start_timestamp": "02:45:47", "end_timestamp": "02:46:44", "start_second": 9947, "end_second": 10004, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9947s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "variables at the very first step rather you just need to decode one of them and then you can add more to the bit stream and then decode more and so on and what that means is that you need fewer auxiliary bits to start bits back coding so what this means is that remember for BB-ANS to make sense you need a bit stream with some bits on it to even sample Z in the first place and those bits must be sent across and if you don't have any if there are no bits there then you end up wasting them and so if you're able to", "start_timestamp": "02:46:44", "end_timestamp": "02:47:23", "start_second": 10004, "end_second": 10043, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10004s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "not have to decode too many latent variables in the first go then you can save on transmitting those auxiliary bits so you can see that in these experiments especially for deep latent variable models we're able to get better code lengths compared to just decoding the entire block of 
latent variables at once right so that was VAEs so let's just move on to how to turn flow models into compression algorithms so in this class we went through a series of likelihood based models like autoregressive models", "start_timestamp": "02:47:23", "end_timestamp": "02:48:05", "start_second": 10043, "end_second": 10085, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10043s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "and flows and we were seeing in this lecture that really any likelihood based model is a compression algorithm so what about flow models they should also be compression algorithms and what's particularly appealing about them is that we can write down the exact log likelihood of a flow model this is not a bound this is just the real thing so hopefully we should be able to get some really good compression with this so let's think about what that actually means it turns out that it", "start_timestamp": "02:48:05", "end_timestamp": "02:48:39", "start_second": 10085, "end_second": 10119, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10085s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "doesn't really make sense to say that we get a compression algorithm that achieves this code length which is just this flow log likelihood formula and the reason is that flows are density models and it doesn't make sense to code continuous data because you need infinite precision to do that so rather what we're gonna say is that we'll code data discretized to high precision so you have your space of data like this let's say this is the space of all possible images and then we just 
tile it with this very", "start_timestamp": "02:48:39", "end_timestamp": "02:49:12", "start_second": 10119, "end_second": 10152, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10119s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "fine grid like this and then we're just going to discretize every possible data point so instead of coding one data point exactly we'll just code the bin that it lies in like that so that's g of X the cube that some data point lies in and the point of doing this is that if you define a probability mass function given by integrating this density given by the flow over these cubes then you get a negative log likelihood that looks like this it's just negative log", "start_timestamp": "02:49:12", "end_timestamp": "02:49:50", "start_second": 10152, "end_second": 10190, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10152s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "density times Delta so this is now a probability mass function and now it makes sense to say that we can compress up to this code length so actually the code length that we're going to look for when we compress with flow models is this it's negative log of the flow density times Delta so it's really just the same thing plus this additional term here so this is just the number of bits of discretization so it can actually be a lot of bits but then we can recover them later right so yeah so now we have a probability mass function corresponding", "start_timestamp": "02:49:50", "end_timestamp": "02:50:28", "start_second": 10190, "end_second": 10228, "url": 
"https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10190s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "to the flow so can we just run Huffman coding so the answer is no because to do that we need to build a tree that's as big as the number of possible data points but we're working with large images here so that's exponential in the dimension so that's not tractable so we need to harness the model structure we actually have to make use of the fact that this is a flow model so one naive attempt to do this maybe just the most intuitive thing is to take the latent that you get out of the", "start_timestamp": "02:50:28", "end_timestamp": "02:51:01", "start_second": 10228, "end_second": 10261, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10228s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "flow let's say we want to code X so why not just compute Z by passing it through the flow and coding Z using the prior so maybe that's very simple but unfortunately it doesn't work you can just write down the code length that you get it's just negative log P of Z times let's say Delta but if a flow model is trained well then the distribution of Zs will match the prior so let's say the prior is Gaussian you end up coding Gaussian noise using a Gaussian prior so that's no", "start_timestamp": "02:51:01", "end_timestamp": "02:51:41", "start_second": 10261, "end_second": 10301, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10261s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": 
"https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "compression at all and so if you compare this expression with this expression up here you see that this is the missing term so somehow this naive approach does not take into account this Jacobian the fact that the flow changes the volume so we have to somehow deal with that right okay so how do we do this well the claim is that we can turn any flow model into a VAE actually we can locally approximate it using a VAE so we have this flow model here's f here's the flow model and", "start_timestamp": "02:51:41", "end_timestamp": "02:52:26", "start_second": 10301, "end_second": 10346, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10301s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "here's a point X and we turn it into f of X this is just what a flow does and so we can define this distribution here this distribution on the left this ellipsoid and we're going to define it to be a normal where the mean is just f of X the latent but we give it this covariance matrix which is Sigma squared just a small number like 0.0001 or something like that just some hyperparameter to this algorithm times the Jacobian times Jacobian transpose of the flow model and so", "start_timestamp": "02:52:26", "end_timestamp": "02:53:12", "start_second": 10346, "end_second": 10392, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10346s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "this is what we call the encoder and this is the decoder so on top of the flow model we define this encoder 
and decoder and the decoder is just the inverse of the flow with this identity covariance scaled very small so that's what the ellipsoid on the left and the small circle on the right are so why did we define this well the point is that a flow model represents a differentiable function so if you have some data point X and we add a very small amount of noise to it and then you", "start_timestamp": "02:53:12", "end_timestamp": "02:53:49", "start_second": 10392, "end_second": 10429, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10392s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "map that to the latent space that small amount of Gaussian noise that you added at the beginning will also be Gaussian it'll just have this skewed covariance and that's given by how the flow behaves linearly and that's just the Jacobian so we know that if you take a multivariate Gaussian and you multiply by a matrix you also get a multivariate Gaussian so locally the flow behaves like a linear transformation and that matrix is the Jacobian so that's where this comes from and the point is", "start_timestamp": "02:53:49", "end_timestamp": "02:54:28", "start_second": 10429, "end_second": 10468, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10429s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "that if you then run bits back coding using these two distributions so this was Q of Z given X and this P of X given Z so if you run the coder using these two distributions the code length that you get from bits back coding will be exactly what we wanted plus this little error term which is this second-order error 
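In symbols (my own reconstruction of the board from the narration, so treat the exact notation as an assumption), the target code length, the naive scheme's shortfall, and the local encoder/decoder pair being described are roughly:

```latex
% Target: code x discretized to bins of volume \delta; change of variables
% gives the log-Jacobian term.
-\log\bigl(p_X(x)\,\delta\bigr)
  = -\log p_Z(f(x)) - \log\Bigl|\det \tfrac{\partial f}{\partial x}\Bigr| - \log\delta

% Naive scheme: push x through the flow and code z = f(x) with the prior,
% achieving only -\log\bigl(p_Z(f(x))\,\delta\bigr), i.e. the log-Jacobian
% term is exactly what is missing.

% Local VAE approximation around x, with small \sigma:
q(z \mid x) = \mathcal{N}\bigl(z;\ f(x),\ \sigma^{2} J(x) J(x)^{\top}\bigr)
\qquad
p(x \mid z) = \mathcal{N}\bigl(x;\ f^{-1}(z),\ \sigma^{2} I\bigr)

% Bits-back coding with this pair then costs -\log\bigl(p_X(x)\,\delta\bigr)
% plus a second-order (in \sigma) error term.
```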
term so this is a way of turning a flow model into a compression algorithm you locally approximate it with a VAE defined like this and then the", "start_timestamp": "02:54:28", "end_timestamp": "02:55:09", "start_second": 10468, "end_second": 10509, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10468s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "code length that you get from bits back coding on that will match what we wanted plus a very small error term so what's nice about this is that it turns an intractable algorithm into a more tractable one so if you wish to directly implement this algorithm it turns out you do have to compute the Jacobian of the flow model and you do have to factorize it in a certain way and so that's polynomial time it's better than exponential time but it's still not good enough for high dimensional data and so the", "start_timestamp": "02:55:09", "end_timestamp": "02:55:43", "start_second": 10509, "end_second": 10543, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10509s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "solution to that is that we can actually specialize this algorithm even further so for autoregressive flows for example it turns out that we can just code one dimension at a time without ever constructing that Jacobian so that works in linear time if we have a composition of flows like we do in RealNVPs then you can just code one layer at a time and recursively invoke this coding into the next layer just like we can with hierarchical VAEs so altogether for RealNVP 
type flows if you", "start_timestamp": "02:55:43", "end_timestamp": "02:56:17", "start_second": 10543, "end_second": 10577, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10543s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "implement it correctly you never need to compute the Jacobian and you actually get a linear time compression algorithm so that's nice and we achieve this code length here which is negative log density times Delta but if you look at this it suffers by a term of negative log Delta X which can actually be quite bad like 32 bits or something like that so this is because we had to discretize the data a lot so that we can actually approximate the integral that defines the probability mass", "start_timestamp": "02:56:17", "end_timestamp": "02:56:55", "start_second": 10577, "end_second": 10615, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10577s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "function easily so that seems like a huge waste of bits especially if we want to transmit say integer data like images from CIFAR for example which are specified as integers and we don't want to have to transmit lots of bits after the decimal point so the solution to this is to use those extra bits for bits back again and if you want to do that it turns out that there is an optimal way of doing this and the sort of encoder that you use for that is a dequantizer which I think we talked", "start_timestamp": "02:56:55", "end_timestamp": "02:57:32", "start_second": 10615, "end_second": 10652, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10615s", "title": "L10 Compression 
-- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "about and so if you plug bits back coding into the dequantizer to get those extra bits then altogether the code length you get is the variational dequantization bound which is exactly what you train to be small on the data set so it ends up being reasonable and so with all this stuff we tried it for one of the models that we trained and we found that we were able to get code lengths that are very close to what is predicted by the variational dequantization bound and this sort of holds across all these data sets", "start_timestamp": "02:57:32", "end_timestamp": "02:58:17", "start_second": 10652, "end_second": 10697, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10652s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "um and there is a caveat which is that this algorithm does need lots of auxiliary bits actually much more than VAE type methods and that shows up in the fact that we need something maybe like 50 bits per dimension just to send one image and so that means that this algorithm really does not make sense if you just want to send one data point but if you wanted to use this algorithm for each frame in a long video in a movie or something like that then the initial overhead can be amortized", "start_timestamp": "02:58:17", "end_timestamp": "02:58:52", "start_second": 10697, "end_second": 10732, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10697s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "across all the 
different frames so that is a caveat of this algorithm right so finally let's talk about some other things which are not exactly about bits back so all these algorithms that we talked about so far basically fall into the framework of you pre-train a generative model on some training set which you assume is drawn from the same distribution as the test set that you want to compress and then you just devise a coding algorithm that matches the negative log likelihood of", "start_timestamp": "02:58:52", "end_timestamp": "02:59:34", "start_second": 10732, "end_second": 10774, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10732s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "that and that's how you go but there are actually other types of algorithms which are quite successful in text compression which we actually all use like in gzip and zip and so on which learn online so you don't really pre-train them on a certain data set you just give it a file and it learns how to compress it online and it turns out that at least theoretically these types of algorithms if you give them lots of resources can actually learn to compress any distribution so we call them", "start_timestamp": "02:59:34", "end_timestamp": "03:00:07", "start_second": 10774, "end_second": 10807, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10774s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "universal codes so there's one algorithm called Lempel-Ziv which works like this so I'll just try to very quickly describe it so here's a long string that you're trying to compress and the 
way it works is that when you try to compress you're at some position in the file let's say we're at this position of the file and we want to code the future so what you do is you basically try to find a string starting at this position which has already occurred in the past so", "start_timestamp": "03:00:07", "end_timestamp": "03:00:55", "start_second": 10807, "end_second": 10855, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10807s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "here we have this string AAC and then we see that in the past AAC occurred so let's just store the index into the past so this occurred one two three time steps into the past so let's just store this number three and then we also add on the next character which is B so that's basically how this works at this point C we see oh there's a string CAE in the future and that string occurred in the past so let's just store the number three which indicates that you just need to jump three into the past to just copy that", "start_timestamp": "03:00:55", "end_timestamp": "03:01:34", "start_second": 10855, "end_second": 10894, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10855s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "string from the past over so this is roughly how Lempel-Ziv works you just look for matches between what you're trying to compress and the past and copy them over and so why is this a good idea so just very roughly if the source of symbols you see is independent then whatever symbol you're at right now will actually reoccur if you wait long enough 
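The match-and-copy idea being described is easy to sketch. This is a hypothetical minimal LZ77-style parser I'm adding for illustration (fixed search window, no entropy coding), not the exact variant from the slides; it emits (distance, length, next_char) triples like the "jump three into the past" example in the narration.

```python
# Minimal LZ77-style parse: at each position, find the longest match in a
# bounded window of the past, then emit (distance, length, next_char).
def lz77_parse(s):
    i, out = 0, []
    while i < len(s):
        best_len, best_dist = 0, 0
        for j in range(max(0, i - 255), i):  # candidate match starts
            k = 0
            # Leave at least one character for next_char; matches may
            # overlap the current position (classic LZ77 behavior).
            while i + k < len(s) - 1 and s[j + k] == s[i + k]:
                k += 1
            if k > best_len:
                best_len, best_dist = k, i - j
        out.append((best_dist, best_len, s[i + best_len]))
        i += best_len + 1
    return out

def lz77_unparse(tokens):
    # Decompress by replaying the back-references one character at a time,
    # which makes overlapping copies work automatically.
    s = []
    for dist, length, nxt in tokens:
        for _ in range(length):
            s.append(s[-dist])
        s.append(nxt)
    return "".join(s)
```

For example, `lz77_parse("aacaacb")` finds the repeat of "aac" three steps back and round-trips exactly through `lz77_unparse`.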
and the reoccurrence time has a geometric distribution so the average reoccurrence time is just one over the probability of", "start_timestamp": "03:01:34", "end_timestamp": "03:02:22", "start_second": 10894, "end_second": 10942, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10894s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "the symbol so that's that and so the Lempel-Ziv algorithm says to just write down the time that you have to look back to find the same symbol again so that's going to take log T bits where T is the time so on average it's just log of the average time which is log 1 over P of X so you can see that this goes to the entropy of the source so this is an interesting algorithm it's basically nearest neighbors and it's saying that if you just memorize tons of data over time and you run nearest", "start_timestamp": "03:02:22", "end_timestamp": "03:02:59", "start_second": 10942, "end_second": 10979, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10942s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "neighbors then this is like the best learning algorithm that you can do or this learning algorithm does work um it just might take a very long time to learn and you can see that it does take a very long time to learn because template matching does not generalize okay so that was Lempel-Ziv so I'll conclude by just giving you a taste of some very recent research on deep learning and compression so by no means is this comprehensive or anything like that it's just to give you an idea of what might be out there so", "start_timestamp": "03:02:59", "end_timestamp": "03:03:38", "start_second": 
10979, "end_second": 11018, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10979s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "so the authors of BB-ANS released some new work at ICLR this year where they show that you can train a fully convolutional deep latent variable model on small images and just because it's fully convolutional you can just run it on large images and show that this works very well so these are I think some of the best numbers on full resolution ImageNet just by using this fully convolutional property these authors here describe a very intriguing alternative to bits back coding so they described what they called", "start_timestamp": "03:03:38", "end_timestamp": "03:04:18", "start_second": 11018, "end_second": 11058, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=11018s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "minimal random code learning which is a coding scheme for latent variable models which achieves the bits back code length without needing bits back so the way that works is that the encoder samples a lot of Zs the number of Zs it samples is 2 to the KL divergence between the encoder and the prior and then picks a random one just a uniformly random one and the decoder can do the same thing if they share the same random number generator and so it turns out that this is a way to basically get a sort of a low bias", "start_timestamp": "03:04:18", "end_timestamp": "03:04:58", "start_second": 11058, "end_second": 11098, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=11058s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", 
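The shared-randomness trick behind this scheme can be sketched in a few lines. This is my own reconstruction with 1-D Gaussians, not the paper's implementation; the narration's "pick a uniformly random one" is simplified, and I use the importance-weighted selection I believe the actual method uses, so treat the selection rule and all names here as assumptions.

```python
import math
import random

K_BITS = 8  # pretend KL(q || p) is about 8 bits, so draw 2**8 candidates

def shared_candidates(seed, k):
    # Encoder and decoder regenerate identical prior samples from a shared seed.
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(k)]  # samples from p = N(0, 1)

def log_weight(z, mu, sigma):
    # log q(z)/p(z) for q = N(mu, sigma^2) and p = N(0, 1), constants dropped.
    return -0.5 * ((z - mu) / sigma) ** 2 - math.log(sigma) + 0.5 * z * z

def mrc_encode(seed, mu, sigma):
    cand = shared_candidates(seed, 2 ** K_BITS)
    w = [math.exp(log_weight(z, mu, sigma)) for z in cand]
    # Pick an index with probability proportional to q/p; only this index
    # (about K_BITS bits) needs to be transmitted.
    return random.choices(range(len(cand)), weights=w)[0]

def mrc_decode(seed, idx):
    # The decoder rebuilds the same candidate list and looks up the index.
    return shared_candidates(seed, 2 ** K_BITS)[idx]
```

The point is that the decoder recovers an approximate posterior sample from roughly KL(q || p) bits with no bits-back machinery, at the cost of the encoder evaluating exponentially many candidates.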
"thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "sample from q by sampling a lot of these latents and picking a random one, and the number of bits you need to encode the index of the random one that you picked out of them is just the KL divergence, just log K, so you need log K bits. So this achieves the bits-back code length without needing bits back; the trade-off is computational complexity, because the encoder has to collect a lot of samples. And finally there's this other paper, which has a very different flavor from the ones that we were talking about: this is a paper about", "start_timestamp": "03:04:58", "end_timestamp": "03:05:35", "start_second": 11098, "end_second": 11135, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=11098s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "lossy compression, where they come up with a recurrent encoder and decoder architecture for lossy compression on sequential data, like videos, say. The way it works is quite interesting. The very high-level idea is that the encoder simulates the decoder. Normally you would think that the encoder and decoder just operate independently, and the encoder doesn't worry about what the decoder is doing, but if there's this time structure, then the encoder can simulate what the decoder is doing, sort", "start_timestamp": "03:05:35", "end_timestamp": "03:06:07", "start_second": 11135, "end_second": 11167, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=11135s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "of one time step behind, and based on that it can send extra information that will help
the decoder, or just send the right information that will help the decoder reconstruct the data in just the right way, and they show how to write down a neural network architecture that captures this idea and optimizes for the resulting code length in an end-to-end way. So that's quite a cool idea. Yeah, so that's all I have to say; hopefully that was helpful. That was great, Jonathan. We're over time, but I'm thinking maybe we can", "start_timestamp": "03:06:07", "end_timestamp": "03:06:49", "start_second": 11167, "end_second": 11209, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=11167s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "spend a couple more minutes if people have some questions they wanted to ask as we wrap up here. I also answered a bunch of questions in the chat, to be able to do that in parallel with you making progress on the lecture. I had a question about ANS: I still don't see the connection; it seems like ANS was just a little add-on to this lecture. What's the connection? I don't really see why we need ANS, why you can't just use another coder. Yeah, there are ways of", "start_timestamp": "03:06:49", "end_timestamp": "03:07:28", "start_second": 11209, "end_second": 11248, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=11209s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "combining bits back with arithmetic coding; it's just that the popular recent thing to do is to combine bits back with ANS, and the reason we do it is because you get a very clean algorithm that works very well. So that was the motivation. Can't
you use it, sorry, can't you use bits back with any encoding scheme? Yeah, you definitely can; it's just particularly convenient to use it with ANS because of the stack structure of ANS, and also because ANS does work well in practice, so there are practical", "start_timestamp": "03:07:28", "end_timestamp": "03:08:06", "start_second": 11248, "end_second": 11286, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=11248s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "pPyOlGvWoXA", "text": "reasons why we end up using ANS. Yeah, maybe also jumping in here: if you look at the Brendan Frey and Geoff Hinton paper, it managed to do compression with a VAE and arithmetic coding, but it incurred a bunch of overhead, because I think arithmetic coding acts like a queue rather than a stack, and so there's an overhead incurred. Then if you look at the Townsend et al. paper, you can see how to make it all compatible through using ANS and get much better compression efficiency than the previous paper that uses arithmetic", "start_timestamp": "03:08:06", "end_timestamp": "03:08:49", "start_second": 11286, "end_second": 11329, "url": "https://www.youtube.com/watch?v=pPyOlGvWoXA&t=11286s", "title": "L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/pPyOlGvWoXA/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "Hi everyone, welcome to lecture 12 of Deep Unsupervised Learning, Spring 2020. Today we'll cover representation learning in reinforcement learning. First, before I start, I want to give a big thank you to the many colleagues and friends who have contributed to this lecture through sharing their insights, illustrations, slides, and videos that all very directly contributed to what I'll be sharing in this lecture here today. Thank you. So the entire class has been about
unsupervised learning; today we're actually going to look at how", "start_timestamp": "00:00:00", "end_timestamp": "00:00:41", "start_second": 0, "end_second": 41, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=0s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "unsupervised learning and reinforcement learning can be brought together in a way that uses unsupervised learning to make reinforcement learning more efficient. But we haven't covered reinforcement learning yet in this class, and so what I'm first going to do is step through some of the very basics of reinforcement learning. Obviously I can't cover everything, it could be a course in itself, but we'll go through some of the basics and the recent successes, and then from there look at successes where unsupervised learning and reinforcement learning are brought", "start_timestamp": "00:00:41", "end_timestamp": "00:01:09", "start_second": 41, "end_second": 69, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=41s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "together. So what is reinforcement learning? Reinforcement learning is a problem setting where you have an agent; the agent is supposed to take actions in the world, and as the agent takes actions, the world will change. For example, the world could be the robot body and the environment around the robot body. After the world has changed because of the agent's action, this process repeats over and over and over, and the goal for the agent is to maximize the reward collected in the process. For example, imagine our agent is supposed to control a self-driving car", "start_timestamp": "00:01:09", "end_timestamp": "00:01:42", "start_second": 69, "end_second": 102, "url":
"https://www.youtube.com/watch?v=YqvhDPd1UEw&t=69s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "then the reward might be positive for reaching the destination and negative for getting into an accident. Maybe our agent is a robot chef, and then the reward might be positive for a good meal, even more positive for an excellent meal, and negative for making a total mess in the kitchen. The goal in reinforcement learning is for this agent to figure out, through its own trial and error, how to get high reward. So as the human designer, you give a specification with the statement 'I'd like high reward,' and you say reward is high for the things I just described", "start_timestamp": "00:01:42", "end_timestamp": "00:02:18", "start_second": 102, "end_second": 138, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=102s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "and then the agent figures out on its own how to achieve that. Another example could be a video game: the score in the video game could be the reward, and so the agent is supposed to figure out how to play that game to maximize reward. What are some challenges in reinforcement learning? Let me contrast it with supervised learning. In supervised learning, what happens is you have an input and a corresponding output, and the way you supervise your learning system is by saying, for this input that should be the output, for this other input that should be the output, and so", "start_timestamp": "00:02:18", "end_timestamp": "00:02:49", "start_second": 138, "end_second": 169, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=138s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring
2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "forth. In reinforcement learning, your robot chef might be busy in the kitchen for half an hour, comes out with its meal, and you might say good or bad meal, but that's not reflective of the last action the robot chef took; it's reflective of that whole half hour of working in the kitchen that somehow resulted in a high reward or a low reward. Now, when that robot chef cooks multiple times, and sometimes has good outcomes, sometimes bad outcomes, you could start looking at what's common between the good outcomes, what's common", "start_timestamp": "00:02:49", "end_timestamp": "00:03:21", "start_second": 169, "end_second": 201, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=169s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "among the bad outcomes. That process of teasing apart what might have positively contributed or negatively contributed, that's solving the credit assignment problem, and it's one of the big challenges for a reinforcement learning agent. Another big challenge is stability: let's say you have an agent learning to fly a helicopter. Well, helicopters are naturally unstable, so if you're not careful during the learning process you might crash your system, and that might just stop the whole thing. Another big challenge is", "start_timestamp": "00:03:21", "end_timestamp": "00:03:49", "start_second": 201, "end_second": 229, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=201s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "exploration. For a reinforcement learning agent to learn to do things, if it
actually doesn't know how to do anything yet, it has to try things it's never done before; it has to explore. And this poses many, many challenges: when you try things you never tried before, how do you even know what you should be trying? There could be so many things to try; what's more interesting, what's less interesting? It also brings back the stability challenge: how do you make sure that when you try something you don't destroy the system, and so forth. Now, one example", "start_timestamp": "00:03:49", "end_timestamp": "00:04:19", "start_second": 229, "end_second": 259, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=229s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "of reinforcement learning that many people would know in real life is how to train a dog. When you train a dog, the dog is the reinforcement learning agent, and you as a human provide rewards: you might give the dog positive reward when it does well and negative reward when it does poorly, and you don't control what the dog does. It's not supervised learning; you cannot tell the dog do this, do that, do that, as if all its muscles will follow your commands. No, the dog will just do some stuff, and you'll say good or bad depending on how happy", "start_timestamp": "00:04:19", "end_timestamp": "00:04:54", "start_second": 259, "end_second": 294, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=259s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "you are with what the dog did. So one of the things that we want to do in today's lecture is give you a bit of an overview of successes of reinforcement learning, but then also from there look at limitations and take a look at how representation learning
can help out. So one of these successes, probably the success that put deep reinforcement learning on the map, was in 2013 when DeepMind came out with the DQN results. DeepMind showed that it's possible for a neural network to learn to play a wide range of Atari games from its own trial and error", "start_timestamp": "00:04:54", "end_timestamp": "00:05:42", "start_second": 294, "end_second": 342, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=294s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "Now this was a big surprise. Until then, if you looked at reinforcement learning results, they would typically be on relatively small, simple environments, and the input would not be images; the input to the agent would be a very well-crafted representation of whatever world the agent is in, summarized in a small set of features or state variables. And so, big surprise, all of a sudden reinforcement learning works with pixels as input. From there a lot of progress was made, of course, including the progress listed on this slide here, a lot of it coming out", "start_timestamp": "00:05:42", "end_timestamp": "00:06:17", "start_second": 342, "end_second": 377, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=342s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "of DeepMind, Berkeley, and OpenAI, and much, much higher scores and faster learning have since been achieved on the Atari benchmark. It wasn't just Atari: DeepMind also showed you can even learn to play the game of Go, a long-standing challenge; many people thought it would take another 20 years if you had asked them in 2013 or 2014, but sure enough, in 2015 a computer beat the world champion in Go. The first version, AlphaGo, was
a combination of imitation learning and reinforcement learning; the second version, AlphaGo Zero, was pure reinforcement", "start_timestamp": "00:06:17", "end_timestamp": "00:06:52", "start_second": 377, "end_second": 412, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=377s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "learning: it just learned from playing against itself and over time became better than the best human players. Then there was a big result in more advanced video game play: OpenAI showed that the game of Dota 2 can be mastered by reinforcement learning. In 2017 it was shown that a reinforcement learning agent can master the one-on-one version of the game and beat some of the best human players, and then later it was shown that reinforcement learning enables playing, not necessarily beating the human world champion team just yet, but", "start_timestamp": "00:06:52", "end_timestamp": "00:07:34", "start_second": 412, "end_second": 454, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=412s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "at a very competitive level with some of the best human teams, through pure reinforcement learning. At Berkeley, in parallel, we were exploring reinforcement learning for robotic control, and so here is actually some reinforcement learning in action. Thus far we just talked about results; what does it look like when you see it in action? Here we see an agent that's learning to control: this character is learning to run, and we give it positive reward the more it moves to the right, and negative reward for falling to", "start_timestamp": "00:07:34", "end_timestamp": "00:08:06",
"start_second": 454, "end_second": 486, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=454s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "the ground. What we see is that over time it figures out a strategy, a control policy, that gets it to run off to the right. Now, the beauty here is that it's able to learn this in a way that is not specific to this two-legged robot, meaning that we can take the exact same deep reinforcement learning code and run it on the four-legged robot, and it'll learn to control this four-legged robot, and in fact it can also learn to play Atari games. The exact same code, in this case trust region policy optimization, TRPO, combined with generalized advantage", "start_timestamp": "00:08:06", "end_timestamp": "00:08:58", "start_second": 486, "end_second": 538, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=486s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "estimation, GAE, is able to learn to master all these skills. In this case the robot is learning to get up; the reward is based on how close the head is to standing head height: the closer the head is to standing head height, the higher the reward. This was then generalized to a much wider range of skills, so what you see here is a reinforcement learning agent that has mastered a very wide range of locomotion skills. And then here we see it in action on a real robot: this is BRETT, the Berkeley Robot for the Elimination of Tedious Tasks, because", "start_timestamp": "00:08:58", "end_timestamp": "00:09:33", "start_second": 538, "end_second": 573, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=538s", "title": "L12 Representation Learning for Reinforcement Learning
--- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "we humans don't want to do the tedious tasks; we want robots to do those tedious tasks for us. What we see here is this robot learning to put the block into the matching opening, and indeed, over time it figures out how to get the block into the matching opening. What it's doing under the hood is learning a vision system and a control system all together to learn to complete this task. What's the catch in all of this? Data inefficiency. While mastery was achieved in Atari, in Go, in robot locomotion, robot manipulation", "start_timestamp": "00:09:33", "end_timestamp": "00:10:07", "start_second": 573, "end_second": 607, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=573s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "and so forth, this mastery requires an enormous amount of trial and error, and so the big question is: can we somehow bring that down and reduce the amount of trial and error that's required to master these skills? It turns out, I believe, and many others believe, that representation learning can play a big role in getting there, in getting to much more efficient reinforcement learning. It's not something that is fully understood yet; this is a domain with a lot of room for more research. And so what we'll cover today is a pretty wide range of highlights of", "start_timestamp": "00:10:07", "end_timestamp": "00:10:47", "start_second": 607, "end_second": 647, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=607s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "relatively recent results people have
achieved by looking at combining representation learning with reinforcement learning to make reinforcement learning more efficient. We'll look at four directions: auxiliary losses, state representation, exploration, and unsupervised skill discovery, and we'll unpack these as we go along. One thing you'll notice is that it's not some kind of linearly building-up thing that at the end culminates in what is the most important piece; really, what we're going to be covering is a wide", "start_timestamp": "00:10:47", "end_timestamp": "00:11:20", "start_second": 647, "end_second": 680, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=647s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "range of highlights that each have their own interesting aspects, and probably a solution that will be the final solution in the future will combine ideas from many of the research results that we cover today into one system. So let's start with auxiliary losses. The paper I want to start with is the UNREAL paper by DeepMind. The idea here is that reinforcement learning agents can be very data hungry, especially when there are only sparse rewards, and the question is: can we make an agent learn more efficiently by having auxiliary
self supervising those things that are available in the environment that the agent could try to learn from even if it's not exactly rewards signal so the unreal agent which stands for unsupervised reinforced planning and auxiliary learning showed it tenants improvement in data", "start_timestamp": "00:12:02", "end_timestamp": "00:12:38", "start_second": 722, "end_second": 758, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=722s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "efficiency over a three C which was can be standard RL approach deep mind abuses at the time on the 3d deep mind lab which is a navigation first-person vision navigation task and sixty percent improvement the final scores so faster learning and converging to better final scores so what does the architecture look like when we see at the top here in the middle is the base a QC agent so again this is not a reinforcement in lecture let me give you a little bit of background what's going on here in reinforced when you have experiences the", "start_timestamp": "00:12:38", "end_timestamp": "00:13:14", "start_second": 758, "end_second": 794, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=758s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "agent without any given time and has to make a decision taking the current input process it and then through the policy output try to make a decision on what to do and through the value function output try to predict how much reward is gonna get in the future from this moment in time onwards so there's two output predictions here and that's the standard base a through C editor already predicts two things how much reward that's V 
value, the cumulative reward over time that's coming, and the policy pi, the action it should take. So both of those are", "start_timestamp": "00:13:14", "end_timestamp": "00:13:46", "start_second": 794, "end_second": 826, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=794s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "outputs of the same network; that's the basis of this agent. Then all the data gets put into a replay buffer and it's reused in other ways, and the same neural net that is the A3C agent is given multiple heads, even more heads, so it has to make even more predictions. By giving it additional prediction tasks, if these prediction tasks are related to learning to solve the original problem, which is achieving high reward, then hopefully it'll learn something that will transfer over to the real task we", "start_timestamp": "00:13:46", "end_timestamp": "00:14:19", "start_second": 826, "end_second": 859, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=826s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "care about, and it will be able to learn the real task more quickly. So what are these auxiliary tasks? The first one is auxiliary Q functions. The idea here is you give additional heads to the neural network that are Q functions; a Q function is predicting, for the current situation, how much reward will I get in the future if I currently take a specific action, so for each possible action I'll predict how much reward might I get. Now, the interesting thing about Q function learning is that you can do Q function learning off-policy,", "start_timestamp": "00:14:19", "end_timestamp": "00:14:52",
"start_second": 859, "end_second": 892, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=859s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "meaning you can try to solve one task but in the meantime do Q-learning against another task that has a different reward function, and that's the key idea here: we're going to take reward functions that are not the ones we care about, auxiliary reward functions that are easy to automatically extract from the environment, and do Q-learning against those reward functions. By doing so, the core structure, the core of the neural net, will learn things that are also useful for the task we actually care about. Okay,", "start_timestamp": "00:14:52", "end_timestamp": "00:15:23", "start_second": 892, "end_second": 923, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=892s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "so let's look at that a little deeper here. The base A3C agent is the core thing, and sparse rewards means you just get this one reward, the real reward that we care about. That's not enough; we want more rewards, and that's exactly what this Q function thing is going to do: we're going to define many other rewards, and these many other rewards are going to allow us to learn from a lot more signal than if you only had your one reward. Okay, so this reward function that was defined here by", "start_timestamp": "00:15:23", "end_timestamp": "00:16:00", "start_second": 923, "end_second": 960, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=923s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC
Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "the authors of the paper is called a pixel control reward function. What they do is they turn the agent's first-person vision of the maze into a coarser, grayscale representation of what it's seeing, and you get rewarded in this auxiliary reward task for how much you're able to change these coarse pixel values. So what does that mean? If your agent turns into a direction where things are much brighter than the direction it's facing right now, then the pixel values will change a lot, and that would be a high-reward", "start_timestamp": "00:16:00", "end_timestamp": "00:16:39", "start_second": 960, "end_second": 999, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=960s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "thing, or the other way around: if right now things look very bright in a certain pixel, and it turns and makes that pixel darker, that would be a high reward. Again, that's not what we actually care about, but it's a very simple auxiliary loss that we can impose and that we can run Q-learning against. It also turns out that this is the one that mattered the most for improving the learning; there are other auxiliary losses, but this is the one that matters the most. The intuition here for why this one matters the most is that in Q", "start_timestamp": "00:16:39", "end_timestamp": "00:17:10", "start_second": 999, "end_second": 1030, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=999s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "learning you are learning about the effect of your
actions, and because you're learning about the effect of many possible actions you could take: what would be the Q value if I turn to the left, what would be the Q value if I turn to the right, what would be the Q value if I look up, look down, you're really learning something about how the world works, and not just how the world works but how your actions interact with what happens in the world. Another auxiliary loss is reward prediction. What you do here is, for the current policy that you're executing,", "start_timestamp": "00:17:10", "end_timestamp": "00:17:40", "start_second": 1030, "end_second": 1060, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1030s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "you can try to predict, at future time steps, how much real reward you're going to get. So maybe you get rewards for eating an apple, and so when you see an apple in the distance, you should be able to predict that if you keep running forward for three more steps you'll get that apple; learning to predict that in three steps you're going to get that apple is an auxiliary loss that's introduced here. And then the last auxiliary loss is value function replay, so it's saying: from the current time, how much reward am I going to get over the next", "start_timestamp": "00:17:40", "end_timestamp": "00:18:10", "start_second": 1060, "end_second": 1090, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1060s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "few steps? This is already present in the base A3C agent, actually. All right, so if you look at results here, this is DeepMind Lab, which is that 3D navigation environment where you collect apples and other fruits as a reward, and we can look at different approaches. The bottom curve we are looking at is the base A3C agent, that's the dark blue bottom curve, and the hope is that by having auxiliary losses we can do better. If we incorporate all the ideas that we just covered, you get the UNREAL agent, you get this top curve here, and now", "start_timestamp": "00:18:10", "end_timestamp": "00:18:49", "start_second": 1090, "end_second": 1129, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1090s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "
reward we can look at different approaches so the bottom curve we are looking at is the base A3C agent that's the dark blue bottom curve and the hope is that by having auxiliary losses we can do better if we incorporate all the ideas that we just covered you get the UNREAL agent which is this top curve here and now", "start_timestamp": "00:18:10", "end_timestamp": "00:18:49", "start_second": 1090, "end_second": 1129, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1090s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "what we see here is various ablations to see well which one of these matters the most which ones might not contribute very much what we see is if you just do the pixel control auxiliary loss that's the yellow curve you get almost all the juice of these auxiliary losses but if in addition you have the reward prediction and the value replay you get yet a little better performance so actually another thing I want to highlight here the text at the top of the graph here says average of the top three agents so there's a way they evaluate things in the", "start_timestamp": "00:18:49", "end_timestamp": "00:19:23", "start_second": 1129, "end_second": 1163, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1129s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "paper usually in reinforcement learning because you need to explore and there's a lot of randomness in exploration the results are somewhat unpredictable meaning that some runs will be more lucky than other runs it'll be high variance and so in this paper they pick the top three runs you might say why the top three that's a little crazy shouldn't you look at the average 
performance or something like that yeah you could argue you should look at the average performance it's what's done in most papers but their thinking here", "start_timestamp": "00:19:23", "end_timestamp": "00:19:50", "start_second": 1163, "end_second": 1190, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1163s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "was imagine what you're interested in is finding a policy and you have maybe a budget of doing 20 runs then maybe what matters is what's the best one among those 20 runs or a little more robust version of that you know how do the best three runs do and so an approach where the best three runs are consistently great is an approach where if you can afford 20 runs total you'll have a good one among them and so it's kind of a funny way to score things but it happens to be how they do things in this paper", "start_timestamp": "00:19:50", "end_timestamp": "00:20:25", "start_second": 1190, "end_second": 1225, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1190s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "so another thing they compared with which we as unsupervised learning students of course care about if you do pixel control why not do feature control why not Q functions for later layers in the network for later layers in the network I want to see if I take an action you know can I change the feature values in maybe layer five or layer six instead of just the pixel values well we see A3C plus feature control in green and A3C plus pixel control in orange you can see the pixel control actually works better", 
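The "average of the top three agents" metric described above is easy to state in code (an illustrative sketch, not DeepMind's evaluation script):

```python
def top_k_mean(run_scores, k=3):
    """Score an algorithm by the mean of its k best runs, mirroring
    the 'average of top 3 agents' metric: with high-variance RL
    training, the best few runs out of a fixed budget may be what
    you care about, rather than the mean over all runs."""
    return sum(sorted(run_scores, reverse=True)[:k]) / k
```

With a budget of five runs scoring [10, 50, 30, 90, 70], the reported number would be the mean of 90, 70, and 50.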
"start_timestamp": "00:20:25", "end_timestamp": "00:21:01", "start_second": 1225, "end_second": 1261, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1225s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "of course this might depend on the environment this might depend on the exact architecture but the experiments that were done in this paper showed that pixel control actually slightly outperformed feature based control and again control here means the auxiliary loss using the auxiliary Q functions the ultimate reward function that you're actually optimizing for and scored against on the vertical axis here is the real reward function of collecting the fruits in the maze then here are a couple of unsupervised RL", "start_timestamp": "00:21:01", "end_timestamp": "00:21:31", "start_second": 1261, "end_second": 1291, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1261s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "baselines so what are some other things we're looking at again the pixel control shown in yellow that's the top curve in both plots then input change prediction just try to have an auxiliary loss that says can I predict how what I see will change as a function of my action so that's really learning a dynamics model that's shown in blue and then shown in green is input reconstruction that's a bit like an autoencoder I have an input make a latent representation reconstruct it back out and so what we see is that these", "start_timestamp": "00:21:31", "end_timestamp": "00:22:04", "start_second": 1291, "end_second": 1324, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1291s", "title": "L12 
Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "things that might seem maybe more natural and more advanced like input reconstruction and input change prediction are actually less effective than pixel control and of course I mean there could be many many factors at play here but the high level intuition that most people have is that the reason these auxiliary Q functions work so well is that as we work with auxiliary Q functions we are actually learning about not just how the world works which is the input change", "start_timestamp": "00:22:04", "end_timestamp": "00:22:44", "start_second": 1324, "end_second": 1364, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1324s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "prediction but how we are able to affect what happens in the world and that's really what matters for learning to achieve high reward in the task you care about another domain they looked at rather than first-person maze navigation is Montezuma's Revenge a famous Atari game where exploration is very difficult there are many many rooms in every room there's complicated things you have to do collecting keys jumping over things you make one mistake you're dead and you start back at the beginning", "start_timestamp": "00:22:44", "end_timestamp": "00:23:16", "start_second": 1364, "end_second": 1396, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1364s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", 
"text": "the plot shows you that UNREAL outperforms quite a bit A3C at the bottom I think in black is really not getting anywhere whereas the UNREAL approach is actually doing a lot better now let's take a look at this maze navigation agent in action so this is DeepMind Lab let's take a look at the agent playing you see the agent collecting the apples here and not collecting the lemons apparently it's not good in this particular game to collect the lemons and so this agent has learned to navigate mazes the way it can", "start_timestamp": "00:23:16", "end_timestamp": "00:24:02", "start_second": 1396, "end_second": 1442, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1396s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "do that is because it has an LSTM which gives it memory and so it can remember places it's been before and things it has tried before to more efficiently find the next new location where there might be a fruit it hasn't collected yet and so the reason I'm showing these specific results here is because in the space of well reinforcement learning in general but especially representation learning in reinforcement learning the evaluations aren't all in the same type of environments there's a lot of variation in how these things get evaluated", "start_timestamp": "00:24:02", "end_timestamp": "00:24:37", "start_second": 1442, "end_second": 1477, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1442s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "and so having a good feel for what these experiments actually look like is important to get a sense for how advanced this method might really be and so we see here as well this first person 
navigation well that's pretty complicated so this might be a pretty advanced method at play here here we see a bit of an inside look into the agent itself where on the top right you see the pixel control Q values so depending on which action I take there's four actions available how high will my Q value be which really", "start_timestamp": "00:24:37", "end_timestamp": "00:25:10", "start_second": 1477, "end_second": 1510, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1477s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "corresponds to an understanding of how the world works what will change in what I see as a function of the actions I take all right so to summarize the UNREAL loss is the original A3C loss which is a policy gradient plus value function loss then there is the value replay loss which does value prediction on replayed experience and then there's the pixel control Q functions for the different coarse pixel cells in the view of the agent and then finally there's the reward prediction loss one small note on the reward prediction they ensured there was", "start_timestamp": "00:25:10", "end_timestamp": "00:25:48", "start_second": 1510, "end_second": 1548, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1510s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "an equal portion of rewarding and non rewarding examples so a balanced training set and the pixel control did split the view into a 20 by 20 grid of cells all right so in the Atari results we see that UNREAL also helps over A3C but not nearly as much as in the DeepMind Lab environments but still a significant improvement the vertical axis here is human normalized performance 
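The summary of the UNREAL objective given here amounts to a weighted sum of scalar losses: the base A3C loss plus value replay, pixel control, and reward prediction. A minimal sketch; the lambda coefficients are hypothetical hyperparameters, not the paper's values:

```python
def unreal_loss(a3c_loss, value_replay_loss, pixel_control_loss,
                reward_prediction_loss,
                lambda_vr=1.0, lambda_pc=1.0, lambda_rp=1.0):
    """Total UNREAL objective: the base A3C loss plus the three
    auxiliary losses, each scaled by its own coefficient."""
    return (a3c_loss
            + lambda_vr * value_replay_loss
            + lambda_pc * pixel_control_loss
            + lambda_rp * reward_prediction_loss)
```

Setting a coefficient to zero recovers the ablations discussed above, where individual auxiliary losses are switched off.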
the way DeepMind is evaluating this is they look at human performance in Atari for every game that's gonna be a different score because every game is", "start_timestamp": "00:25:48", "end_timestamp": "00:26:23", "start_second": 1548, "end_second": 1583, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1548s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "very different scoring system and then they normalize it across games into a total score across all games of how well the agent learns so it is across many many Atari games in terms of on average how fast the learning curve goes up so you cannot overfit onto one game or the other and do well on this score you need to be able to learn well on all of the games to do well on this score alright and then also look here at robustness because there's many agents being trained and these top three curves on", "start_timestamp": "00:26:23", "end_timestamp": "00:26:55", "start_second": 1583, "end_second": 1615, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1583s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "this plot expose the performance of all the agents here as a little evaluation of robustness and we'll see that you know there's a bit of decaying performance not all agents learn equally well but it's not that there's just one that does well and then nobody else does well so this looks pretty good okay so that's the first thing I wanted to cover which is auxiliary losses and UNREAL is a very good example of that there's more work happening all the time in this space but that was kind of the big initial result that showed this is something that can", "start_timestamp": 
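Per-game human normalization as described here is typically computed against a random-play baseline and a human reference score (a standard formulation; the specific per-game reference values come from the papers):

```python
def human_normalized_score(agent, random_baseline, human):
    """Map a raw game score to a scale where 0.0 is random play and
    1.0 is human-level, making wildly different per-game scoring
    systems comparable before averaging across games."""
    return (agent - random_baseline) / (human - random_baseline)
```

An agent scoring 550 in a game where random play gets 100 and a human gets 1000 would thus be at 50% human-normalized performance.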
"00:26:55", "end_timestamp": "00:27:24", "start_second": 1615, "end_second": 1644, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1615s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "be very beneficial let's switch gears to state representation and this will have many many subsections it turns out the first one we have is how to go from observation to state so the kind of paper that most people might be most familiar with is the world models paper by David Ha and collaborators and here's a kind of simple diagram showcasing what they investigated so what you have is you have an environment the environment leads to an observation in this case pixel values and that can be very high dimensional for example take a", "start_timestamp": "00:27:24", "end_timestamp": "00:28:03", "start_second": 1644, "end_second": 1683, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1644s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "100 by 100 image that's 10,000 pixels that's a very high dimensional input then they say well that's kind of unwieldy we want our agent to work on something lower dimensional because we know under the hood there is a state of the world and the state of the world might be summarized in just a small set of numbers maybe 10 20 30 numbers is enough so my agent shouldn't have to operate shouldn't do reinforcement learning on that 10,000 number input it should be doing reinforcement learning on this 30 number input and might be able to learn a lot", "start_timestamp": "00:28:03", "end_timestamp": "00:28:32", "start_second": 1683, "end_second": 1712, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1683s", "title": "L12 
Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "more quickly because credit assignment should be easier if we only have to look at 30 numbers instead of 10,000 numbers and so the idea is to use a variational autoencoder which we of course covered earlier in this course to find a latent representation from which we can reconstruct the observation and then we'll use the latent representation as the input to the reinforcement learning agent which now hopefully will be more efficient what they also do in this approach is train a recurrent neural network that learns to", "start_timestamp": "00:28:32", "end_timestamp": "00:29:06", "start_second": 1712, "end_second": 1746, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1712s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "predict the next latent state so what's gonna happen here is we're gonna learn a way to simulate how the world works in this recurrent neural network but not by directly simulating in pixel space but by simulating in its latent space which can go a lot faster since it's a lot lower dimensional we don't have to render the world at all times we can just simulate how the latent variables evolve over time of course this will also depend on the actions taken so the model takes the action and previous latent state and generates the next latent", "start_timestamp": "00:29:06", "end_timestamp": "00:29:41", "start_second": 1746, "end_second": 1781, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1746s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} 
{"video_id": "YqvhDPd1UEw", "text": "state and of course you want it to be the case that that matches up with the actual next latent state that your VAE would output when you get to observe the next observation and the action actually gets fed into the environment also and so you have kind of two paths here the actual environment path and the RNN prediction path and you hope that they line up you're training really to make these line up the thing in blue is called the world model it's a thing that looks at the latent state z and by", "start_timestamp": "00:29:41", "end_timestamp": "00:30:15", "start_second": 1781, "end_second": 1815, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1781s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "looking at the action and latent state turns it into the next latent state alright so they looked at this in the context of car racing so on the left you see the environment the terrain the roads you're supposed to stay on the road here and the way the reward is set up is to race down this road as quickly as possible this is from pixel input so you get a lot of numbers as input and somehow you hope that gets turned into effectively an understanding of roughly where the road is where your car is on that road and which direction it's", "start_timestamp": "00:30:15", "end_timestamp": "00:30:46", "start_second": 1815, "end_second": 1846, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1815s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "facing your car and then understand how to steer it to go down the road as quickly as possible the procedure they followed is to collect 10,000 
rollouts from a random policy and then train a VAE to encode frames into z space just a thirty two dimensional z space so low dimensional compared to the pixel input space then they train an RNN model to predict the next latent state from the previous latent state and action and there's this additional hidden state inside the RNN then they use evolution but it's just one of many possible RL", "start_timestamp": "00:30:46", "end_timestamp": "00:31:23", "start_second": 1846, "end_second": 1883, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1846s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "approaches to train a linear controller to maximize expected reward of a rollout so steps one two and three are the unsupervised learning that can happen ahead of time and then you can run RL on that representation that you've learned so one thing that's real interesting here is that remember the cake analogy where Yann LeCun would say oh well you know reinforcement learning is the cherry on the cake which is tiny compared to the cake and why is reinforcement learning the cherry because there's not a lot of reward it's just a small amount of reward signal", "start_timestamp": "00:31:23", "end_timestamp": "00:31:59", "start_second": 1883, "end_second": 1919, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1883s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "while the cake itself is supposed to be abundant signal there's a lot of signal coming from self supervised learning and that's the foundation of the cake and so if you look at what's happening here the VAE neural network has four million parameters the RNN dynamics model network has four hundred thousand 
parameters and then the controller the thing that is learned with RL only has eight hundred something parameters there's a massive difference in that RL only has to learn a small number of parameters which maps to it", "start_timestamp": "00:31:59", "end_timestamp": "00:32:28", "start_second": 1919, "end_second": 1948, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1919s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "only having a lesser amount of signal whereas the self supervised part has to learn most parameters millions of parameters and that's done from abundant unsupervised data okay so here's an example of an input frame 64 by 64 pixels here and a frame reconstruction which kind of roughly matches up not perfectly but it gets the gist then here we have results when we use just z or z and h where h is the RNN hidden state so it shows that it's important that the RNN hidden state captures something important about the world let's look at results so what we", "start_timestamp": "00:32:28", "end_timestamp": "00:33:12", "start_second": 1948, "end_second": 1992, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1948s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "see here in the table is scores obtained with the model described the highest score in this CarRacing environment compared to previous methods obviously in principle pure RL should be able to learn this too but when you limit the amount of time you get to train then using self supervised learning to learn a representation combined with reinforcement learning to learn the controller allows us to get higher scores than what previous methods that were pure RL were able to do so this is the 
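The parameter asymmetry can be made concrete. In the World Models setup the controller is a single affine map from the concatenated [z, h] to actions; assuming the CarRacing sizes (a 32-dim z as stated above, a 256-dim RNN hidden state, 3 actions), that gives (32 + 256) × 3 + 3 = 867 parameters, consistent with the "eight hundred something" mentioned here. The hidden-state and action sizes are assumptions for illustration:

```python
import numpy as np

def controller_param_count(z_dim=32, h_dim=256, action_dim=3):
    """Parameters of the linear controller a = W [z; h] + b."""
    return (z_dim + h_dim) * action_dim + action_dim

def controller_act(W, b, z, h):
    """The entire RL-trained policy: one affine map, squashed to [-1, 1]."""
    return np.tanh(W @ np.concatenate([z, h]) + b)
```

Everything upstream of this tiny map (the VAE and the RNN) is trained without any reward signal.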
model we looked at before the one experiment we saw", "start_timestamp": "00:33:12", "end_timestamp": "00:33:53", "start_second": 1992, "end_second": 2033, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1992s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "so far had the car racing environment the second experiment where you have to dodge things being shot at you is in a VizDoom environment so the input will look something like we see on the left but sometimes you'll see fireballs coming at you when they're shooting at you and you've got to dodge those fireballs to stay alive and get high reward same approach train a VAE train an RNN world model then a linear controller trained with RL on top of that and so again this linear controller trained on top of", "start_timestamp": "00:33:53", "end_timestamp": "00:34:31", "start_second": 2033, "end_second": 2071, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=2033s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "that is trained in the RNN simulator itself so you don't need to simulate what things will look like rendering is often expensive computationally and if you need to go all the way to rendering to train your policy it'll take a lot longer to do the same number of rollouts here they only need that low dimensional latent space to train the policy so it's called Doom Take Cover here's a higher resolution version of what this looks like if you were to play this game yourself same approach laid out here again", "start_timestamp": "00:34:31", "end_timestamp": "00:35:03", "start_second": 2071, "end_second": 2103, "url": 
"https://www.youtube.com/watch?v=YqvhDPd1UEw&t=2071s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "unsupervised learning does all the stuff at the top here millions of parameters learned then the RL only needs to learn about a thousand parameters again a beautiful illustration of the LeCun cake idea so here's what this looks like one thing to keep in mind here is that sometimes you can have some quirky results where the learned simulator of the world allows you to do things you cannot do in the real world and so that's something to look out for that they're highlighting on their", "start_timestamp": "00:35:03", "end_timestamp": "00:35:43", "start_second": 2103, "end_second": 2143, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=2103s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "website so if you go look at the kind of normal temperature higher temperature things you'll see some differences there so here are the results depending on the temperature we have different discrepancies so for low temperature we see a very high virtual score but the actual score is not so great for higher temperatures we have a closer match between the virtual score and the actual score so actually I should quickly highlight what we mean with temperature here so typically in RL you have a policy that has stochastic output", "start_timestamp": "00:35:43", "end_timestamp": "00:36:29", "start_second": 2143, "end_second": 2189, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=2143s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": 
"https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "so you would have a distribution over actions and that distribution over actions can have a temperature parameter in terms of how much you favor your favorite action and so that temperature parameter if you make it small close to zero then you'll almost always pick your most preferred action and then you end up with a close to deterministic policy if we have a close to deterministic policy you can often exploit quirks in your simulator whereas if you have some randomness in your policy you have a higher temperature", "start_timestamp": "00:36:29", "end_timestamp": "00:37:03", "start_second": 2189, "end_second": 2223, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=2189s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "allowing yourself a little bit of randomness then you cannot exploit the very specific quirks in the learned simulator because the randomness will prevent you from being able to go down that very very quirky path where you all of a sudden get a high score even though you know really you can't do that in the real world but your simulator has a small little bug and you won't be able to trigger that small little bug and that's what's going on here with temperature at higher temperature we are not able to exploit tiny little bugs in the learned simulator we", "start_timestamp": "00:37:03", "end_timestamp": "00:37:32", "start_second": 2223, "end_second": 2252, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=2223s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "have to learn something more robust and that leads to a better match between 
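Temperature as described here is just a divisor applied to the action preferences before the softmax (a generic sketch, not the paper's exact parameterization):

```python
import numpy as np

def action_distribution(preferences, temperature):
    """Softmax with temperature: low temperature concentrates mass on
    the favorite action (near-deterministic, able to exploit quirks of
    a learned simulator); high temperature keeps exploration noise."""
    scaled = preferences / temperature
    scaled = scaled - scaled.max()   # numerical stability
    exp = np.exp(scaled)
    return exp / exp.sum()
```

With preferences [1.0, 2.0], temperature 0.01 puts essentially all mass on the second action, while temperature 100 makes the two actions nearly equally likely.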
performance in the real environment relative to the learned simulator ok so that was the world models paper by David Ha and collaborators now one question you could ask yourself if we're going to learn a world model we're going to learn a simulator some latent space simulator could it make sense to try to learn a latent space such that control becomes easier what do I mean with that so if you look at the control literature some control problems are easy to solve some", "start_timestamp": "00:37:32", "end_timestamp": "00:38:15", "start_second": 2252, "end_second": 2295, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=2252s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "control problems are very hard to solve and maybe we can map our pixel observations and world dynamics in pixel space into latent space dynamics that satisfy certain properties that make the resulting control problem easier to solve a good example of this is linear dynamical systems if you have a linear dynamical system then the control problem tends to be relatively straightforward to solve so that is what the paper we're going to cover here is going to do hold on give me one second here", "start_timestamp": "00:38:15", "end_timestamp": "00:38:58", "start_second": 2295, "end_second": 2338, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=2295s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "[Music] let me cover something else here first so one thing that might happen is as you train the world model on your randomly collected data and then train your policy and test it in the real world it might not always work the reason it might not work is 
because the randomly collected data might not have been interesting enough to cover the parts of the space where you would get high reward and so what you'd then want to do is iterate this process at this point you effectively have a model-based reinforcement", "start_timestamp": "00:38:58", "end_timestamp": "00:39:43", "start_second": 2338, "end_second": 2383, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=2338s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "learning procedure you collect data you learn a model you find a policy in the learned model you deploy that policy you collect new data and improve your world model and repeat so that's what they did and this shows results for the cartpole swing up and so after about twenty iterations of this it was able to learn to swing it up now a couple of other world models papers there's action-conditional video prediction using deep networks in Atari games at the top here worth checking out model-based reinforcement learning for Atari is another one", "start_timestamp": "00:39:43", "end_timestamp": "00:40:21", "start_second": 2383, "end_second": 2421, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=2383s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "worth checking out and then learning latent dynamics for planning from pixels the PlaNet paper which we'll look at a little bit later also so if you want to look more closely at the specifics of what was covered there's a really nice website worldmodels.github.io which has the code and many demos for you to play with to play with what the latent variables are actually doing in the VAE and so forth for these algorithms so I highly recommend 
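The iterate loop just described (collect data, fit the model, plan, redeploy) can be sketched generically; the component functions here are placeholders for the real pieces, not any paper's API:

```python
def model_based_rl(collect, learn_model, plan, n_iterations=20):
    """Alternate between gathering data with the current policy
    (random at first), refitting the world model on everything seen
    so far, and planning a new policy inside the learned model."""
    data, policy = [], None
    for _ in range(n_iterations):
        data += collect(policy)
        model = learn_model(data)
        policy = plan(model)
    return policy
```

Each pass grows the dataset with trajectories the improved policy can reach, which is what fixes the "random data never saw the rewarding states" failure mode.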
And here is a video of the Doom take-cover agent in action: you get these fireballs [00:41:04] coming at you, and the agent has learned to get out of the way so it doesn't get killed. All right. So what we've looked at so far is how to go from observation to state, and then learn a model in that latent state space. Now we're going to look not just at mapping observation to state, but also at mapping state and action to next state. This is what I alluded to earlier when I jumped the gun a little: we're still doing ahead-of-time representation learning that maps pixels to, hopefully, state or [00:41:45] something like state, but now the representation learning already looks at the dynamics. And when we bring dynamics into representation learning, why not learn a representation where the dynamics makes control easy? For example, learn a representation such that in the new representation space the dynamics is linear, because if the dynamics is linear, control becomes easy: you turn your original pixel-space problem, which might be highly nonlinear and very complex to control, into a [00:42:20] latent-space problem that is linear and very simple to solve. That's the main idea behind the Embed to Control paper we're covering now. The environments they considered were pendulum, cartpole, and a three-link arm, but again from pixels: go from pixel input to a learned representation where the dynamics is hopefully close to linear, and hence control becomes easy. The control method they apply is stochastic optimal control, a fairly standard method you can apply to linear systems, and Embed to [00:42:57] Control learns a latent-space model using a variational autoencoder while forcing a locally linear latent-space dynamics model. Once you have a locally linear model, you can apply stochastic optimal control. Here is an example of that in action: once you have such a model, it is very easy to find a controller that brings you to a target, say a stable fixed point. For that controller to work well locally along a trajectory, you need a linear dynamics model there, and in fact the way these methods usually work is that they [00:43:31] tend to linearize the dynamics along trajectories. But if you learn a latent-space model where the dynamics is already linear, you're good to go: the linearization is not an approximation but the actual model you learned, so you get a very good fit of your linear model to the actual dynamics. The costs are often assumed to be quadratic; that assumption puts you in the class of problems called LQR (linear quadratic regulator) problems, sometimes LQG problems if you [00:44:01] also have some stochasticity in there. These problems assume linear dynamics and a quadratic cost, meaning a quadratic penalty for being away from the state where you're supposed to be. OK, so of course we can't just map from our original pixel observations to some space where the dynamics is linear and ignore the real world; the map has to connect back to the real world. So let's look at the complete loss function. First of all, going to the latent space z, you need to be able to [00:44:33] reconstruct the original image, so z should not lose important information about what is happening in the scene.
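To make "linear dynamics plus quadratic cost is easy to control" concrete, here is a minimal finite-horizon LQR solver via the backward Riccati recursion, applied to a double-integrator toy system (this is standard textbook LQR, not code from the lecture):

```python
import numpy as np

# Finite-horizon LQR: dynamics x' = A x + B u, cost sum of x'Qx + u'Ru.
# The backward Riccati recursion yields feedback gains K_t, with u_t = -K_t x_t.
def lqr_gains(A, B, Q, R, horizon):
    P = Q.copy()
    gains = []
    for _ in range(horizon):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]  # gains[t] is the gain to use at time t

# Double integrator: state = (position, velocity), control = acceleration.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.eye(2)
R = np.array([[0.1]])
gains = lqr_gains(A, B, Q, R, horizon=100)

x = np.array([[1.0], [0.0]])  # start 1 unit away from the origin
for K in gains:
    x = A @ x + B @ (-K @ x)  # closed-loop rollout drives x toward zero
```

This is exactly the control machinery Embed to Control wants to apply, which is why it pressures the latent dynamics to be (locally) linear.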
[00:44:33] Then we have the temporal aspect. Ultimately we want to reach a goal, so we want accurate long-term prediction: if, in the model, a sequence of actions achieves the goal, the model's prediction should match what actually happens. So every step along the way we make a prediction, and we use linear models: the prediction must be locally linearizable for all valid control magnitudes, such [00:45:07] that when we optimize our controls, what works in simulation also works in the real world. We're going to force that to be true by learning a model that does this by construction. So let's look at that model. We already have our encoder and decoder. We have our control input u; in controls, u is usually used for the control input, whereas in reinforcement learning, a is often used for the action. Then we have our next latent state z_{t+1}. For this to be meaningful, the same decoder [00:45:42] should be able to reconstruct the image input at time t+1; if that's the case, then the latent-space dynamics was correct. So we learn a locally linear model for that transition to make this work. Then, once we have all that in place, we're pretty much good to go. [00:46:19] We also use this model over a long horizon: we don't just do this over one step, we lay it out over longer horizons, and as we train the model we have multi-step predictions over which we place a loss. You might ask why we need all this. Well, it turns out that if you make a small mistake in your prediction for the next state, you might say: just a small mistake, no big deal. But the problem is that you land in a new latent state on which your model might not have been trained, and when you make the next prediction, to go to time t+2, you're making it from a time-t+1 latent state that you're not familiar with, one that doesn't lie in [00:46:50] your training distribution. Now you might make a not-so-good prediction, making things even worse, and this accumulation of errors over time can lead to divergence. If you want simulations to run over longer horizons, you need some mechanism to avoid that. One mechanism is to explicitly have a multi-step loss; another mechanism is to ensure that your next-state prediction comes from the correct latent distribution.
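The locally linear transition and the multi-step rollout can be sketched as follows. In Embed to Control the matrices A(z), B(z) and offset o(z) are produced by a trained network; here they are fixed stand-ins so the mechanics are visible:

```python
import numpy as np

rng = np.random.default_rng(0)

# E2C-style locally linear latent transition: for the current latent z, a
# network predicts matrices A(z), B(z) and offset o(z); the next latent is
# then a linear function of (z, u). The "network" below is a stand-in that
# returns fixed matrices.
def transition_params(z):
    A = np.eye(len(z)) * 0.9          # stand-in for a learned A(z)
    B = np.ones((len(z), 1)) * 0.1    # stand-in for a learned B(z)
    o = np.zeros(len(z))              # stand-in for a learned o(z)
    return A, B, o

def latent_step(z, u):
    A, B, o = transition_params(z)
    return A @ z + (B @ u).ravel() + o

# Multi-step rollout: predicting several steps ahead and penalizing error at
# every step is the mechanism against compounding errors described above.
def multi_step_rollout(z0, controls):
    zs = [z0]
    for u in controls:
        zs.append(latent_step(zs[-1], u))
    return zs

z0 = rng.normal(size=2)
controls = [np.array([0.5])] * 3
zs = multi_step_rollout(z0, controls)   # z0 plus three predicted latents
```

A training loss would compare each predicted latent (or its decoded image) against the encoding of the actually observed frame at that step.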
[00:47:21] So if you embed into a unit-Gaussian latent space, then after you do your next-state prediction, what you get should also come from a unit-Gaussian distribution, ensuring that when you go from there to the next state you're ready to make your predictions. All right, so those are the components: an autoencoder turning image x into a latent state; accurate long-term prediction of latent states, because we ensure the next latent state comes from the correct distribution, a unit Gaussian, just like our autoencoder forces the latent to be; [00:47:53] and the prediction must be locally linearizable, so we don't use some fancy neural network to predict the next latent state from the current one; it has to be feasible with just a linear prediction. This is the full system they propose, with all the loss terms shown at the bottom. Now let's look at how this works. They applied it to cartpole and showed a good amount of success there, and here are some evaluations showing that Embed to Control can indeed do inverted-pendulum swing-up pretty well, can do cartpole balance, [00:48:29] and can do a three-link arm: good results on the three environments they experimented with. Here is what these environments look like, from raw images: what we're watching is effectively also what the agent sees, after downsampling, so you can look at the environments themselves; here we have cartpole balancing in action. This gives you some idea of how capable this approach is. It does very well; at the same time, clearly these [00:49:10] environments are not nearly as complicated as what we saw with UNREAL on the DeepMind Lab navigation tasks, versus these relatively low-resolution 2D tasks with a single robot that you fully control. Now, in Embed to Control the idea was to have a single linear system, and for your full dynamics that might be difficult. But it has been shown in controls that very often, even though the real system is highly nonlinear, locally it can be linearized. So you might ask: can we instead follow the same [00:49:55] philosophy as in Embed to Control, but instead of learning a single linear model, learn a collection of linear models, in a way that lets us apply time-varying linear control methods, which are also extremely efficient? Maybe that gives a richer set of environments we can solve, because time-varying linear models can cover more than a single linear model can. That's actually what we did in this work called SOLAR, shown in action on the right: we now have different linear models at different [00:50:25] times, and we learn to embed into a space where, at each time, a local linear model can capture the transition very well. So you still get initial random rollouts, followed by learning a representation and latent dynamics, but now not a single linear model but a sequence of linear models. Once we've done that, we can start running the robot: infer where we are in this sequence of linear models, find the sequence of controllers, execute it, get new data, and repeat. This is model-based reinforcement learning [00:51:07] in action, in a setting where we make the latent space such that it's very efficient to find optimal policies. It might not succeed the first time around, so we get the new data, update the representation, infer where we are in terms of linear dynamics models, update the policy, and repeat. This can actually learn, in about 20 minutes, to stack a block, learning from pixels as input. OK, so we've looked at state representation learning (how to go from raw observation to state) learned [00:51:42] ahead of time, and, as in the World Models paper, at learning a dynamics model and a pixels-to-state mapping at the same time, and maybe benefiting from that. Now here's another way to think about this: we could put in some prior information. When we have pixels as inputs, we know that under the hood there is a state, and we know that state is just a bunch of real numbers. So what they did in this paper is say: OK, given the data, we're going to learn a latent [00:52:16] representation computed by a sequence of convolutional filters, say 16 of them, and then we're going to apply a spatial softmax: for each of the 16 filters, find where that filter is most active and output the corresponding coordinates.
softmax and output the corresponding coordinates those coordinates should allow us to reconstruct the original image because they captured essence they recorded to the objects in the scene if you know the courts of the objects we've seen at least as the home you can reconstruct the scene and then once we have learned", "start_timestamp": "00:52:16", "end_timestamp": "00:52:50", "start_second": 3136, "end_second": 3170, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=3136s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "that representation we could learn to control with just a 32 dimensional input rather than needing to take in 240 by 240 input which is much higher dimensional and much more expensive to do reinforced fighting against there's actually capable of learning a pretty wide range of skills here so here is the data collection so it's just randomly moving around collecting data that data is used to train that spatial auto encoder and sir then we learn we look actually we imprinted the goal situation and then we do reinforce", "start_timestamp": "00:52:50", "end_timestamp": "00:53:37", "start_second": 3170, "end_second": 3217, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=3170s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "learning in the feature space the thirty two dimensional feature space and learn in a relatively short amount of time how to push the block to the target location it's not ready you can follow and how to go from image observations to state or something likes hearing kind of interesting method here them it actually doesn't bother reconstructing it says all we need to do is think about physics what is 
[00:53:37] Here's another kind of interesting method: it actually doesn't bother reconstructing at all. It says all we need to do is think about physics. What does physics tell us? Well, we want to find an encoding of the underlying state from the observation, and that will be a big [00:54:15] neural network that turns the image observation into an underlying state. What do we know about state? We know that in physics there will be coordinates, and then derivatives of coordinates, which are the velocities of the objects. So there is a state variable corresponding to position and another corresponding to velocity, and the change in position is the velocity: velocity is the derivative of position. What else do we know? When the world is in different states, we need [00:54:52] different state values. So by default, if random situations are presented to us, we want the embeddings of two different situations to be far apart; that's what the first loss is saying: embeddings should be far apart. But if all you do is pull embeddings far apart, that's not enough to get any structure. So the next loss says that at consecutive times, the position state variables should be close; it also says that between time t and t-1, the velocity state variables should be close, [00:55:29] because velocity cannot change quickly. This is saying that acceleration is going to be small on average, and conservation of momentum and energy is captured in here. The last part says we need a representation where the actions are able to influence what the next state is going to be, so we want correlation between action and state. All right, so they tested this on a couple of environments: collect data in these environments, with pixel input, and then learn a state [00:56:09] representation that doesn't do reconstruction, but just tries to satisfy those invariants that physics suggests are good loss functions, and one gets pretty interesting state representations that way.
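Losses of this flavor might look as follows on a batch of consecutive embeddings; these formulas are illustrative stand-ins for the physics-prior ideas above, not the exact losses of the paper:

```python
import numpy as np

# Illustrative physics-prior ("robotic priors") losses on consecutive latent
# states: consecutive states should be close (slowness), states from different
# situations should be far apart (variability), and different actions should
# produce different state changes (causality). Stand-in formulas only.

def slowness_loss(z_t, z_t1):
    # Penalize large jumps between consecutive embeddings.
    return np.mean(np.sum((z_t1 - z_t) ** 2, axis=1))

def variability_loss(z_a, z_b):
    # Encourage random pairs of embeddings to be far apart.
    return np.mean(np.exp(-np.sum((z_a - z_b) ** 2, axis=1)))

def causality_loss(z_t, z_t1, actions):
    # Penalize similar state changes under different actions.
    dz = z_t1 - z_t
    diff_actions = actions[:-1] != actions[1:]
    pair_dist = np.sum((dz[:-1] - dz[1:]) ** 2, axis=1)
    return np.mean(np.exp(-pair_dist) * diff_actions)

rng = np.random.default_rng(0)
z_t = rng.normal(size=(8, 3))
z_t1 = z_t + 0.01 * rng.normal(size=(8, 3))   # slowly changing states
actions = np.array([0, 1, 0, 1, 0, 1, 0, 1])
total = slowness_loss(z_t, z_t1) + variability_loss(z_t, z_t[::-1]) \
        + causality_loss(z_t, z_t1, actions)
```

The key property is that none of these terms requires reconstructing pixels; they only constrain the geometry of the embedding over time.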
[00:56:09] Here's another example of state representation learning, covered relatively quickly, just to give you a lot of different ideas. We covered the β-VAE in one of the early lectures: the β-VAE is a variational autoencoder that puts a coefficient β in front of the KL loss on the prior. [00:56:41] By making that coefficient β bigger than one, effectively what we're doing is trying to make the latent variables z maximally independent: we're trying to find a disentangled representation of the scene. The thinking here is that if we want to find something like state from raw pixel values, we probably need something that's strongly disentangled, so this builds that prior in. And they show that with this β-VAE you actually get much better transfer: they train a β-VAE, then do Q-[00:57:16]learning with a network that takes in the embeddings from the β-VAE, and compare it with regular Q-learning. On the left we see what happens in the training environments: regular Q-learning and DARLA, which is Q-learning with the β-VAE representation, do about equally well. But when we look at a new, related task that looks very different, doing the representation learning, shown at the bottom right, gets much better performance.
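The β-weighted objective just described can be sketched minimally; the squared-error reconstruction term is a stand-in for whatever likelihood the decoder uses:

```python
import numpy as np

# beta-VAE objective: reconstruction term plus beta times the KL divergence of
# the approximate posterior N(mu, exp(log_var)) from the unit-Gaussian prior.
# beta > 1 pressures the latent dimensions toward independence
# (disentanglement); beta = 1 recovers the ordinary VAE.

def gaussian_kl_to_unit(mu, log_var):
    # KL( N(mu, exp(log_var)) || N(0, I) ), summed over latent dimensions.
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var, axis=1)

def beta_vae_loss(x, x_recon, mu, log_var, beta=4.0):
    recon = np.sum((x - x_recon) ** 2, axis=1)   # stand-in reconstruction term
    return np.mean(recon + beta * gaussian_kl_to_unit(mu, log_var))

x = np.ones((2, 5))
x_recon = np.ones((2, 5)) * 0.9
mu = np.ones((2, 3))          # posterior means pulled away from the prior
log_var = np.zeros((2, 3))
loss_b1 = beta_vae_loss(x, x_recon, mu, log_var, beta=1.0)
loss_b4 = beta_vae_loss(x, x_recon, mu, log_var, beta=4.0)
```

With the posterior away from the prior, the β = 4 loss is strictly larger: the extra weight on the KL is exactly the pressure toward a factorized, disentangled latent.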
[00:57:51] The top left is actually not getting the job done: it's not collecting the yellow targets, whereas in the bottom variant we see it collecting the yellow targets. And what has changed? The walls in the background have changed to pink rather than green, and the ground has changed to blue rather than yellow: a relatively small change. The original Q-learner, which doesn't do representation learning per se, hasn't learned those notions, whereas DARLA has learned a representation that allows it to transfer zero-shot to this new environment much better. [00:58:22] Next up is the idea of representing state and dynamics with GANs, which we looked at in, I think, lecture four of this class. If you just train a GAN, you model each frame independently. What we want is to learn about transitions that are consistent over time, so we're going to have a discriminator that looks at two consecutive observations and decides whether those are two consecutive observations from the real world or two consecutive observations generated [00:58:57] by a generator; the generator tries to generate fake sequences of observations to fool the discriminator.
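The consecutive-pair idea can be illustrated with a toy stand-in scorer; a real implementation would train a neural discriminator on (o_t, o_{t+1}) pairs, but even a hand-written smoothness score separates real transitions from mismatched frame pairs:

```python
import numpy as np

# The key idea: the discriminator scores PAIRS of consecutive observations,
# not single frames, so the generator is pushed to produce realistic
# TRANSITIONS. The "discriminator" here is a hypothetical stand-in that scores
# a pair by how smooth the frame-to-frame change is.

def discriminator_score(obs_t, obs_t1):
    # High score = "looks like a real consecutive pair" (small, smooth change).
    return float(np.exp(-np.sum((obs_t1 - obs_t) ** 2)))

rng = np.random.default_rng(0)
video = [rng.normal(size=4)]
for _ in range(5):                       # real rollout: small change per step
    video.append(video[-1] + 0.05 * rng.normal(size=4))

real_pairs = list(zip(video[:-1], video[1:]))
fake_pairs = [(video[i], rng.normal(size=4))   # jump to an unrelated frame
              for i in range(3)]

real_score = np.mean([discriminator_score(a, b) for a, b in real_pairs])
fake_score = np.mean([discriminator_score(a, b) for a, b in fake_pairs])
```

A per-frame discriminator could be fooled by individually realistic frames in an impossible order; scoring pairs is what forces temporal consistency.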
[00:58:57] At convergence, that means the generator produces observation sequences that are indistinguishable from real-world observation sequences. Once you have that, you can use the generator as a simulator, and learn or plan in that simulator; in this case we did planning to try to achieve goals. What we see on the right is that we did this for rope manipulation. [00:59:30] On the left is the initial configuration of the rope, on the right the desired end state, and we see that Causal InfoGAN thinks these are the interpolated states: it thinks this is the sequence of states you have to go through to get from the initial state to the end state; same for the next one, and the next. Compare that with the baseline, which doesn't look at transitions, just at individual frames: its interpolation doesn't necessarily lead to [00:59:56] intermediate states that are all that meaningful for a robot to follow as a sequence of intermediate rope configurations from start to goal. So by training Causal InfoGAN, which looks at realism of transitions rather than just realism of individual frames, we're able to learn a dynamics model in a latent space that we can use for a robot to make plans. [01:00:31] Now, one of the first things we covered was World Models, which showed that you can learn a latent space, then learn dynamics on top of that latent space, and then learn a linear controller on top of that. Of course that's very simple; it's almost surprising that it works, and it's interesting that it actually does work in a range of environments, but keeping it that simple is not likely to be the final answer. So here's a paper called PlaNet, "Learning Latent Dynamics for Planning from Pixels". What's new here is that after learning the latent-space dynamics model, it doesn't deploy a learned policy; it [01:01:03] uses a planner, using lookahead: for which candidate sequence of actions do I get the most reward? Take the first action of that sequence, then repeat. And here the latent-space encoding is learned together with the dynamics: joint learning of encoding and dynamics. Recently there has been an improvement called Dreamer, from roughly the same authors, showing that instead of running online planning in latent space, you can actually train an actor-critic in the latent-space simulator.
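The lookahead planner described above (sample action sequences, score them in the learned model, execute the first action of the best one, replan) can be sketched as follows; the one-dimensional "latent model" and reward are toy stand-ins, not PlaNet's recurrent state-space model:

```python
import numpy as np

# Planning by random shooting in a learned latent model: sample candidate
# action sequences, roll each out, score by predicted reward, execute the
# first action of the best sequence, then replan (model-predictive control).

def latent_model_step(z, u):
    return 0.9 * z + 0.3 * u          # stand-in learned latent dynamics

def predicted_reward(z):
    return -z ** 2                    # stand-in reward: stay near the origin

def plan_first_action(z0, horizon=5, n_candidates=200, seed=0):
    rng = np.random.default_rng(seed)
    best_ret, best_first = -np.inf, 0.0
    for _ in range(n_candidates):
        us = rng.uniform(-1.0, 1.0, size=horizon)
        z, ret = z0, 0.0
        for u in us:                  # imagine the rollout in the model
            z = latent_model_step(z, u)
            ret += predicted_reward(z)
        if ret > best_ret:
            best_ret, best_first = ret, us[0]
    return best_first

# MPC loop: plan, take the first action, replan from the new state.
z = 2.0
for _ in range(10):
    u = plan_first_action(z)
    z = latent_model_step(z, u)
```

PlaNet refines this basic shooting scheme (e.g., with iterative distribution refinement), but the plan / act-one-step / replan structure is the same.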
that'll", "start_timestamp": "01:01:03", "end_timestamp": "01:01:43", "start_second": 3663, "end_second": 3703, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=3663s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "actually do better than i'm the planet he also showed that the dynamics model you learn it's better in these environments to learning stochastic dynamics model rather than in the domestic dynamics model and that there's a two big differences between planned a dreamer going from planning to learning active critic agent and using a stochastic model now so far we talked about latent space models and directly learning to control in the latent space there is also work that actually goes back the image space and so here are", "start_timestamp": "01:01:43", "end_timestamp": "01:02:20", "start_second": 3703, "end_second": 3740, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=3703s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "some example executions by robot moving objects to target locations and this is done by this the system here learned a video prediction model so i learn as a function of action the robot takes what will be the next frame i see and i long to the next action will be the next frame RC and so forth once you have a action conditional video prediction model and if you have a target frame or target property that you want to achieve you can now use this action traditional bigger prediction model as your simulator and this can give really good", "start_timestamp": "01:02:20", "end_timestamp": "01:02:59", "start_second": 3740, "end_second": 3779, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=3740s", 
"title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "results some examples are shown here on the slide the downside of this is that planning often takes a long time because generating an action conditional video prediction can be fairly expensive and we need to generate many of them because we're trying different sequences of actions to see which one might work best and then after you find the one that might work best it might be a sequence of ten actions you take the first of those ten actions and you repeat that whole process and so these things tend to be", "start_timestamp": "01:02:59", "end_timestamp": "01:03:29", "start_second": 3779, "end_second": 3809, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=3779s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "not as real time as some of the other things we looked at but it's quite surprising that this works at all you can do full action conditional video prediction and manipulate objects that way now one thing you might wonder is it's all good and well to do full detailed video prediction but is it always meaningful imagine you drop a glass bottle of water and it falls on the floor how are you gonna do video prediction of what happens there very very hard I mean you're never gonna have access to all the details of", "start_timestamp": "01:03:29", "end_timestamp": "01:04:09", "start_second": 3809, "end_second": 3849, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=3809s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": 
"YqvhDPd1UEw", "text": "the state of the water in the bottle all the little defects that might be in the bottle's material and so forth that will determine how exactly this thing fractures the best you can probably do is say well I think it's gonna break into a lot of pieces of different sizes and maybe the neck stays together because it doesn't hit the ground it's the bottom that's hitting the ground and so forth and you also don't need the details to make decisions you just", "start_timestamp": "01:04:09", "end_timestamp": "01:04:38", "start_second": 3849, "end_second": 3878, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=3849s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "need to know it's going to break and so what you could say is hey instead of learning a full dynamics model where I need to learn exactly what the future will look like what if I can predict what action was taken for example seeing this shattered bottle and saying well the action taken was dropping the bottle and if I can make that prediction then I can also understand if I want to achieve a certain goal what action might lead me there and what might not this is called inverse dynamics and so", "start_timestamp": "01:04:38", "end_timestamp": "01:05:12", "start_second": 3878, "end_second": 3912, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=3878s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "that's at the core of many other models being learned rather than a forward dynamics model learn an inverse dynamics model effectively like learning a 
goal conditioned action strategy so now there is a paper here that says the following we want to learn a forward model in latent space a latent space that hopefully will represent the things that matter but if all we care about is latent space predictions then the problem is that maybe we'll make our latent space always zero and if we predict always zero", "start_timestamp": "01:05:12", "end_timestamp": "01:05:46", "start_second": 3912, "end_second": 3946, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=3912s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "we're always correct but we don't have anything interesting and so they say well we want to learn a latent space in which we can predict the next latent state but to avoid it being all zeros or degenerate in any other way we're going to require that from the latent state at the next time t plus one and the latent state at the current time t we ought to be able to predict the action that was taken at time t and so we learn two dynamics models at the same time an inverse dynamics and a forward dynamics", "start_timestamp": "01:05:46", "end_timestamp": "01:06:19", "start_second": 3946, "end_second": 3979, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=3946s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "model at the same time in this latent space this is applied to learning to poke objects so what you see here on the left is data collection you can set this up for autonomous data collection and what you see on the right is the learned control so it has learned the dynamics model and now it can look at the current state and look at the goal state and it 
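The joint forward/inverse objective described here — predict the next latent state, and from two consecutive latent states recover the action, so the latent space cannot collapse to a trivial all-zero solution — can be written as a pair of losses. This is a minimal sketch with made-up stand-in models (`enc`, `fwd`, `inv` would be neural networks in practice, not the closed-form functions used below):

```python
import numpy as np

def joint_dynamics_losses(enc, fwd, inv, o_t, o_tp1, a_t):
    """Losses for jointly learning forward and inverse dynamics in latent space.

    enc: observation -> latent; fwd: (latent, action) -> predicted next latent;
    inv: (latent_t, latent_t+1) -> predicted action.  The inverse-dynamics
    loss is what prevents the degenerate solution where the encoder maps
    everything to zero and forward prediction becomes trivially perfect.
    """
    z_t, z_tp1 = enc(o_t), enc(o_tp1)
    forward_loss = np.mean((fwd(z_t, a_t) - z_tp1) ** 2)   # predict next latent
    inverse_loss = np.mean((inv(z_t, z_tp1) - a_t) ** 2)   # recover the action
    return forward_loss, inverse_loss

# stand-ins chosen so both objectives are exactly satisfiable
enc = lambda o: 2.0 * o
fwd = lambda z, a: z + 2.0 * a
inv = lambda z_t, z_tp1: 0.5 * (z_tp1 - z_t)
o_t, a_t = np.array([0.5, -0.5]), np.array([0.3, 0.1])
o_tp1 = o_t + a_t                  # toy true dynamics: state advances by the action
f_loss, i_loss = joint_dynamics_losses(enc, fwd, inv, o_t, o_tp1, a_t)
# both losses are numerically zero for these consistent stand-ins
```

In training, a network would minimize a weighted sum of the two losses over logged (o_t, a_t, o_t+1) transitions.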
can do a prediction of which action is going to help the most to get close to that goal state and can repeatedly do that until it finally reaches something very close to the goal state okay now", "start_timestamp": "01:06:19", "end_timestamp": "01:07:06", "start_second": 3979, "end_second": 4026, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=3979s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "reinforcement learning is about reward and so far we mostly ignored the rewards when we learned representations so let's switch that up now let's not just learn to predict the next state but also learn to predict future reward the first recent paper that looked at this in the deep reinforcement learning context is the Predictron paper on end to end learning and planning and what they said is well it's difficult to know what needs to go into the latent state and so because we don't really know what", "start_timestamp": "01:07:06", "end_timestamp": "01:07:38", "start_second": 4026, "end_second": 4058, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=4026s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "has to go in the latent state and we don't necessarily want to reconstruct the full observation because that's just so many things to reconstruct when we really want to focus on the essence well if what we care about is getting high reward we should just focus on predicting future rewards if for every sequence of actions we can predict the future reward we should be good to go we can just pick the sequence of actions that leads to the highest future reward the Predictron did this for some relatively simple 
environments showing", "start_timestamp": "01:07:38", "end_timestamp": "01:08:09", "start_second": 4058, "end_second": 4089, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=4058s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "here billiards as a function of which action you take where do the billiard balls end up and it did pretty well on that task and they also looked at maze navigation now the most famous algorithm that you might have heard of that builds on top of these very directly is MuZero MuZero is also learning a latent dynamics model that predicts rewards and doesn't worry about reconstruction the Predictron asks given one action in the beginning what's the sequence of latent states that allows me to predict", "start_timestamp": "01:08:09", "end_timestamp": "01:08:44", "start_second": 4089, "end_second": 4124, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=4089s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "reward in the future and MuZero does the same thing but now action conditional and it was able to solve a very wide range of game situations now one variation is successor features you might say is it enough to predict reward which is just one number what if reward consists of many components maybe I care about the location of the robot maybe I care about energy expended maybe I care about other things these are all features and so the idea is if I have a set of features that relate to the reward why not learn a latent space model", "start_timestamp": "01:08:44", "end_timestamp": "01:09:23", "start_second": 4124, "end_second": 4163, "url": 
"https://www.youtube.com/watch?v=YqvhDPd1UEw&t=4124s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "that allows me to predict the future sequence of features encountered we looked at this ourselves in the context of navigation actually so when you have a robot that's navigating a world it does some convolutional processing of its observations then there'll be some LSTM because when you're navigating you currently see something but you also want to remember things you've seen in the past that's the memory here and then some head that tries to predict features of the observations it might encounter in the future for example whether it might have a", "start_timestamp": "01:09:23", "end_timestamp": "01:09:54", "start_second": 4163, "end_second": 4194, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=4163s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "collision or something like that here's this system in action let me fast forward a little bit to the experimental setup what we see here is inside a simulator for now but real world experiments are coming later you see the kind of visual inputs it is processing and it's trying to predict things about speed heading and collision those are the features it's trying to predict so some number of steps in the future what will my heading be what will my speed be will my", "start_timestamp": "01:09:54", "end_timestamp": "01:10:45", "start_second": 4194, "end_second": 4245, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=4194s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": 
"https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "collision be based on what I see right now and based on the actions I will take in the intervening time through that it is able to learn an internal representation of how the world works but most importantly how the world works as it relates to features that matter for navigation versus trying to learn everything about the world which might be a lot to learn relative to what you actually need to be successful at your task and based on that it's able to learn to navigate these environments pretty well then there's the real robot", "start_timestamp": "01:10:45", "end_timestamp": "01:11:16", "start_second": 4245, "end_second": 4276, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=4245s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "so here we have the actual robot that's going to learn to navigate the hallways and corridors over at the electrical engineering building at Berkeley we see that when it's still learning it has a lot of collisions but it learns to predict these things it learns to say if I see this and take that sequence of actions I will have a collision in five time steps or my heading will change in that way and so forth and so after training it has internalized a lot of how the world works", "start_timestamp": "01:11:16", "end_timestamp": "01:11:46", "start_second": 4276, "end_second": 4306, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=4276s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "and it can plan against that when it needs to act so now at test time 
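The training targets for this kind of model — "will a collision occur within the next H steps, given what I see now and the actions I take" — are built from logged trajectories. A small sketch of that target construction (function name and the 0/1 collision log are illustrative stand-ins, not the actual system's data format):

```python
import numpy as np

def future_feature_targets(events, horizon):
    """Build supervised targets 'does the event occur within the next
    `horizon` steps?' from a per-step 0/1 event log, as in navigation
    models that predict task-relevant features (collision, heading,
    speed) instead of full future frames."""
    n = len(events)
    targets = np.zeros(n, dtype=int)
    for t in range(n):
        window = events[t + 1 : t + 1 + horizon]
        targets[t] = int(any(window))   # any collision in the next H steps
    return targets

collisions = [0, 0, 0, 1, 0, 0]         # toy log: one collision at step 3
targets = future_feature_targets(collisions, horizon=2)
# steps 1 and 2 see the step-3 collision coming; other steps do not
```

A network conditioned on the current observation and planned actions is then trained to predict these targets, and acting reduces to choosing actions whose predicted collision probability is low.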
we can see that it's learned to avoid collisions and in terms of what it's doing it knows how to predict as a function of the actions it takes whether a collision is likely to happen or not and what heading it might end up with and then take actions accordingly and again the reason I'm showing all these videos here is because as you see different approaches are tested in very different environments this is by no means a converged research field and there's a", "start_timestamp": "01:11:46", "end_timestamp": "01:12:21", "start_second": 4306, "end_second": 4341, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=4306s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "lot of variation in how things get tested and looking at how something is tested gives you a sense of how complex an environment a certain approach might be able to handle now a natural question you might have is well this is all great there are all these different ways of learning representations but could we come up with a way of optimally representing the world what would that even mean what does it mean to have an optimal representation of the world well there's some work especially trying to get at this so here are some fairly theoretical", "start_timestamp": "01:12:21", "end_timestamp": "01:12:52", "start_second": 4341, "end_second": 4372, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=4341s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "to be fair references on trying to understand what it means to have a good representation of the world and one thing you'll often see come back is the word homomorphism and what it refers to is that essentially you have the real world you have a simulator and you 
want it to be the case that if you go from the real world to some latent space simulator so you have a one-to-one match you go from the real world to this latent space representation at that point you simulate in both worlds and then after a", "start_timestamp": "01:12:52", "end_timestamp": "01:13:25", "start_second": 4372, "end_second": 4405, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=4372s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "while you try to map back and see if it still corresponds a homomorphism would mean that you still have the correspondence many steps in for any number of steps in the future and so that would be kind of a bisimulation homomorphism type approach and the question of course becomes what's the minimal latent space that you need to be able to do that because the more minimal that latent space is the fewer variables you have to deal with as a reinforcement learner or a planner trying to learn to achieve good reward in the environment now one", "start_timestamp": "01:13:25", "end_timestamp": "01:13:56", "start_second": 4405, "end_second": 4436, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=4405s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "thing that's very well-known in traditional control is something called the separation principle and the separation principle in traditional control says the following in a very specific scenario it says if I have a linear dynamical system and I have noisy observations of the state so I don't have access to the state I only have noisy observations and these noisy observations are linear functions of the state so linear dynamics 
observations a linear function of the state then to do optimal control in this", "start_timestamp": "01:13:56", "end_timestamp": "01:14:41", "start_second": 4436, "end_second": 4481, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=4436s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "environment where I don't have full access to the state all I need to do is find the optimal estimator of the state which will be a Kalman filter giving my best estimate of the state at every time and combine that with the optimal controller designed assuming I have full access to the state so the separation principle says I can design an estimator and a controller separately and then combine them and that's actually optimal and that's very related to the things we've been talking about we learn a representation and we want the control on top", "start_timestamp": "01:14:41", "end_timestamp": "01:15:15", "start_second": 4481, "end_second": 4515, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=4481s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "to be such that if we learn the right representation the decisions that come out of it combined with optimal control give the optimal result and so there's some work now looking at what if you have a nonlinear system maybe with deep neural networks and so forth what does it mean to have optimal estimation of the state from your observations and when is that compatible with your control and so forth a very interesting theoretical direction if you're more theory inclined so another way to think of it is to say", "start_timestamp": "01:15:15", "end_timestamp": "01:15:49", 
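The separation-principle structure described here — an optimal state estimator (a Kalman filter, for linear dynamics with linear noisy observations) feeding its estimate to a separately designed controller — can be sketched in the scalar case. The coefficients and noise variances below are illustrative values, and the simple gain `k` stands in for a properly designed (e.g. LQR) controller:

```python
import numpy as np

def kalman_step(x_hat, P, u, y, a=1.0, b=1.0, c=1.0, q=0.01, r=0.1):
    """One predict + update step of a scalar Kalman filter.

    Dynamics x' = a*x + b*u + process noise (variance q);
    observation y = c*x + measurement noise (variance r).
    The separation principle says: run this filter, then feed x_hat to a
    controller designed as if the state were known exactly.
    """
    # predict through the dynamics
    x_pred = a * x_hat + b * u
    P_pred = a * P * a + q
    # update with the measurement y
    K = P_pred * c / (c * P_pred * c + r)   # Kalman gain
    x_new = x_pred + K * (y - c * x_pred)
    P_new = (1 - K * c) * P_pred
    return x_new, P_new

# estimator + separately designed feedback controller u = -k * x_hat
x_hat, P, k = 0.0, 1.0, 0.5
for y in [1.0, 0.9, 1.1]:                   # toy noisy observations
    u = -k * x_hat
    x_hat, P = kalman_step(x_hat, P, u, y)
```

The estimate variance `P` shrinks as measurements arrive; the controller never sees the raw observations, only `x_hat`, which is exactly the decomposition the lecture relates to representation-then-control pipelines.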
"start_second": 4515, "end_second": 4549, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=4515s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "well shouldn't I just think about it end to end often in deep learning you have two paths one path is you try to design something and in the other path you say hey let me just think about the result I want define a loss function on the result I want and then train end to end instead of putting all the modules together in detail myself so in this case what that might mean is well instead of learning a representation for a dynamics model and then bolting on a planner or bolting on", "start_timestamp": "01:15:49", "end_timestamp": "01:16:25", "start_second": 4549, "end_second": 4585, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=4549s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "a reinforcement learning agent why not say hey when I learn my dynamics model I should train it end to end such that what I learn is maximally compatible with the planner that I will use in the future this goes a little bit back to the earlier thing we covered embed to control where we said if we can learn a linear dynamics model in latent space planning comes easy now you could say what if we have a more general planner that might work well in a wide range of situations can we learn a representation such that if we combine it", "start_timestamp": "01:16:25", "end_timestamp": "01:17:01", "start_second": 4585, "end_second": 4621, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=4585s", "title": "L12 Representation Learning for Reinforcement 
Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "with a more general planner they function well together if so then we have learned a good representation we did this in some early work on value iteration networks led by then postdoc Aviv Tamar now a professor at the Technion we showed that value iteration a very common way of doing planning for tabular Markov decision processes can be turned into a neural network representation and so we can bolt this value iteration network onto a representation learning network and optimize them together to try to get", "start_timestamp": "01:17:01", "end_timestamp": "01:17:44", "start_second": 4621, "end_second": 4664, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=4621s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "good performance out of turning the image input into a representation on which value iteration runs the encoding of the image input will need to be such that the value iteration process actually gives good results and we even gave the value iteration process some flexibility to learn parts of it and we showed that this way you can actually get very good performance on planning tasks now you might say well for planning with visual inputs shouldn't you just be able to learn a convnet that just looks at the input and makes the right decision", "start_timestamp": "01:17:44", "end_timestamp": "01:18:14", "start_second": 4664, "end_second": 4694, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=4664s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "well it 
turns out that what we're doing here is building a very strong prior into the network by building the value iteration aspect into it that's a bit like why we use a convnet we use a convnet to encode translation invariance and then we can learn more efficiently than if we were to use a fully connected network it's the same idea here if we're learning a network that should solve a control problem that under the hood uses planning well then we should just put the planning structure into the", "start_timestamp": "01:18:14", "end_timestamp": "01:18:43", "start_second": 4694, "end_second": 4723, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=4694s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "network so we can learn it all end to end now one question that has often come up in this context is should we ever do pixel level video prediction that's a good question I mean often you're just looking at noise and what's the point in trying to predict that what really matters is predicting the things that affect the task so how do you do that more directly we're going to use plannability as a criterion for representation learning now for value iteration networks as I just described let's go into a little more detail", "start_timestamp": "01:18:43", "end_timestamp": "01:19:22", "start_second": 4723, "end_second": 4762, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=4723s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "it says you have an observation it goes into a module that outputs a value function which is how good a certain state is and it puts that out for every state that you 
could be in all in parallel then an attention mechanism looks at the current observation and figures out which of all these possible states it should index into to then make a decision on what to do in the current state the value iteration module mirrors value iteration what it does is look", "start_timestamp": "01:19:22", "end_timestamp": "01:20:06", "start_second": 4762, "end_second": 4806, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=4762s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "at a reward and dynamics model and with this reward and dynamics model it can do a recurrent calculation to get out the value of each state so this is just a recurrent calculation repeatedly applying the same operation so it's a recurrent network and it's a recurrent network with a local calculation because states next to each other can be visited from each other and hence show up together in this dynamic programming calculation so it turns out that a recurrent component like this is enough to represent the value iteration calculation but we don't need", "start_timestamp": "01:20:06", "end_timestamp": "01:20:38", "start_second": 4806, "end_second": 4838, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=4806s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "to do it with tabular value iteration which only applies to situations where we can have a tabular representation of the world which means relatively small discrete state spaces to do this more generally what we're looking at here is the universal planning network the universal planning network says okay we have an observation and we want to achieve a goal observation we take our initial observation 
and turn it into a latent state we encode it then we take an action and get a new latent state take another action and get a new latent state all of this not", "start_timestamp": "01:20:38", "end_timestamp": "01:21:10", "start_second": 4838, "end_second": 4870, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=4838s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "in observation space but actually in latent state and so forth taking actions in latent state and after that series of actions we want our latent state here to match up with the latent state of the goal we want to reach so what we can do is run a search over actions that will get us close and if we had already trained this latent space dynamics model all we would need to do is optimize this sequence of actions and if this is a continuous space we can optimize the sequence of actions with backpropagation it looks like standard backpropagation to find a", "start_timestamp": "01:21:10", "end_timestamp": "01:21:44", "start_second": 4870, "end_second": 4904, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=4870s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "sequence of actions that optimizes how close we'll get to the goal so that's the planning part assuming we have this dynamics model we can run backpropagation to plan how do you get the dynamics model well here's what we're going to do we're going to learn the dynamics model so we're going to try to find parameters in this dynamics model such that if we use those parameters to run this optimization to find actions then the sequence of actions we find corresponds to what was shown in a demonstration that we're given so we're", "start_timestamp": "01:21:44", 
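The inner-loop planning just described — optimize a sequence of actions by gradient descent so the final latent state matches the goal latent state — can be sketched with toy linear latent dynamics (z' = z + a), for which the gradient is available in closed form. A real universal planning network backpropagates through a learned encoder and dynamics network instead; `plan_gradient_descent` and its dynamics are stand-ins:

```python
import numpy as np

def plan_gradient_descent(z0, z_goal, horizon=5, iters=100, lr=0.1):
    """Inner-loop planning: optimize an action sequence by gradient descent
    so that unrolling the (toy linear) latent dynamics z' = z + a from z0
    ends at z_goal.  The loss is the squared distance between the final
    latent state and the goal latent state."""
    actions = np.zeros((horizon, z0.size))
    for _ in range(iters):
        z_final = z0 + actions.sum(axis=0)   # unrolled linear dynamics
        grad = 2 * (z_final - z_goal)        # d loss / d z_final
        actions -= lr * grad                 # same gradient flows to every step
    final_loss = np.sum((z0 + actions.sum(axis=0) - z_goal) ** 2)
    return actions, final_loss

actions, final_loss = plan_gradient_descent(np.zeros(2), np.array([1.0, -2.0]))
# running more inner-loop iterations refines the plan; final_loss is near zero
```

The outer loop of the method then adjusts the dynamics model's parameters so that this inner optimization reproduces demonstrated action sequences (the imitation loss described next).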
"end_timestamp": "01:22:23", "start_second": 4904, "end_second": 4943, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=4904s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "given a demonstration a sequence of actions and we'll have an imitation loss that says we want to be able to imitate the sequence of actions by running this very specific process of optimizing our sequence of actions with backpropagation against a dynamics model that we're going to learn once we have learned the dynamics model this way what it means is that from then on we can use this latent space dynamics model to find sequences of actions that optimize how close we get to some other goal in the future so the benefit", "start_timestamp": "01:22:23", "end_timestamp": "01:22:58", "start_second": 4943, "end_second": 4978, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=4943s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "here is that the network internalizes that inductive bias rather than just learning some black box backbone for imitation and it also learns a metric in this abstract space that's useful for reinforcement learning in the future so we're comparing with reactive imitation learning which just says okay I need to imitate a sequence of actions but that black box network doesn't know that when you imitate the demonstrator probably had a goal and was trying to find a sequence of actions that achieves that goal so it doesn't have that inductive bias it's", "start_timestamp": "01:22:58", "end_timestamp": "01:23:27", "start_second": 4978, "end_second": 5007, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=4978s", "title": "L12 Representation 
Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "not going to do as well the closest comparison architecture we use is also a recurrent neural network but without the internal optimization process in the inner loop that finds a sequence of actions optimizing how close we get to a goal the tasks we looked at here were some maze navigation tasks and also reaching between obstacles to a target in the curves here the horizontal axis is the number of demonstrations and the vertical axis the average test success rate we see that universal planning networks", "start_timestamp": "01:23:27", "end_timestamp": "01:24:02", "start_second": 5007, "end_second": 5042, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5007s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "outperform the baselines that I just described which means that building in that inductive bias helps significantly in learning to solve these problems now you might ask well what did it actually learn we said we're building in an inductive bias to learn to plan in that inner loop but did it really learn to plan here's an experiment what if we train with 40 iterations of gradient descent to find a sequence of actions and then test with a varying number of planning steps meaning we vary the number of gradient descent", "start_timestamp": "01:24:02", "end_timestamp": "01:24:44", "start_second": 5042, "end_second": 5084, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5042s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", 
"text": "steps in the inner loop when we do plan if our thing is doing planning then the hope is that by writing more planning iterations it would keep refining the plan and end up with a better plan than if it always access to 40 iterations that's indeed what we see here after the horizontal actually we increase the number of planning steps the test success rate goes up for the same amount same training same training just different number of planning steps of tests on so this indicates that likely is something like planning is really", "start_timestamp": "01:24:44", "end_timestamp": "01:25:14", "start_second": 5084, "end_second": 5114, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5084s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "happening under the hood and if you plan longer you can do better nothing that happens is when you do this you learn a representation that ties into how an agent should make decisions that representation can be used by a reinforcement learning agent to learn more quickly what makes me a force wink typically hard is that the reward is sparse but if you map your world into this latent space in that latent space where you're running this optimizer grading descent to find good actions well again bring descend assumes that", "start_timestamp": "01:25:14", "end_timestamp": "01:25:47", "start_second": 5114, "end_second": 5147, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5114s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "there's some smoothness so once you've learned that we can space where there are smoothness you can optimize against that probably means that in that latent space distances are 
more meaningful. If you then do reinforcement learning against distances in that latent space, rather than against a sparse reward, that's better: it's dense, and it gives a signal locally on whether you're improving or not on what you're doing. And so we showed in a wide range of environments that indeed reinforcement learning can be", "start_timestamp": "01:25:47", "end_timestamp": "01:26:16", "start_second": 5147, "end_second": 5176, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5147s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "a lot more effective when using distances in the latent space learned in the process I just described, and then doing reinforcement learning in a new environment. For example, we did imitation in three-link and four-link environments, switched to a five-link environment, and ran reinforcement learning in the five-link environment, with the latent space used for reward shaping, and there you learn a lot more quickly. Same thing here, where the initial learning happened with a point mass, and then we actually have to control a robot, and thanks to", "start_timestamp": "01:26:16", "end_timestamp": "01:26:52", "start_second": 5176, "end_second": 5212, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5176s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "the shaping that comes from learning this latent representation where distances are meaningful, learning can be a lot more efficient. Okay, so at this point we've covered quite a few different ways of combining representation learning with reinforcement learning to be more efficient, and the general theme so far has been that, or at 
least for RL from pixels, the raw observations surely contain the information, but it's embedded in a very high-dimensional space: a megapixel image is a million-dimensional input, and we want it in a more", "start_timestamp": "01:26:52", "end_timestamp": "01:27:31", "start_second": 5212, "end_second": 5251, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5212s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "compact representation we can learn against more efficiently. And all the approaches we described, observations to state, state to next state, and so forth, all tried to get a handle on that problem. Now, one thing you might observe is that what we covered so far is fairly complex; there's a wide range of ideas at play. And so the question we asked ourselves recently is: is it possible, with a relatively simple idea, to get a lot of the leverage that we have seen here? So let's take a look at that and see how far we get with a relatively simple", "start_timestamp": "01:27:31", "end_timestamp": "01:28:09", "start_second": 5251, "end_second": 5289, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5251s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "idea, and actually we'll see it outperform essentially all the approaches we've covered so far. That doesn't mean the ideas and approaches we've covered so far are not important, or that it's fine to skip them: there are a lot of good ideas we've covered that we probably want to bring into this next approach we're about to cover. But what I'm about to cover, CURL, will really focus on simplicity and see how far you can get with something very simple. Our starting motivation here was: if 
you look at the learning curves, the", "start_timestamp": "01:28:09", "end_timestamp": "01:28:37", "start_second": 5289, "end_second": 5317, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5289s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "vertical axis here is reward, and higher is better; the horizontal axis is the number of steps in this environment, and you see at the end here 1e8, a hundred million steps taken in this environment. We see a blue learning curve here that learns very quickly, and then green learning curves that take a long time to learn. What's different? Blue learns from state, green learns from pixels. Same thing here: blue learns from state very fast, green from pixels not nearly as fast. And in this case the RL", "start_timestamp": "01:28:37", "end_timestamp": "01:29:08", "start_second": 5317, "end_second": 5348, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5317s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "algorithm is D4PG, which is a state-of-the-art RL algorithm. So if you think about the essence here: reinforcement learning is about learning to achieve goals, and if the underlying space is low-dimensional, if there is a low-dimensional state, then we should be able to recover that low-dimensional state and learn just as efficiently from pixels as from state. And how might we do that? Well, we've seen a lot of success in past lectures with contrastive learning for computer vision. In fact, we saw with CPC that it was possible, by using", "start_timestamp": "01:29:08", "end_timestamp": "01:29:49", "start_second": 5348, "end_second": 5389, "url": 
"https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5348s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "unlabeled data is on image net to constantly out the form learning with label data so unlabeled plus so there's the same amount of label data but the blue curve also has unlabeled data you see that the unlabeled data consistently helps outperform having only access to that amount of labeled data then of course very recently Sinclair came out and as actually getting equally good performance has supervised learning on image net when using a linear classifier just a linear classifier on top of a self supervised representation so that means", "start_timestamp": "01:29:49", "end_timestamp": "01:30:29", "start_second": 5389, "end_second": 5429, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5389s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "that almost all the learning happens in self supervision and then a little bit of learning habitat the M of course to get the meaning of the labels but it just needed a linear classifier if that's the case then the hope is if we do something similar in reinforcement all we need to do is do something where we do representation learning that extracts the essence and I've gained a little bit of extra information the reward to do the rest of the learning so would it simply or do it essentially said I have an image I'm going to turn", "start_timestamp": "01:30:29", "end_timestamp": "01:30:59", "start_second": 5429, "end_second": 5459, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5429s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", 
"thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "into two versions of that same image and when I then embed them linear neural network the symbol networking the left hand the right upper channels then the embedding should be close as measured with some cosine similarity and of course over another image that I embed then I'm betting should be far away and those are the negatives in the denominator so for more details and that of course go back to our self supervised learning lectures from a few weeks ago what's important here is done this is a very simple idea it's just saying turn an", "start_timestamp": "01:30:59", "end_timestamp": "01:31:32", "start_second": 5459, "end_second": 5492, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5459s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "image into two images and the embedding should be close take a different image it's embedding should be far from this and what's surprising about this even though it's relatively simple it enables representation learning that then on top of that all you need is a linear classifier to get a really good image that classification performance and they actually looked at many types of augmentations cropping cut out color surveilled filter Norris blur rotate and what they found is that crop matters the most and color matters quite a bit too", "start_timestamp": "01:31:32", "end_timestamp": "01:32:09", "start_second": 5492, "end_second": 5529, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5492s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "but really cropping is the one that matters the most so 
now to CURL. CURL combines this contrastive representation learning with RL. So what do we do here? We have our replay buffer, on which we normally would just run reinforcement learning. We take our observations from the replay buffer; now, since this is a dynamical system, we need to look at a sequence of frames and consider that a single observation, because otherwise we cannot observe velocity: in a single frame there's no knowledge of velocity. So we'll have a stack of sequential frames", "start_timestamp": "01:32:09", "end_timestamp": "01:32:43", "start_second": 5529, "end_second": 5563, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5529s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "that together we consider a single observation. That stack of frames then undergoes data augmentation, in this case two different crops; then one goes into the query encoder and one into the key encoder, which could actually be the same or different, you can choose. And then ultimately you do two things with this. In the top path it just goes into the reinforcement learning loss: so you run D4PG again, or you run soft actor-critic, or you run PPO and so forth; that happens along the top path. So what it means is, along the top", "start_timestamp": "01:32:43", "end_timestamp": "01:33:21", "start_second": 5563, "end_second": 5601, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5563s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "path you run your standard RL algorithm; the only thing that's changed is that when you sample from the replay buffer, you do some data augmentation. Now, in the bottom path, you take another data augmentation of the same frames and you 
have a contrastive loss, so essentially the same loss, not exactly the same details, but at a high level the same as we saw on the SimCLR slide. Okay, so a couple of things were important to make this work. SimCLR uses a cosine loss; what we found is that having a weighting matrix here between the key and the query is actually important, and we'll", "start_timestamp": "01:33:21", "end_timestamp": "01:34:02", "start_second": 5601, "end_second": 5642, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5601s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "see in the red curve that the bilinear weighting consistently outperforms using just cosine. The other thing we noticed is that using momentum in one of the encoder paths is very important too, which was actually also something we saw in the self-supervised learning lecture: in the MoCo work they also have momentum in one of the paths. The same thing was important here, again a big difference. So once we do that, we can see that CURL outperforms both prior model-based and model-free state-of-the-art methods. What we look at here is median scores on DeepMind", "start_timestamp": "01:34:02", "end_timestamp": "01:34:37", "start_second": 5642, "end_second": 5677, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5642s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "Control: DMControl 100k and DMControl 500k, that is, after 100 thousand steps or 500 thousand steps. So it's really checking: can you learn fast? It's not about where you are after one hundred million steps; it's about where you are after 100 thousand or 500 thousand steps. And so we see here, after 100k steps from state, with access to state, this is how
far you get; CURL at 100k steps is a little bit behind what you can do from state, but at 500k steps it's actually all the way there. So we see that we can learn almost", "start_timestamp": "01:34:37", "end_timestamp": "01:35:09", "start_second": 5677, "end_second": 5709, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5677s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "as well from pixels as from state with CURL. As for prior methods that also tried to learn from pixels, we see that they consistently were not doing nearly as well after 500k steps, and the same after 100k steps. So both after 100k and 500k steps, CURL outperforms prior RL-from-pixels methods on the DeepMind Control Suite, and it gets very close to state-based learning. Here we have the learning curves: in gray we see state-based learning, and in red we see CURL. We see that in many of these, red is matching gray;", "start_timestamp": "01:35:09", "end_timestamp": "01:35:50", "start_second": 5709, "end_second": 5750, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5709s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "there are a few exceptions, but in most of them red matches gray, meaning that with CURL, RL from pixels can be almost as efficient as RL from state, at least for these DeepMind Control tasks. And here we look at a table of results; you see in boldface the winner compared with all prior methods of learning from pixels, and you see that consistently CURL outperforms the other methods, both at 100k and at 500k, not just on average but on essentially all of the individual tasks, except for", 
"start_timestamp": "01:35:50", "end_timestamp": "01:36:28", "start_second": 5750, "end_second": 5788, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5750s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "that no one here and one here dark public iris with curl doesn't learn as fast and we look at the details what happens there these are environments where the dynamics is fairly complex so this requires some more research with our hypothesis here has been that in those environments learning from pixels is particularly difficult because if you just look at the pixels the dynamic is not well captured in the sequence of frames you get to see for example if contact forces matter a lot and it's you can easily read those off from pixels", "start_timestamp": "01:36:28", "end_timestamp": "01:37:05", "start_second": 5788, "end_second": 5825, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5788s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "and so having access to state makes a pretty big difference in terms of being able to learn looking at the entire benchmark we are looking at median human on normalized score across 26 Atari games at 100k frames and we see that compared to Paris today our rainbow rainbow dqm simple and well at least rainbow DQ and simple and rainbow DQ and curl seen every out performs prior and state-of-the-art and it's getting at about 25 percent of human normalized score here is a broken out for the individual games and curls", "start_timestamp": "01:37:05", "end_timestamp": "01:37:42", "start_second": 5825, "end_second": 5862, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5825s", "title": "L12 Representation 
Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "outperforming the prior state of the art fairly consistently, with SimPLe still coming in first on two of them. So can we say RL matches human data efficiency? That's a good question. Looking at human-normalized score, we see on Freeway and on JamesBond that we get pretty much the level of human efficiency; for the other games there's a little bit of a way to go, but it's not night and day, you know, it's already double-digit percentage performance relative to human on almost all of them. Okay, so we looked at two main directions in representation", "start_timestamp": "01:37:42", "end_timestamp": "01:38:22", "start_second": 5862, "end_second": 5902, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5862s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "learning in reinforcement learning so far: using auxiliary losses, and things that essentially come down to trying to recover the underlying state with a self-supervised type loss. Now there are other ways representation learning can help, mainly in exploration, which is one of the big challenges in reinforcement learning, and also in unsupervised skill discovery. So let's look at those two now. First, one way we can help exploration is through exploration bonuses. So what's the idea here? In a tabular scenario, meaning a very small", "start_timestamp": "01:38:22", "end_timestamp": "01:38:56", "start_second": 5902, "end_second": 5936, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5902s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": 
"reinforcement problem where the number of states you can visit you can count but say there's only you know a good grid world addition to being only one of sixteen squares that's it one of sixteen possible states a very simple thing new is you give a bonus to the agent for visiting grid squares it hasn't been to before or hasn't been frequently before that encourages going and checking things out that you have don't have much experience with yet that can be very effective in is small environments but its impact of a large continuous state", "start_timestamp": "01:38:56", "end_timestamp": "01:39:26", "start_second": 5936, "end_second": 5966, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5936s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "space is because in a large but they infinite States build infinitely many splits well there's always more stuff you haven't seen so you need a different way of measuring what makes something new versus something already may be understood so one big breaker in the space wants to look at using generic model in this case a pixel CN n for density estimation so the idea here is you planet our game or the agents playing at target you want to measure how often has the agent been in the stick but if you'd never special", "start_timestamp": "01:39:26", "end_timestamp": "01:40:02", "start_second": 5966, "end_second": 6002, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5966s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "specific stage there's too many of them so still we're gonna do is women train a pixel CNN model on what you see on the screen and things you've seen so far the more often you've 
seen something, the higher the log-likelihood under that PixelCNN model. But when you, let's say, enter a new room in this game, the first time you enter the new room the log-likelihood of that new thing you see on the screen will be very, very low; it'll be a bad score. That's a signal that this is something you need to explore, because you're unfamiliar with it as", "start_timestamp": "01:40:02", "end_timestamp": "01:40:36", "start_second": 6002, "end_second": 6036, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6002s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "measured by the low log-likelihood score. So you can effectively give exploration bonuses based on the log-likelihood scores under your PixelCNN model, which you train online as your agent is acting in the world. There's a comparison here between using this versus just using random exploration, and it helps a lot. Another way to do this: you can train a variational autoencoder, which leads to an embedding, and then you can map these embeddings into a hash table and just do counting in that hash table,", "start_timestamp": "01:40:36", "end_timestamp": "01:41:09", "start_second": 6036, "end_second": 6069, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6036s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "and that's something we did a couple of years ago, and it helps a lot in terms of giving out the right kind of exploration incentives to explore difficult-to-explore environments more efficiently. Another thing you can do, which maybe gets more at the core of what you really want but is a little more complicated to set up, is variational information maximizing 
exploration, VIME. So the idea here is the following: when you are in a new situation, what makes it interesting that it's new? Well, one way to measure this", "start_timestamp": "01:41:09", "end_timestamp": "01:41:43", "start_second": 6069, "end_second": 6103, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6069s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "is to say: hey, if I'm in a situation where, after taking an action, I cannot predict what's happening next very well, then I'm not familiar with this, so I should give a bonus for, you know, having gone into unfamiliar territory. That's called curiosity; we'll cover that in a moment, and it's actually been pretty successful. But it's also a little defective, because if you just have something that's stochastic in the world, let's say you roll some dice, well, it's going to be unpredictable. So to make this more tractable, one thing you can do is say: hey, I", "start_timestamp": "01:41:43", "end_timestamp": "01:42:22", "start_second": 6103, "end_second": 6142, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6103s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "don't want to be getting exploration bonuses when something is inherently unpredictable; I only want to get them when something is unpredictable because I have not learned enough yet about it. And so the way we did this in VIME is, okay, you can set up a dynamics model that you're learning, and as you learn the dynamics model, we actually set up a posterior over dynamics models, a distribution over possible dynamics models; as new data comes in, you get an updated 
posterior", "start_timestamp": "01:42:22", "end_timestamp": "01:42:56", "start_second": 6142, "end_second": 6176, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6142s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "if that updated posterior is very different from a previous posterior it means that you got interesting information it allows you to learn something about how the world works so that should give you an exploration bonus because you did something interesting to learn about the world but when throwing the dice addition guys rolled many many times and then rolls again and you couldn't predict it because that's just awareness you cannot predict but your model for the dice will already say it's uniform you know over", "start_timestamp": "01:42:56", "end_timestamp": "01:43:21", "start_second": 6176, "end_second": 6201, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6176s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "all possible outcomes that model will not see much update if any and you will not be given an exploration products and so that's the idea in vine only get exploration bonuses when it updates your posterior over how the world works and again showing here that that helps a lot in terms of exploring more efficiently under the hood that's really self supervising type ideas for a dead and small ensembles or based on representations of the AMEX models and been given exploration bonuses based on that the simple version of that is", "start_timestamp": "01:43:21", "end_timestamp": "01:43:54", "start_second": 6201, "end_second": 6234, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6201s", "title": "L12 
Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "called curiosity, where you more directly look at, you know, whether something was predictable or not predictable in a more deterministic environment; often that's actually enough, and that's seen a lot of success in many of these game environments. Another thing you could do with self-supervised representation learning for exploration is to think about it in a more deliberate way. You could say: hey, it's not just about getting bonuses after seeing something new; it should also be about thinking about what I should even do before I experience it. I can set a goal", "start_timestamp": "01:43:54", "end_timestamp": "01:44:27", "start_second": 6234, "end_second": 6267, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6234s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "for myself; what makes for a good goal when I'm trying to explore? In Goal GAN, the idea is the following: you have, in this case, let's look at iteration 5 down here, a set of points that you've reached in this maze. You start at the bottom left; you did a bunch of runs to reach a set of points, and what you notice is that when you set goals in the green area you are able to consistently achieve your goals, whereas in the blue area it's high variance, and in the red area you usually don't achieve your goals. We", "start_timestamp": "01:44:27", "end_timestamp": "01:45:05", "start_second": 6267, "end_second": 6305, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6267s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": 
"https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "can induce and say oh actually in the future set my goals in the blue / red area cuz that's the frontier of what I know how to do and so how you're gonna do that you're gonna learn some kind of generative model to generate goals in that regime then go again did you ever have a cell network strained to them generate goals at the frontier of what you're capable of and this allows you to explore Avars much more efficiently because Mary is setting goals to go to places at the frontier of your capability so you continue expanding", "start_timestamp": "01:45:05", "end_timestamp": "01:45:35", "start_second": 6305, "end_second": 6335, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6305s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "your skills you can also do this with a various auto-encoder that's done in rig where the traditional auto-encoder is generating new goals it's those goals and I'm silly not this frontier in the same where they're essentially goals that are similar to things you've seen in the past but the hope is that frequently enough you are the frontier that you learn relatively quickly no can also read way those goals based on how you know how much they're at the frontier measured in something called skew fit which is an expansion to this", "start_timestamp": "01:45:35", "end_timestamp": "01:46:06", "start_second": 6335, "end_second": 6366, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6335s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text": "paper that sometimes changes the sampling in late in space to get closer to sampling 
from the frontier rather than just from what you've seen in the past. So for RIG itself, here are some examples of this in action: you see robots learning to reach and to push. That's the kind of thing that is generally pretty hard to explore for, because normally a robot would just be waving in the air and so forth; here you can set goals that relate to moving objects around, and then it would be inclined to move towards objects and", "start_timestamp": "01:46:06", "end_timestamp": "01:46:43", "start_second": 6366, "end_second": 6403, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6366s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"}
{"video_id": "YqvhDPd1UEw", "text": "move them. Now another thing you can do in terms of exploration, leveraging generative models or unsupervised models, is skill transfer. This should remind you of how we initially motivated unsupervised learning, or some of the motivation, which was that transfer learning can be very effective with deep neural nets. Wouldn't it be nice if we could transfer from a task that does not require labels onto a task that requires labels? That's transfer from an unsupervised learning task to then fine-tuning on a supervised task. Well,", "start_timestamp": "01:46:43", "end_timestamp": "01:47:17", "start_second": 6403, "end_second": 6437, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6403s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"}
{"video_id": "YqvhDPd1UEw", "text": "similar ideas can be applied in reinforcement learning. So what's going on here? So far we mostly talked about going from observations to state, that kind of representation learning, but there's another type of representation learning that matters for reinforcement learning: representations of objectives, behaviors, tasks. The question here is: how do you do unsupervised representation learning for these things? For contrast, what's done now to explore: you maybe put some noise on your actions, and that way you have some random behavior; you might explore something", "start_timestamp": "01:47:17", "end_timestamp": "01:47:47", "start_second": 6437, "end_second": 6467, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6437s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"}
{"video_id": "YqvhDPd1UEw", "text": "interesting, but it's going to take a long time. Sometimes it's shown to be a bit more effective to explore by putting randomness on the weights of your neural network, so you consistently deviate in one way or the other. A good example of why the thing on the right works better than the thing on the left: let's say you're supposed to explore a hallway. If you move left/right with a random walk, it will take very long to get to the end of the hallway and explore both ends of the hallway, whereas the one on the right would induce a bias to walk to the right, and", "start_timestamp": "01:47:47", "end_timestamp": "01:48:17", "start_second": 6467, "end_second": 6497, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6467s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"}
{"video_id": "YqvhDPd1UEw", "text": "maybe with another random perturbation induce a bias to go to the left, and maybe after a couple of rollouts it would have gone to both ends. But that's still really counting on randomness; it's not really using any knowledge or experience from the past to explore something new more quickly. And that's the question we're after: can we use experience from the past to now learn to do something more quickly? For example, if you have
been in environments shown on the left here where, when you're in the environment, you", "start_timestamp": "01:48:17", "end_timestamp": "01:48:48", "start_second": 6497, "end_second": 6528, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6497s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"}
{"video_id": "YqvhDPd1UEw", "text": "don't get to see the red dots (the red dots are just for us; imagine we cannot see the red dots), and any time you get dropped into the environment the reward is at a spot on that semicircle, but you don't know which spot, and so you have to go find that reward. After a while you should realize: I should go to the semicircle and see which point on the semicircle has the reward, and that will be more efficient exploration than to just randomly walk around in this 2D world and then maybe randomly run into the reward on", "start_timestamp": "01:48:48", "end_timestamp": "01:49:16", "start_second": 6528, "end_second": 6556, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6528s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"}
{"video_id": "YqvhDPd1UEw", "text": "that semicircle. Or, shown on the right, imagine you're supposed to push a block onto the red flat target in the back, but you don't know which block you're supposed to push. Well, you'd have a very good strategy saying: I push the purple one, hmm, no reward; okay, I'm going to try the green one, no reward; try the violet one, no reward; then the yellow one, I get reward, so I push the yellow one again and keep collecting reward. That's what we would do as humans. But how do we get that kind of exploration behavior that's much", "start_timestamp": "01:49:16", "end_timestamp": "01:49:47", "start_second": 6556, "end_second": 6587, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6556s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"}
{"video_id": "YqvhDPd1UEw", "text": "more targeted than random motions, in an agent, and how does it learn to do that? Well, what we really want is somehow a representation of behaviors. For example, pushing objects makes for an interesting behavior that often relates to reward, whereas random motion where the gripper does not interact with objects will rarely be interesting and rarely lead to rewards. That's the kind of thing we want to learn in our representation of behaviors. Here is one way we can do that; this one is supervised, but just", "start_timestamp": "01:49:47", "end_timestamp": "01:50:19", "start_second": 6587, "end_second": 6619, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6587s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"}
{"video_id": "YqvhDPd1UEw", "text": "to set some context; it's supervised for now, but we will go from supervised-and-transfer to unsupervised-and-transfer very soon. Imagine you have many, many tasks; for each task you have a discrete index fed in at the top, which is turned into an embedding, and the embedding is appended to the policy input; the current state observation is fed into the policy, and the policy takes an action. If you train this policy on many, many tasks at the same time, then it'll learn, depending on which task is represented with this index, to take a good action for that task", "start_timestamp": "01:50:19", "end_timestamp": "01:50:54", "start_second": 6619, "end_second": 6654, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6619s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley
Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"}
{"video_id": "YqvhDPd1UEw", "text": "but now the additional thing done here is that this latent code z is forced to come from a normal distribution. What does that do? The normal distribution means that even if in the future we don't know what the task is, nobody tells us what the task is, there might be a new task, we can actually sample from this distribution to get exploratory behavior. So you sample a z, and the policy will still do something very directed, something that relates to maybe interacting with objects, as opposed to", "start_timestamp": "01:50:54", "end_timestamp": "01:51:26", "start_second": 6654, "end_second": 6686, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6654s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"}
{"video_id": "YqvhDPd1UEw", "text": "just some random jittering. To make this even stronger, there's a mutual information objective between the trajectory and the latent variable z here, and it turns out that actually helps. So you learn on a bunch of tasks this way, and then you have a new task, and you explore by generating latent codes z; once you find a z that actually leads to good behavior, you'll start collecting higher reward. We can make this less supervised and do it a little differently: we can say, well, let's not", "start_timestamp": "01:51:26", "end_timestamp": "01:52:00", "start_second": 6686, "end_second": 6720, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6686s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"}
{"video_id": "YqvhDPd1UEw", "text": "even have discrete task indexing; let's just have a latent code going in, and learn a policy that pays attention to the latent code while collecting reward. Why would that happen? Well, there will still be many tasks under the hood, but we're not telling it the indices of the tasks; we're just letting it experience reward. So what it'll learn to do is: if it sampled a z that gave successful behavior on a task, it'll reinforce that z; if it doesn't, it'll have to sample a different z, and so forth. So here's a task", "start_timestamp": "01:52:00", "end_timestamp": "01:52:32", "start_second": 6720, "end_second": 6752, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6720s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"}
{"video_id": "YqvhDPd1UEw", "text": "family: every dot on the semicircle corresponds to a different task. So we hope here that it would learn to associate different z's with different spots on the semicircle, such that when it later explores by sampling different z's it would go to different spots on the semicircle, and the one that's successful it's able to reinforce. Same for the wheeled robot here, and here's the block pushing task. Looking at the learning curves, we see that indeed, by getting to pre-train on this notion of", "start_timestamp": "01:52:32", "end_timestamp": "01:53:05", "start_second": 6752, "end_second": 6785, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6752s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"}
{"video_id": "YqvhDPd1UEw", "text": "indexing into tasks, or a distribution over tasks, and then being able to explore by sampling possible tasks, it's able (in blue here) to learn very quickly to solve new tasks compared to other approaches. The generated behaviors we see are also
very exploratory. As desired, the exploratory behaviors indeed correspond to visiting the semicircle; this is the wheeled robot in the middle here and the walking robot, and on the right is the block pushing. What would it look like if you hadn't done representation learning for exploration", "start_timestamp": "01:53:05", "end_timestamp": "01:53:37", "start_second": 6785, "end_second": 6817, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6785s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"}
{"video_id": "YqvhDPd1UEw", "text": "behaviors? Instead of having this nice push behavior, you'd have just some jittery behavior of the robot gripper that wouldn't really interact with the blocks or get any block to the target area. After it's done those exploratory behaviors, of course, the next thing that happens is a policy gradient update, which will update the policy to essentially sample z from a more focused distribution, one that focuses on the part of z space corresponding to the part of the semicircle where the target is, or to the block that needs to be pushed. Okay, now what we did", "start_timestamp": "01:53:37", "end_timestamp": "01:54:11", "start_second": 6817, "end_second": 6851, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6817s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"}
{"video_id": "YqvhDPd1UEw", "text": "here was transfer from having a set of tasks to now solving a new task relatively quickly by having good exploration behavior. But we still needed to define a set of tasks and then transfer from that. The question now is: how is this going to work completely unsupervised? We just have the robot, on its own, learn a range of behaviors, and then at test time explore in a meaningful way to zone in on a specific skill quickly. Take a look: there are actually multiple lines of work that effectively do the same thing but try a different objective, with the same", "start_timestamp": "01:54:11", "end_timestamp": "01:54:50", "start_second": 6851, "end_second": 6890, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6851s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"}
{"video_id": "YqvhDPd1UEw", "text": "high-level idea. The high-level idea: we're still going to have a policy pi that conditions its actions on the observation (the current state) and a latent code, which might come from a discrete codebook or from a latent variable with a normal distribution, so that we can resample it in the future. Rolling this out gives trajectories, and the way we're going to pre-train this is by saying that there needs to be high mutual information between the trajectory that results from this policy and the latent code it's acting based upon. So you", "start_timestamp": "01:54:50", "end_timestamp": "01:55:22", "start_second": 6890, "end_second": 6922, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6890s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"}
{"video_id": "YqvhDPd1UEw", "text": "start a rollout; at the beginning of your rollout you sample z, and you keep z fixed for the entire rollout to get a trajectory. You want the trajectory to say something about whatever z you used for this trajectory. What does it mean to have high mutual information between the trajectory and z? It can be measured in many ways, and that's what these four different papers do: the first paper uses a discrete variable and the trajectory, the second paper looks at z and the final state, the third paper looks at z and every", "start_timestamp":
"01:55:22", "end_timestamp": "01:55:54", "start_second": 6922, "end_second": 6954, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6922s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"}
{"video_id": "YqvhDPd1UEw", "text": "intermediate state independently, summed together, and the fourth one looks at z and the full trajectory as a whole; and they all get fairly similar results, actually. So here's the third paper, the Eysenbach et al. paper, showing the range of behaviors that comes out of this when you apply it to the cheetah robot. For different z's you get different behaviors here; we see that with high mutual information between z's and trajectories, different z's give trajectories that look very different to us: indeed, a different z results in a very different", "start_timestamp": "01:55:54", "end_timestamp": "01:56:28", "start_second": 6954, "end_second": 6988, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6954s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"}
{"video_id": "YqvhDPd1UEw", "text": "trajectory. And of course the beauty of this is that it learns all these behaviors for the different z's. Now at test time, if you need to do something, say run at a certain speed, either there will be z's that already correspond to running forward and you can fine-tune the z directly, or you can run RL to learn a policy that figures out the z that will result in the behavior you want. Here are some videos from one of these papers, looking at all kinds of different trajectories corresponding to", "start_timestamp": "01:56:28", "end_timestamp": "01:57:02", "start_second": 6988, "end_second": 7022, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6988s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"}
{"video_id": "YqvhDPd1UEw", "text": "corresponding to all kinds of different latent variables z. So we see: same latent variable z, same kind of trajectory gets output. And here are some more videos (some of these cannot be played for some reason), but here's a cheetah robot with the Achiam et al. approach. This is not to say that the Achiam et al. approach might be better than the Eysenbach et al. one; I think it's just to show that it's actually very similar, so the difference in those four objectives might not be too important.", "start_timestamp": "01:57:02", "end_timestamp": "01:57:47", "start_second": 7022, "end_second": 7067, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=7022s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"}
{"video_id": "YqvhDPd1UEw", "text": "There are actually some limitations to this approach; this observation comes from the Achiam et al. paper. When you have a humanoid, which is very high dimensional compared to the cheetah (which essentially just kind of stands up, or runs, or is on its head), and you try to find high-mutual-information behaviors between z and trajectories, it can take a long time, or you can have a lot of mutual information with all trajectories actually being on the ground, because there are a lot of different things you can do on the", "start_timestamp": "01:57:47", "end_timestamp": "01:58:21", "start_second": 7067, "end_second": 7101, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=7067s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"} {"video_id": "YqvhDPd1UEw", "text":
"ground, and it's not something where you necessarily automatically get it to run around; running is very hard to learn, whereas doing all kinds of different tricks on the ground is much, much easier. Okay, so let me summarize what we covered today. We covered a lot of ground, much more quickly than in most of our other lectures, because this lecture is more of a sampling of ideas of how representation learning and reinforcement learning have come together, rather than a very deep dive into any one", "start_timestamp": "01:58:21", "end_timestamp": "01:58:53", "start_second": 7101, "end_second": 7133, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=7101s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"}
{"video_id": "YqvhDPd1UEw", "text": "of them as we've done in previous lectures. The big high-level ideas: when we train a neural network with deep reinforcement learning, it's worth looking at auxiliary losses, and if those losses are related to your task, they might help you learn more quickly than if you did not have those auxiliary losses; the most canonical paper there was the UNREAL paper. Under the hood, a lot of this comes down to state representation: if we have high-dimensional image inputs, well, hopefully under the hood in this task often there", "start_timestamp": "01:58:53", "end_timestamp": "01:59:28", "start_second": 7133, "end_second": 7168, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=7133s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"}
{"video_id": "YqvhDPd1UEw", "text": "is a low-dimensional state, and so there are many things you can do to try to extract a latent representation that is closer to state than the raw pixels. Once you're working with a latent representation closer to state, or maybe even matched to a state, learning might go much more quickly; and in fact we've seen with the CURL approach that it's possible to learn almost as quickly from pixels as from state. It's not just about turning a raw sensor observation into a state; there are other things you can do with representation learning in RL. You", "start_timestamp": "01:59:28", "end_timestamp": "02:00:05", "start_second": 7168, "end_second": 7205, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=7168s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"}
{"video_id": "YqvhDPd1UEw", "text": "can have it help with exploration. It can help with exploration by helping you generate exploration bonuses, essentially measuring which things are new. Canonically, in a tabular environment, this is measured by visitation counts, but in high-dimensional spaces you'll always visit new states, so you need to measure how different a new state is from past states, which you can do with generative models and likelihoods. Another thing you can do in terms of exploration is to think about generative models for behaviors that are", "start_timestamp": "02:00:05", "end_timestamp": "02:00:38", "start_second": 7205, "end_second": 7238, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=7205s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"}
{"video_id": "YqvhDPd1UEw", "text": "interesting, such that exploration becomes a matter of behavior generation rather than random actions all the time. Or you can learn generative models for goals that might be interesting to set, and then set goals with your generative model for a reinforcement learning agent to try to achieve, to expand the frontiers of its capabilities. Another thing you can do is unsupervised skill
discovery. In unsupervised skill discovery, what we do is we essentially have no reward at all in a pre-training phase, but the hope is that the agent nevertheless starts exhibiting", "start_timestamp": "02:00:38", "end_timestamp": "02:01:14", "start_second": 7238, "end_second": 7274, "url": "https://www.youtube.com/watch?v=YqvhDPd1UEw&t=7238s", "title": "L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020", "thumbnail": "https://i.ytimg.com/vi/YqvhDPd1UEw/maxresdefault.jpg"}
{"video_id": "BnpB3GrpsfM", "text": "to be able to introduce Alec Radford. Alec Radford is a research scientist at OpenAI. Alec has pioneered many of the latest advances in AI for natural language processing; you might be familiar already with GPT and GPT-2, and Alec led those efforts at OpenAI. And of course earlier in the semester we covered DCGAN, which was the first GAN incarnation that could start generating realistic-looking images; that was also led by Alec. It's a real honor to have Alec with us today. And yeah, Alec, please take it away from here", "start_timestamp": "00:00:00", "end_timestamp": "00:00:39", "start_second": 0, "end_second": 39, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=0s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"}
{"video_id": "BnpB3GrpsfM", "text": "Yeah, totally, I'm super excited to be here and present, because this course covers my favorite research topic: unsupervised learning. I'm just really excited to chat with you all today. So today I'm going to focus on the NLP and text side, and I'm just going to start the timer. Today I'll be talking generally about learning from text in a scalable, unsupervised fashion; I'll give a history of the field and some of the main techniques and approaches, and walk through the methods and kind of where we are today", "start_timestamp": "00:00:39", "end_timestamp": "00:01:10", "start_second": 39, "end_second": 70, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=39s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"}
{"video_id": "BnpB3GrpsfM", "text": "as well as providing some commentary on supervised learning versus unsupervised learning in NLP, and why I think unsupervised methods are so important in this space. Yeah, so let's get started. So, learning from text: one of the prerequisites to start with is that standard supervised learning requires what we'd call machine-learning-grade data. What I mean by that is that your canonical machine learning dataset, at least in an academic context, is", "start_timestamp": "00:01:10", "end_timestamp": "00:01:42", "start_second": 70, "end_second": 102, "url":
"https://www.youtube.com/watch?v=BnpB3GrpsfM&t=102s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"}
{"video_id": "BnpB3GrpsfM", "text": "something like this: you use a crowd-worker pipeline and you very carefully curate gold-standard labels for some data you're trying to annotate. This is a pretty involved, expensive process, and you're often emphasizing quality, specificity, and preciseness with respect to the thing you care about, the task you're trying to predict, and maybe a very specific targeted data distribution. What this often means is that you get a small amount of very high-quality data; even for some of the largest efforts in the space, just because you have paid human", "start_timestamp": "00:01:42", "end_timestamp": "00:02:16", "start_second": 102, "end_second": 136, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=102s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"}
{"video_id": "BnpB3GrpsfM", "text": "feedback involved, and sometimes you're ensembling the predictions of three, five, or more labelers, a few hundred thousand examples is often a big dataset, especially for NLP. In computer vision you sometimes see things like ImageNet, where they push that to a million or ten million, but those are far outliers, and very many canonical NLP datasets might only have five or ten thousand labeled examples. So there's not really a lot of machine-learning-grade data out there, at least compared to what the current", "start_timestamp": "00:02:16", "end_timestamp": "00:02:49", "start_second": 136, "end_second": 169, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=136s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"}
{"video_id": "BnpB3GrpsfM", "text": "learning complexities and efficiencies of current models are. One of the primary criticisms of modern supervised learning, deep learning in particular, is how data-intensive it is, so we really have to get that number down. This lecture is basically going to discuss the variety of methods that have been developed for using the natural language that is available beyond the machine-learning-grade data: unsupervised or scalable self-supervised methods that hope to somehow pre-train on some", "start_timestamp": "00:02:49", "end_timestamp": "00:03:18", "start_second": 169, "end_second": 198, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=169s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"}
{"video_id": "BnpB3GrpsfM", "text": "auxiliary objective or task, or hand-design some method, that allows you to improve performance once you flip the switch and go to supervised learning on the standard machine-learning-grade data, or, in the limit, as we'll talk about later, get rid of the need entirely for a classic supervised learning dataset and potentially begin to learn tasks in a purely unsupervised way and evaluate them in a zero-shot setting. There's a variety of methods; this lecture is going to focus primarily on autoregressive maximum-likelihood", "start_timestamp": "00:03:18", "end_timestamp": "00:03:51", "start_second": 198, "end_second": 231, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=198s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20",
I'll perform the standard language model based methods that I kind of will as the core of the presentation and we'll talk more about the details of the differences as we get to those parts so some more motivation in intro as we've kind of going so I think I think one of the ways to think about this is like what do we do with the Internet so you know the wild Internet appears and you can either have your glowing brain ask representation on the left we can laugh at or we can make it you know how messy", "start_timestamp": "00:04:20", "end_timestamp": "00:04:51", "start_second": 260, "end_second": 291, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=260s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "and random and weird and difficult it might be for algorithms to learn from it on the right so that's good old Geocities and so you know there's a lot of skepticism I think about kind of these approaches that might kind of at the highest level look kind of silly or kind of whimsical to be like let's just throw an algorithm at the internet and see what comes out the other end but I think that's actually kind of one of the like one seven summaries of basically what modern NLP has been seeing a lot of success from and you know I think one of", "start_timestamp": "00:04:51", "end_timestamp": "00:05:21", "start_second": 291, "end_second": 321, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=291s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "the reasons why is just the Internet is so big there's so much data on it and we're starting to see some very exciting methods of learning from this kind of messy large-scale and curated 
data and so there's a great tweet from an NLP researcher just kind of showing just how big and you know kind of just massive the Internet is where you can go and find an article about how to open doors and you know there's often a lot of arguments saying that oh you know we're not going to you know and it feels", "start_timestamp": "00:05:21", "end_timestamp": "00:05:52", "start_second": 321, "end_second": 352, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=321s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "wrong in the limit to be like yes let's just throw algorithms at the internet and see what happens like that doesn't match human experience that doesn't match kind of the grounded embodied agents that you know we think of as you know intelligent systems and instead is this kind of just like processing bits or abstract tokens and so there's a lot of skepticism about this approach but I think that just quantities of scale and other methods play very well with current techniques and you know you see lots of arguments about things like oh", "start_timestamp": "00:05:52", "end_timestamp": "00:06:18", "start_second": 352, "end_second": 378, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=352s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "there's this long tail and we're never going to be able to deal with composition and really it's just maybe brute force can get us surprisingly far in the near term not saying that these methods or techniques are the end-all be-all but at least today there's I think strong evidence that we shouldn't dismiss this somewhat silly approach at a high level so let's start
with kind of I think what would be the like simplest starting point that we can convert from this kind of high-level idea into something that", "start_timestamp": "00:06:18", "end_timestamp": "00:06:50", "start_second": 378, "end_second": 410, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=378s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "looks like a machine learning algorithm so we process a bunch of texts on the internet let's say and we're going to build this matrix called the word word co-occurrence matrix and so what we can kind of think of is it's a square matrix where the ith entry corresponds to for a given word like water the count of another word and whether they co-occur with each other so it might be you have to define what a co-occurrence is so that just means that the two happened to be present together and you might define a window for this for instance they both", "start_timestamp": "00:06:50", "end_timestamp": "00:07:22", "start_second": 410, "end_second": 442, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=410s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "occur in the same sentence or within five words of each other or in the limit you can go quite far with like just happen to occur in the same document on the internet and so you're just gonna brute force kind of count this it's just counting that's all it is we're just going over you know tons and tons of text and we're just building up this table basically so just a lookup table and it just tells you oh the word steam and water co-occur 250 times or you know the word steam is just in the data set 3 to 24 times total or you know",
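The counting procedure described here can be sketched in a few lines of Python. This is a toy illustration, not code from the lecture; the tokenizer, window size, and example sentence are all made-up stand-ins.

```python
from collections import Counter

def cooccurrence_counts(tokens, window=5):
    """Count, for every pair of words, how often they appear within
    `window` positions of each other -- the raw lookup table described above."""
    pair_counts = Counter()
    unigram_counts = Counter(tokens)
    for i, left in enumerate(tokens):
        for j in range(i + 1, min(i + 1 + window, len(tokens))):
            # canonical ordering so ("steam", "water") == ("water", "steam")
            pair_counts[tuple(sorted((left, tokens[j])))] += 1
    return pair_counts, unigram_counts

tokens = "hot water makes steam and steam condenses into water".split()
pairs, unigrams = cooccurrence_counts(tokens, window=3)
```

In practice you would stream billions of tokens through exactly this loop (for instance as a map-reduce or Spark job) rather than a single sentence.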
"start_timestamp": "00:07:22", "end_timestamp": "00:07:49", "start_second": 442, "end_second": 469, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=442s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "the words hot and water you know 19500 forty times so that's all we're doing and this is a way you know one this is incredibly scalable you can just run a spark job over the entire internet with this kind of system you can quickly get this giant table and it's you know I'm not computationally intensive it's just counting and processing and tokenization this thing can be run on a common desktop and get very far and it's simple it's just counting so how good is counting a bunch of stuff like we're we're talking about something incredibly", "start_timestamp": "00:07:49", "end_timestamp": "00:08:22", "start_second": 469, "end_second": 502, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=469s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "basic it's just kind of how often do these two things occur together and I think you know one of one of the big takeaways that I'm gonna have a lot of during this presentation is just how far these simple methods that are scalable and with large amounts of data can get so this is a great example of a paper called combining retrieval statistics and inference to answer elementary science questions it's from Clark at all AI - from 2016 and what they do is they take the same data structure this word word co-occurrence matrix", "start_timestamp": "00:08:22", "end_timestamp": "00:08:53", "start_second": 502, "end_second": 533, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=502s", "title": 
"L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "they did let me start with the task so the task is elementary science questions so it's just I believe through 5th grade kind of you know elementary school kind of simple settings questions so they're multiple choice therefore no possible answers and there are these kind of simple things like a student crumpled up a flat sheet of paper into ramble what property the one who changed hardness color master shape or you know what property of a mirror makes it possible for a student to see an image in it is it volume magnetism reflectiveness or", "start_timestamp": "00:08:53", "end_timestamp": "00:09:25", "start_second": 533, "end_second": 565, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=533s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "connectivity so this is a kind of thing that like you know again it's pretty basic in terms of like the high levels they're you know relatively simple facts and they don't require all that much in the form of reasoning or comprehension but there's still the kind of thing that we do give to is you know kids learning about the world and so you might think that like oh you know this is the kind of thing where to understand a mirror you really need to you know exist in the world and to you know learn about all these properties or to have a teacher", "start_timestamp": "00:09:25", "end_timestamp": "00:09:51", "start_second": 565, "end_second": 591, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=565s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": 
"https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "and how are we gonna get there we're just kind of this brute force thing that just counts a bunch of words and puts them into a table and then starts looking them up and you know the takeaway here is that it can work surprisingly well so you can't quite pass these examples so the specific solver that we're gonna use talking about in a second is the PMI solver and that gets to about 60% but random guesses 25% so we basically almost you know have the error rate and get to addy with just this very dumb brute brute", "start_timestamp": "00:09:51", "end_timestamp": "00:10:22", "start_second": 591, "end_second": 622, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=591s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "force approach so what actually is this solver they call it the point-wise mutual information solver and what you can think of it as is it just scores all of these possible answers so we have this sentence of context of you know the question and then we have you know four possible answers so what we do is we loop over the basically the sentence and we just look for the word toward co-occurrences and we just keep counting them up and we use this scoring formula which is the log of a ratio between two probabilities the first the P of XY is", "start_timestamp": "00:10:22", "end_timestamp": "00:10:56", "start_second": 622, "end_second": 656, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=622s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "the joint which is basically the co-occurrence and so that gets you that count that's 
basically looking it up directly from that table the IJ entry for XY and then you normalize by this kind of baseline assumption which is that the words should not co-occur more than by chance so that would be just their independent probabilities multiplied together as you can imagine those may be quite small and multiplying the two together makes them even smaller but some words co-occur together so a mirror occurs with reflective or", "start_timestamp": "00:10:56", "end_timestamp": "00:11:27", "start_second": 656, "end_second": 687, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=656s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "you know electricity occurs with lightning or you know crumpled up might co-occur with like hardness and so that's all this method does is it kind of just uses these basic associations between words and that can get you surprisingly far it doesn't feel like real learning you know maybe and it does it's definitely not very human-like but it's just an example of kind of the power of basic methods and how something that you know doesn't involve any you know you know intelligence or hand waving that we might make about you know", "start_timestamp": "00:11:27", "end_timestamp": "00:12:01", "start_second": 687, "end_second": 721, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=687s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "complicated systems it's just a big lookup table you know a Spark job that you might run on the internet and it can get you surprisingly far so there's a problem with working with these word to word co-occurrence matrices they're huge so let's say we have a million word
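The scoring rule just described, the log of the joint probability over the product of the marginals averaged over question-answer word pairs, can be sketched like this. The counts below are invented toy statistics standing in for internet-scale tables, not numbers from the paper.

```python
import math

def pmi(x, y, pair_counts, word_counts, total):
    """Pointwise mutual information: log P(x,y) / (P(x) P(y)); 0 if never seen together."""
    joint = pair_counts.get(tuple(sorted((x, y))), 0)
    if joint == 0:
        return 0.0
    return math.log((joint / total) / ((word_counts[x] / total) * (word_counts[y] / total)))

def score_answer(question_words, answer, pair_counts, word_counts, total):
    """Average PMI between each question word and the candidate answer."""
    return sum(pmi(q, answer, pair_counts, word_counts, total)
               for q in question_words) / len(question_words)

# invented toy statistics standing in for counts over a huge corpus
word_counts = {"mirror": 100, "see": 500, "image": 200,
               "reflectiveness": 20, "magnetism": 30}
pair_counts = {("mirror", "reflectiveness"): 15,
               ("image", "reflectiveness"): 8,
               ("magnetism", "mirror"): 1}
total = 100_000

question = ["mirror", "see", "image"]
best = max(["reflectiveness", "magnetism"],
           key=lambda a: score_answer(question, a, pair_counts, word_counts, total))
```

Because "mirror" and "image" co-occur with "reflectiveness" far more than chance predicts, that answer wins the ranking, which is all the solver is doing.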
vocabulary so we have a million words by a million words just to have the full version naively and then you might store it with int32 hopefully you don't need int64 so that's four bytes so storing this whole matrix in memory in a dense representation is four terabytes", "start_timestamp": "00:12:01", "end_timestamp": "00:12:31", "start_second": 721, "end_second": 751, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=721s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "you know that's still huge for today most machines don't have that much memory in them so and you know if we were to kind of like start working with like how do we use this system or how do we kind of make it more general you know we just have this matrix and there's you can definitely design hand-coded algorithms to go look up entries and query on it and we see that they can get quite far but you know we'd like to do more and how does this slot into NLP more broadly so we want to come up with a more compact but faithful", "start_timestamp": "00:12:31", "end_timestamp": "00:13:04", "start_second": 751, "end_second": 784, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=751s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "representation of the relations between the words and the information they represent and we could just say that we really just want to find a way of representing this giant co-occurrence matrix as something more like what we know from deep learning and machine learning in general so here's the algorithm called GloVe from Pennington et al. at Stanford NLP in 2014 so we take that matrix of word word co-occurrences like I mentioned it's
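The four-terabyte figure is easy to check with back-of-envelope arithmetic; the 300-dimensional comparison below uses the embedding size mentioned a bit later in the lecture.

```python
vocab = 1_000_000
bytes_per_entry = 4                               # int32 counts

dense_bytes = vocab * vocab * bytes_per_entry     # full V x V count matrix
embedding_bytes = vocab * 300 * 4                 # one 300-dim float32 vector per word

print(dense_bytes)       # 4000000000000 -> 4 TB, matching the lecture's figure
print(embedding_bytes)   # 1200000000    -> 1.2 GB
```

The contrast is the whole motivation for the next step: a dense embedding table is thousands of times smaller than the naive dense count matrix.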
cheap so you can run this thing on like a trillion tokens and each entry X IJ would be the count of word I", "start_timestamp": "00:13:04", "end_timestamp": "00:13:33", "start_second": 784, "end_second": 813, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=784s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "co-occurring in context with J and what we're going to do instead is we're going to you know learn an approximation of this full matrix and the way we're going to do it is we're going to say we're going to redefine a word as a low dimensional or at least compared to you know a million by a million matrix much more low dimensional vector so we're gonna learn a dense distributed representation of a word and all we're gonna say is this very simple model such that we're trying to predict the log prob or the log co-occurrence", "start_timestamp": "00:13:33", "end_timestamp": "00:14:03", "start_second": 813, "end_second": 843, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=813s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "counts of the X IJ entry and the way we're going to do it is we're gonna look up the vector representation of word I and the vector representation of word J we're just gonna say their dot product should be proportional to the log co-occurrence count and that's all this is and so it's really simple and you can just use a weighted like squared error loss so that's what this f of X IJ is basically a weighting function to account for the fact that some words are way more common and you don't want to over train this thing on like those", "start_timestamp": "00:14:03", "end_timestamp":
"00:14:34", "start_second": 843, "end_second": 874, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=843s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "words and you might also want to like clip because you might have like extremely long tail frequency distributions and things like that so but at the other day you just have there your ID WJ and you had some bias terms and you're just trying to compare that to the log of the rock codes count so this allows us to go from that giant m by m matrix which might be a million by a million to an M by n matrix where there's M words and each is an N dimensional vector and often this turns out that these can approximate that full", "start_timestamp": "00:14:34", "end_timestamp": "00:15:05", "start_second": 874, "end_second": 905, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=874s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "co-occurrence matrix quite well and they're much much smaller dimensionality so they might be just 300 dimensions and you know there's a question of what does this thing learn and how does it approximate that but empirically it just cannot compress it quite well and this might make sense because you can imagine that so many many words just never occur with each other all that often and in fact simple sparse storage of that full matrix can get a lot smaller already but then we work mostly with them distributed representations these days", "start_timestamp": "00:15:05", "end_timestamp": "00:15:32", "start_second": 905, "end_second": 932, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=905s", "title": "L11 Language Models -- guest instructor: Alec 
Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "in deep learning so we're gonna smash it into the framework we know there's another version of this yep the question so do you still have to first build the full matrix and then you run this or so this is a way of having had the full matrix you then run this as a way of like kind of compressing or re-representing the matrix correct thanks mm-hmm so now as an example where you don't have to build that full matrix so there's another variant of very similar kind of and I think usually a more well-known version of kind of an", "start_timestamp": "00:15:32", "end_timestamp": "00:16:05", "start_second": 932, "end_second": 965, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=932s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "algorithm class called word2vec and so word2vec is instead a kind of predictive framework where instead of saying we've got this kind of you know abstract like co-occurrence matrix then we're going to try to like compress it and we represent it as word vectors we're gonna just work with natural sequences of text so you might have you know a five-word sentence like the cat sat on the mat and what you're gonna do is there's going to be a model that's trained to take a local context window like you know the cat sat maybe two", "start_timestamp": "00:16:05", "end_timestamp": "00:16:34", "start_second": 965, "end_second": 994, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=965s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "words of past
context and two words of next context we're going to do an incredibly simple linear operation like summing them and then we're just going to try to predict that word in the center so this is called the continuous bag of words representation continuous because it's a distributed representation bag of words because the operation that composes the context is just sum or a bag and then we just predict the output and we can parameterize that as like the log probability of the word in the center of the context and there's the inverse", "start_timestamp": "00:16:34", "end_timestamp": "00:17:04", "start_second": 994, "end_second": 1024, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=994s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "version of this which is the skip-gram model which given a central word of context tries to predict the window and so this uses kind of the more standard approach of like online training and it just streams over a bunch of examples of text you can use mini-batch training it looks like your standard algorithms now the same way I mentioned some tricks like using the log co-occurrence or a reweighting function you need those same kind of things here again many words span many different ranges of frequencies where you might", "start_timestamp": "00:17:04", "end_timestamp": "00:17:33", "start_second": 1024, "end_second": 1053, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1024s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "have words like 'the' be literally 7% of all your data so if you naively train a word2vec algorithm without subsampling or resampling based on the frequency
distribution seven percent of your compute's going to modeling the word 'the' and then you know some important word like New York City or something or phrase is just basically lost in the noise so we use a reweighting function I believe it's the inverse fifth root so it just works and that just heavily truncates the frequency distribution so they're basically doing the same thing", "start_timestamp": "00:17:33", "end_timestamp": "00:18:05", "start_second": 1053, "end_second": 1085, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1053s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "today this is a predictive framework where it's it takes in a sequence and it tries to predict some subset of that sequence with a very simple linear model and you just have the same word embedding table we talked about but they both do about the same thing and they're kind of the canonical first round of distributed or scalable kind of unsupervised self-supervised representations for NLP again there's there's no you know human supervision classically involved in these algorithms they just kind of have", "start_timestamp": "00:18:05", "end_timestamp": "00:18:33", "start_second": 1085, "end_second": 1113, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1085s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "this automated procedure to just churn through large amounts of data and you know word2vec came out of Google in like 2013 and you know one of the first things that was run on a big CPU cluster with like a very efficient C++ implementation and you shove a bunch of words through it and it works really well and so
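A runnable toy of the continuous-bag-of-words idea just described: sum the context vectors and predict the center word. The corpus, dimensions, and learning rate are invented for illustration, and a full softmax is used here only because the vocabulary is tiny; real word2vec avoids that normalization cost with tricks like hierarchical softmax or negative sampling.

```python
import numpy as np

rng = np.random.default_rng(0)
corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V, d, window = len(vocab), 8, 2

W_in = rng.normal(0, 0.1, (V, d))    # context ("input") embedding table
W_out = rng.normal(0, 0.1, (V, d))   # prediction ("output") embedding table

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

lr = 0.1
for _ in range(500):
    for t in range(window, len(corpus) - window):
        ctx = [idx[corpus[t + o]] for o in range(-window, window + 1) if o != 0]
        target = idx[corpus[t]]
        h = W_in[ctx].sum(0)                 # "bag of words": just sum the context
        p = softmax(W_out @ h)               # full softmax over the tiny vocabulary
        g = p.copy()
        g[target] -= 1.0                     # gradient of cross-entropy wrt logits
        dh = W_out.T @ g                     # gradient flowing back into the sum
        W_out -= lr * np.outer(g, h)
        np.subtract.at(W_in, ctx, lr * dh)   # handles repeated context indices
```

The skip-gram variant mentioned above inverts this: it takes the center word's vector and predicts each surrounding context word instead.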
let's kind of talk about what this does so for this graph I'm gonna talk about how I'm gonna interrupt for a moment if you go back yep so on the left the words are represented by vectors and then you", "start_timestamp": "00:18:33", "end_timestamp": "00:19:04", "start_second": 1113, "end_second": 1144, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1113s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "average and you're supposed to get a vector representing the middle word on the right where do the embeddings live they're the same embeddings WTE so they're both inputs and targets so you you would basically slice out some word WT from your list you would then also pull a sequence of context to be predicted like the word before and the word after and then you would have the same prediction objective like the log prob of that word at that location and there's other approximations that kind of just", "start_timestamp": "00:19:04", "end_timestamp": "00:19:39", "start_second": 1144, "end_second": 1179, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1144s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "glossing over right now how to do this efficiently because computing a full normalization of the predictions over like a full million size vocabulary is very expensive so often you can use a tree structure or a subsampling algorithm where you might normalize over only a randomly selected subset and you can weight that subset and things like this oh for the prediction negative sampling is the prediction some kind of inner product between WT and WT minus 2 or yeah so that would be how you'd get the logit
for the log probs it's a", "start_timestamp": "00:19:39", "end_timestamp": "00:20:12", "start_second": 1179, "end_second": 1212, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1179s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "dot product as well yeah sorry I should've been clear about that operation thank you cool thanks Alec so yeah what do we do with these things so this is where kind of a lot of the first wave of kind of modern you know modern modern is a contentious word but kind of NLP starting to leverage large-scale unsupervised data started figuring out how to use these things so these examples on the left are with GloVe and what we see is kind of a suite of tasks so there's the Stanford Sentiment Treebank which is predicting for a sentence of a movie", "start_timestamp": "00:20:12", "end_timestamp": "00:20:45", "start_second": 1212, "end_second": 1245, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1212s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "review is it a positive review that they like the movie or is it a negative review you know IMDB is another sentiment analysis dataset but it's a paragraph of context TREC-6 and TREC-50 are classifying kind of types of questions like who what where when and SNLI is a much fancier thing of logical entailment so it's kind of measuring the relation between two sentences a premise sentence and a hypothesis sentence and you're basically trying to say given the premise does the following sentence follow logically from", "start_timestamp": "00:20:45", "end_timestamp": "00:21:21", "start_second": 1245, "end_second": 1281, "url":
"https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1245s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "it it being tailed is it kind of irrelevant or containing information that's maybe correct but maybe not wrong which would be a neutral or is it actually a contradiction with the previous sentence so you know it might be the first sentence is like you know a woman is walking a dog and then the second sentence is like a man is playing with a cat and that would just be a contradiction of the first sentence so that's s Noy and it's some sensible objective and it's kind of this more complex operation because it's doing", "start_timestamp": "00:21:21", "end_timestamp": "00:21:50", "start_second": 1281, "end_second": 1310, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1281s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "logical reasoning supposedly and it's doing it over semantic concepts like you might need to know the relations between playing an instrument or you know that saxophone is an instrument so that if the premise is a man playing saxophone you need to know that the hypothesis might be you know in tailing it if it's the man is playing a musical instrument so that one has like kind of an interesting relation to some more semantic content and the final example here is squad which is answering dataset so you get a paragraph", "start_timestamp": "00:21:50", "end_timestamp": "00:22:17", "start_second": 1310, "end_second": 1337, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1310s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": 
"https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "from Wikipedia and you have to predict you know given a question what the answer is from that paragraph and so for all of these data sets again this is a pretty broad suite of tasks you see multiple absolute percentage performance jumps from slotting in word vectors compared to randomly initialized components of the models that were used to predict the so you can always do random initialization kind of standard canonical thing and deep learning or you could use these pre trained vectors and so they really do seem to help in terms", "start_timestamp": "00:22:17", "end_timestamp": "00:22:47", "start_second": 1337, "end_second": 1367, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1337s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "of data efficiency and you can see in some cases like for question answering that you can get a 10% plus absolute improvement here for glove glove plus code is another thing which we'll come to in a bit and you know why might these be helping so much so that's the kind of empirical data well on the right here we kind of have some of the work that God did to kind of inspect the properties of these word of vectors so they would for instance have a query vector like the word frog and then they would show all of the different possible nearest words", "start_timestamp": "00:22:47", "end_timestamp": "00:23:18", "start_second": 1367, "end_second": 1398, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1367s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "in terms of just cosine similarity to that first word 
so you can see that you know immediately it's the plural version of it frog to frogs and you know toad is very similar to frog Rana is like I guess a more scientific name and then you get slightly farther on things like lizard so you can see how that can simplify the problem space if we have a distributed model and we have an input that's asking a question about a frog if we don't have any knowledge of the structure of language or the relations between the word frog and toad it's you", "start_timestamp": "00:23:18", "end_timestamp": "00:23:50", "start_second": 1398, "end_second": 1430, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1398s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "know nigh or basically impossible for that model to then potentially generalize to the same question asked about a toad instead but if we have this dense distributed representation that is bringing together these words kind of into this similar feature space then you might expect that well if the you know representation for frog is very similar to the representation for toad the model might just be able to generalize and handle that and you know there's even more relations and properties which go beyond just similarity in that embedding space", "start_timestamp": "00:23:50", "end_timestamp": "00:24:17", "start_second": 1430, "end_second": 1457, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1430s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "you can also get very interesting relations like the concept of like kind of you know like the CEO to a company might all be expressed in kind of the same subspace or the same direction in the
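The nearest-neighbor probe described here is just cosine similarity over the embedding table. The vectors below are tiny hand-made stand-ins (real GloVe vectors are 300-dimensional learned quantities), chosen so the frog/toad geometry is visible.

```python
import numpy as np

def nearest(query, embeddings, k=3):
    """Rank all other words by cosine similarity to the query word."""
    q = embeddings[query]
    sims = {w: float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v)))
            for w, v in embeddings.items() if w != query}
    return sorted(sims, key=sims.get, reverse=True)[:k]

# hand-made toy vectors, not real learned embeddings
emb = {
    "frog":  np.array([0.90, 0.10, 0.00]),
    "frogs": np.array([0.90, 0.15, 0.05]),
    "toad":  np.array([0.80, 0.20, 0.10]),
    "piano": np.array([0.00, 0.10, 0.90]),
}
```

Directional relations in the space (like the CEO-to-company or zip-code-to-city examples) are probed the same way, by ranking similarity against a difference of two word vectors rather than a single word's vector.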
embedding space or connecting a zip code to its city and you can see kind of how you know how could this be happening well co-occurrences kind of get you this don't they you know Honolulu if you want to predict this random number you've got to eventually figure out that oh these you know do occur together because they are", "start_timestamp": "00:24:17", "end_timestamp": "00:24:52", "start_second": 1457, "end_second": 1492, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1457s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "the code of one to the other and so yeah you get a lot of structure even though again one of the views of what all these algorithms are doing is they're just processing very local relations in a very simple fashion and it's just scalable and simple but it can work quite well as a starting point now you know these aren't the end obviously it's only thirty minutes into a two or three hour lecture so there's a long way to go so kind of these are all cool and whatnot and they really did", "start_timestamp": "00:24:52", "end_timestamp": "00:25:24", "start_second": 1492, "end_second": 1524, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1492s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "drive the first few years of modern deep learning NLP and helping to move these models to much higher performance but kind of what might be the issues with them so you know obviously language is a lot more than just the counts of words it has a ton of structure on top of and in addition to words and furthermore context is very important and these kind of fixed static representations of
words that we're learning are just insufficient in many cases so you might have for instance three different sentences I went to the riverbank I made", "start_timestamp": "00:25:24", "end_timestamp": "00:25:53", "start_second": 1524, "end_second": 1553, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1524s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "a withdrawal from the bank or I wouldn't bank on it and all of them have the word bank being used in a very different context and you know basically representing a different thing it's a noun and a verb or you know just a phrase or expression and so you really need to learn how to do more complex things but if you're just counting whether two words happen to occur in the sentence or you know in word2vec kind of looking at a very short window and just using an averaging operator you can't really model all that much", "start_timestamp": "00:25:53", "end_timestamp": "00:26:22", "start_second": 1553, "end_second": 1582, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1553s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "complexity so we need to do more and you know there's also just kind of the design space right now you have this million by 300 dimensional matrix of word vectors and then the question is just what do we do with that and you know obviously we figured out quite a lot of ways to use them but there's a lot that's still up to the practitioner and this often involves a lot of task-specific models slapped on top of it and that's where a lot of the first few years of research in NLP for deep learning went was kind of designing",
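The nearest-neighbor queries described above (frog ranking frogs and toad above wizard) are just cosine-similarity rankings over the embedding table. A minimal sketch, where the tiny 3-dimensional vectors are invented for illustration and stand in for real ~300-dimensional GloVe vectors loaded from a file:

```python
import numpy as np

# Toy embedding table; all vectors are made up for illustration.
embeddings = {
    "frog":   np.array([0.90, 0.80, 0.10]),
    "frogs":  np.array([0.85, 0.82, 0.12]),
    "toad":   np.array([0.88, 0.75, 0.15]),
    "wizard": np.array([0.30, 0.20, 0.90]),
}

def nearest(query, k=3):
    """Rank the rest of the vocabulary by cosine similarity to `query`."""
    q = embeddings[query] / np.linalg.norm(embeddings[query])
    sims = {w: float(v @ q / np.linalg.norm(v))
            for w, v in embeddings.items() if w != query}
    return sorted(sims, key=sims.get, reverse=True)[:k]

print(nearest("frog"))  # ['frogs', 'toad', 'wizard']
```

Related words end up close in this space, which is the property the lecture credits for generalizing a question about a frog to one about a toad.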
"start_timestamp": "00:26:22", "end_timestamp": "00:26:52", "start_second": 1582, "end_second": 1612, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1582s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "all these tests specific models a model for doing question answering a model for doing summarization a model for doing sentiment analysis and they would all kind of take this common input of the word vectors and slap them in but then there was a huge amount of design on top of that and these models got progressively more and more complex with more and more details and so you can kind of think this does like well we only really did the first step sure learning word vectors is great but they're really kind of like learning", "start_timestamp": "00:26:52", "end_timestamp": "00:27:16", "start_second": 1612, "end_second": 1636, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1612s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "just the edge detectors in computer vision and they get us something but we know like in you know debugging for computer vision there's a lot more that goes into comment than just some edge detectors at the beginning system and that's true for an LP as well so there's a lot more going on in language beyond just these input representations so kind of how do we get there well we're going to take a little bit of a detour into the history of language models and kind of walk through how this kind of method and kind of set of generative models", "start_timestamp": "00:27:16", "end_timestamp": "00:27:45", "start_second": 1636, "end_second": 1665, "url": 
"https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1636s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "kind of ended up providing one of the methods for moving beyond just word vectors and kind of introducing the second wave of modern NLP methods that use unsupervised or self supervised methods so fun overview real quickly is kind of seventy years of history here on one slide where we kind of are looking at a language model what is a language model well it models language and it's a generative model so hopefully depending on how nicely it's set up we can draw samples from it to understand kind of what distribution it's actually learned", "start_timestamp": "00:27:45", "end_timestamp": "00:28:16", "start_second": 1665, "end_second": 1696, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1665s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "and how well it's actually approximated the real distribution of language so without getting in the details of how you sample you can kind of see this kind of list here so very early there's this thing called a three gram model from Claude Shannon himself in the 1950s and this kind of still makes basically gibberish they also point to ninety-nine point six billion dollars from two hundred four six three percent of interest rate stores as Mexico and Brazil in market conditions well that's basically gibberish but notice that", "start_timestamp": "00:28:16", "end_timestamp": "00:28:45", "start_second": 1696, "end_second": 1725, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1696s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning 
SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "there's still a bit of like local correlation and structure it says a lot of numbers and then it mentions interest rates after six point three percent or six three percent and that's like all kind of right and you can see how there's the tiniest bit of structure in there beyond just like what it would look like I could be just drew words independently according to their frequencies and then there's been a lot of investment in this kind of field in area over the last few years so Ilya sutskever in 2011 kind of", "start_timestamp": "00:28:45", "end_timestamp": "00:29:12", "start_second": 1725, "end_second": 1752, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1725s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "introduced a character RNN for the task and so here anytime there's a prompt it's highlighted in yellow which means it's a manually specified kind of prefix and then you condition on that and you sample from that so the meaning of life is the tradition of ancient human reproduction that's almost a sentence it is less favorable to the good boy for when to remove her bigger so it quickly fell apart in the second part but it almost got something there and it's still gibberish but it at least shows another hint of structure and then", "start_timestamp": "00:29:12", "end_timestamp": "00:29:42", "start_second": 1752, "end_second": 1782, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1752s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "there's a Rafale or falls paper from 2016 which is basically a much 
bigger word level version of that RNN from 2011 and just kind of used scale and a lot more data and here's a sample drawn from it with even more new technologies coming onto the market quickly during the past three years the increasing number of companies was now Ted called the ever-changing and ever changing environmental challenges online so that's basically a sentence at this point there's a weird thing where it repeats itself with ever-changing and", "start_timestamp": "00:29:42", "end_timestamp": "00:30:10", "start_second": 1782, "end_second": 1810, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1782s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "ever changing but we've now got a phrase you know multiple phrases or clauses and kind of longer term structure there so that's a big amount of progress and again as we talked about with word vectors a lot of their failure is that they don't exploit contexts and they're kind of these isolated representations of only single words so the fact that these language models were starting to learn context as you looked at and inspected their samples is kind of a clue that they're going in the right direction towards some of the", "start_timestamp": "00:30:10", "end_timestamp": "00:30:35", "start_second": 1810, "end_second": 1835, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1810s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "functionality and behaviors we might want in natural language processing so then the next major step came in 2017 2018 with the introduction of the transformer based architecture we'll talk a little bit about that later if that's appropriate but it
handles long-term dependencies much better through self-attention mechanisms and then you start to see potentially multiple sentences that kind of flow together and then the final one here is GPT-2 which can kind of take potentially a pretty low probability or difficult to understand", "start_timestamp": "00:30:35", "end_timestamp": "00:31:05", "start_second": 1835, "end_second": 1865, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1835s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "prompt that you know probably isn't in the training data like scientists discovering a herd of unicorns living in a you know remote previously unexplored valley in the Andes Mountains and they're able to speak English and then it can write something that looks like a news article on top of that this was cherry-picked I'll admit like 20 times I sat there till I got a good one but it's progress and most of these are cherry picked so it's cherry picks again cherry picks all the way down and yeah at that point you basically have something that", "start_timestamp": "00:31:05", "end_timestamp": "00:31:33", "start_second": 1865, "end_second": 1893, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1865s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "just reads like a full news article and it keeps characters and names persistent and you know pulls information from the source sentence over you know multiple paragraphs and this is all a lot of progress being driven in the last few years so kind of now that we have just like a look at the cool samples let's like get into the details here so this is going to be about statistical or probabilistic
language modeling and kind of the way we formulate this is we interpret language as a high dimensional discrete data distribution that we want", "start_timestamp": "00:31:33", "end_timestamp": "00:32:02", "start_second": 1893, "end_second": 1922, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1893s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "to model and kind of the set up since this is a statistical method is we're going to observe a bunch of strings of language and the framing here with a probabilistic language model is we want to learn a function that can just compute the probability or density of new sentences so we want to be able to compute oh what is the probability of the sentence is it going to rain today and we're just going to give it a bunch of strings and somehow we're going to design a model that can compute this quantity so what does it mean to compute", "start_timestamp": "00:32:02", "end_timestamp": "00:32:27", "start_second": 1922, "end_second": 1947, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1922s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "the probability of a string you know what should the probability of the sentence the cat sat on the mat be well you know there's some people who kind of think that this might not be the most well defined concept or there's a lot of reason for skepticism potentially Noam Chomsky in 1969 has a very famous quote but it must be recognized that the notion of the probability of a sentence is an entirely useless one under any known interpretation of this term so some people were quite skeptical to be fair this is well before",
"start_timestamp": "00:32:27", "end_timestamp": "00:32:55", "start_second": 1947, "end_second": 1975, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1947s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "that kind of any of the modern renaissance and he goes on to kind of explain a bit more that there's quite likely that statistical methods can work but it's a good example of kind of where we're coming from and some of the contrast in the field so let's instead kind of like talk about why this concept might be useful like why do we want to know what the probability of this is and this is where I think it begins to see the connection between oh what does the generative and how like we end up using it or why might actually learn useful", "start_timestamp": "00:32:55", "end_timestamp": "00:33:23", "start_second": 1975, "end_second": 2003, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1975s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "functionality for downstream tasks or for transfer and so you know we could compare for instance the probability of the sentence the cat sat on the mat so the probability of the sentence the cat sets on the mat and you know we would expect that let's say we somehow have the true function here we don't know how to learn it yet but we just assume we have like the ground truth of the probabilities of these two sentences well it should assign more probability to the grammatically correct one and that gives you something like grammar", "start_timestamp": "00:33:23", "end_timestamp": "00:33:49", "start_second": 2003, "end_second": 2029, "url": 
"https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2003s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "and that's you know an important part of language is understanding its structure and what are the valid sentences or not but you know should the probability of the sentence the cat sets on the mat be zero well no because sometimes people fudge their keyboard or miss type it should be much lower but it shouldn't be all the way to zero for instance and then you can kind of get to more interesting sentences that you could query you could say you know what's the probability in the sentence the hyena stephannie met and compare that to the", "start_timestamp": "00:33:49", "end_timestamp": "00:34:15", "start_second": 2029, "end_second": 2055, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2029s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "sentence the cat sat on the mat and you know we would say well as a human being asked this you would say well hyenas you know are wild animals they don't often sit on mats unless they're at the zoo or something so this kind of shows how to do this to compute this probability correctly you would need to start to have world knowledge what is a common operator you know what is a common environment for a hyena you know what is even sitting on a mat mean and then you can ask other questions again you could start to get two conditional", "start_timestamp": "00:34:15", "end_timestamp": "00:34:43", "start_second": 2055, "end_second": 2083, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2055s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised 
Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "probabilities too depending on how you set up this generative model is you might be able to query you know given that the prefix is the substring two plus two equals you know what should the probability of the completion for be it probably shouldn't be one because people sometimes joke that two plus two equals five but maybe if you had bit more context you would be able to disambiguate which of those two you might predict and then finally kind of coming back to some of the data sets or tests we've already mentioned if you", "start_timestamp": "00:34:43", "end_timestamp": "00:35:09", "start_second": 2083, "end_second": 2109, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2083s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "have the prefix that movie was terrible I'd rate it one star out of five you you really should know that that is a likely completion and so to do that completion and to generate that sentence and to know that string is likely you basically have to have that language model somehow I've learned what sentiment analysis is and what is a little you know a likely relation between the concept of like one star or five stars the kind of you know reception of the movie or the description of the movie before that and so with that one it kind", "start_timestamp": "00:35:09", "end_timestamp": "00:35:39", "start_second": 2109, "end_second": 2139, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2109s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "of becomes clear that in the limit these 
functions that these language models learn and compute should be quite useful traditionally we approach that as a supervised learning problem right we were gonna like oh let's go build a data set let's go collect you know a bunch of crowd workers and have them assign ratings to a bunch of different movie reviews that's what the Stanford Sentiment Treebank is but in the limit this kind of unsupervised scalable method of just like fit a probability distribution to strings of language", "start_timestamp": "00:35:39", "end_timestamp": "00:36:03", "start_second": 2139, "end_second": 2163, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2139s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "should eventually maybe be able to handle a task like this without any of the classic supervised learning framework being used and yeah so that kind of extends much more broadly those are kind of some you know canonical examples or some toy examples but this actually can be quite useful and this is kind of where language models got their start in many cases 30 or 40 years ago or 20 years ago in kind of machine learning so they're often used for speech recognition and machine translation which again are traditionally approached as supervised", "start_timestamp": "00:36:03", "end_timestamp": "00:36:35", "start_second": 2163, "end_second": 2195, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2163s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "tasks with pairs of transcripts that are somehow aligned and you know a major promise is that we can somehow leverage these language models to you know really help with these problems and for speech
recognition for instance you could prune the space of possible transcriptions from an acoustic model there's a famous example from Geoffrey Hinton of you know how to tell the difference between the sentences wreck a nice beach and recognize speech you know they're very similar from a you know a raw audio perspective but if you have context you", "start_timestamp": "00:36:35", "end_timestamp": "00:37:05", "start_second": 2195, "end_second": 2225, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2195s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "know that these can be quite different things and wreck a nice beach is just also a much less likely string than recognize speech and for translation for instance you could rerank possible translations based on a monolingual language model so if you have an English to French translation system and you have some proposal of the French translation you could say well hey language model that I've trained already how likely do you think the sentence is in French and there's a lot of work on integrating this directly into decoders", "start_timestamp": "00:37:05", "end_timestamp": "00:37:30", "start_second": 2225, "end_second": 2250, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2225s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "and using them as rescoring mechanisms so statistical language models really got their start often in these tasks so let's move towards actually having a computational model of language so first maybe we'll do some pre-processing like lower case so we'll take some maybe messed up text and turn it into just all lowercase to simplify it well then
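The pruning and reranking idea just described can be sketched as follows; the acoustic and language-model scores are invented for illustration and stand in for a real acoustic model plus a trained LM:

```python
import math

# Invented log-probabilities standing in for a trained language model;
# a real system would score candidates with an n-gram or neural LM.
lm_logprob = {
    "recognize speech": math.log(1e-4),
    "wreck a nice beach": math.log(1e-7),
}

def rescore(candidates, acoustic_score, alpha=0.5):
    """Pick the transcription maximizing acoustic score + weighted LM score."""
    return max(candidates,
               key=lambda c: acoustic_score[c] + alpha * lm_logprob[c])

# The two candidates sound nearly identical, so the acoustic scores tie
# and the language model breaks the tie toward the likelier string.
candidates = ["wreck a nice beach", "recognize speech"]
best = rescore(candidates, {c: -10.0 for c in candidates})
print(best)  # recognize speech
```

The same shape of computation covers the translation case: the "acoustic" term becomes a translation-model score and the LM is a monolingual model of the target language.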
you know maybe set a vocabulary size to just like make the distribution easier to handle to set it to like you know a million tokens or something so we might substitute a rare", "start_timestamp": "00:37:30", "end_timestamp": "00:38:03", "start_second": 2250, "end_second": 2283, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2250s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "word like countertop with like an unknown token just so we kind of don't have to deal with this like potentially open-ended probability of observing a novel word I've never seen before and then finally we'll use something like a tokenizer which will take an input string and return a sequence of tokens so it'll chunk it into a sequence somehow with kind of some rules or logic and you know this is another example of classic NLP work on designing tokenizers so we might take you know the cat sat on the mat and chunk it", "start_timestamp": "00:38:03", "end_timestamp": "00:38:33", "start_second": 2283, "end_second": 2313, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2283s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "into just the words and you know throw that punctuation on the end there and then you know because this is machine learning we basically end up representing these words as you know unique identifiers or indices and that's again a way to get a window into how a machine learning model really sees natural language you know we come in as humans with so much understanding in context and from like lived experience but you know if you try to train a naive supervised learning model and you started from random initialization it's
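The preprocessing pipeline just described (lowercase, cap the vocabulary with an unknown token, tokenize, map words to integer ids) might look like this; the toy corpus and tiny vocabulary size are invented for illustration:

```python
from collections import Counter

def build_vocab(corpus, max_size=10):
    """Keep the most frequent lowercase words; everything else becomes <unk>."""
    counts = Counter(w for line in corpus for w in line.lower().split())
    words = ["<unk>"] + [w for w, _ in counts.most_common(max_size - 1)]
    return {w: i for i, w in enumerate(words)}

def encode(text, vocab):
    """Lowercase, whitespace-tokenize, and map each word to an integer id."""
    return [vocab.get(w, vocab["<unk>"]) for w in text.lower().split()]

corpus = ["The cat sat on the mat .", "The dog sat on the log ."]
vocab = build_vocab(corpus)
ids = encode("The countertop sat on the mat .", vocab)
print(ids)  # 'countertop' is unseen, so it maps to the <unk> id
```

The resulting list of bare indices is exactly the "window into how a machine learning model really sees natural language" that the transcript mentions.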
a", "start_timestamp": "00:38:33", "end_timestamp": "00:38:59", "start_second": 2313, "end_second": 2339, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2313s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "lot harder to understand what 223 in 1924 742 followed by 101 23 etc is and I think this helps you get into the mindset of when people talk about machine learning models being spurious pattern matchers or just learning weird correlations that aren't true if you if you've looked at a bunch and it's like tried to do natural and processing tasks as a human where your inputs are represented in this format you'd probably be a lot worse than current machine learning models already are and it'd be understandable if you made kind", "start_timestamp": "00:38:59", "end_timestamp": "00:39:27", "start_second": 2339, "end_second": 2367, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2339s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "of mistakes as an algorithm trying to figure out how to make sense of any of this especially once you get to some more complicated task like do these two sentences logically reason or follow each other you could even just do a simpler thing like split it into spaces so there's a huge design space here and I'm just providing a few examples right now okay so there's character level there's byte level which would be kind of working on you know if you just work on characters how do you deal with non ASCII text or text in there you know non", "start_timestamp": "00:39:27", "end_timestamp": "00:39:56", "start_second": 2367, "end_second": 2396, "url": 
"https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2367s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "a Roman numeral or sorry standard like lettering systems so you could work on like a standard encoding scheme like utf-8 byte stream you could also work on unicode symbols or code points and then there's kind of these middle grounds between word level and character level which would be something like by parent coding and this one actually turns out to be super important so I'm kind of just covering it as part of general NLP methods and it's used by quite a lot of methods in the space now so what this does it starts with the", "start_timestamp": "00:39:56", "end_timestamp": "00:40:25", "start_second": 2396, "end_second": 2425, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2396s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "character level vocabulary and it kind of just merges the two most common pairs of characters at a time so you might have T and H be the most common pair of words or characters and then you'll combine them into a new token called you have th and you'll merge that and you'll resub stitute it in all of your words and then you'll run this loop again and so if you run this and kind of just keep merging and merging and merging it learns basically a tree of merges that quickly pop out words full words like the and you know common endings like IMG", "start_timestamp": "00:40:25", "end_timestamp": "00:40:53", "start_second": 2425, "end_second": 2453, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2425s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised 
Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "and he and two and this learns something that kind of lets us handle potentially the full distribution of of language while also having maybe the efficiency of representing semantic chunks like words instead of operating on these characters which might result in strings that are five or ten times or five times longer and require like much more compute and have much longer term dependencies that are difficult to handle then standard board models so if I clear encoding from recur Center is all over the place and is a very common", "start_timestamp": "00:40:53", "end_timestamp": "00:41:24", "start_second": 2453, "end_second": 2484, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2453s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "middle ground to back off to character level if you see something rare you don't know or to handle like all these different languages while still having some sort of like kind of sensible handling of common words and frequencies hey Alec is there a common kind of number of bite pairs that you want to end up with cuz it sounds like you start with byte level which is just 256 possibilities and then you could imagine that you can have many many by two pairs and sometimes it goes beyond pairs I think right where you recombine an", "start_timestamp": "00:41:24", "end_timestamp": "00:41:59", "start_second": 2484, "end_second": 2519, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2484s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "existing pair with an initial just a bite like 
the 'in' and the 'g' yeah yeah when do you stop so yeah that's a good question usually you have a heuristic for merging only across or only within words so you won't merge across like word boundaries with like whitespace or things like that and that just helps with efficiency because otherwise you'll start wasting merges on things like you know common pairs of you know maybe filler words or stop words and the other thing is you could just in the limit run", "start_timestamp": "00:41:59", "end_timestamp": "00:42:29", "start_second": 2519, "end_second": 2549, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2519s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "this all the way out to a full vocab but we often work on this kind of middle ground where you know to get good coverage of natural language you often need a hundred thousand plus words and you know in the limit if you want to start having you know common names and places you need really like million sized vocabularies and that can just be incredibly computationally expensive so you'll often stick this in a middle ground of like 32K BPEs and you're absolutely right that it'll merge all the way up to a full word like you will", "start_timestamp": "00:42:29", "end_timestamp": "00:42:55", "start_second": 2549, "end_second": 2575, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2549s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "get things like you know neurobiology in the limit that would be merged all the way with BPE just by doing merges over and over again yep thank you cool so how do we compute the probability of a string well the dumbest model is we
can just assume a uniform like prior over tokens and assume they're all independent and we just multiply their independent probabilities together to compute it for any arbitrary sequence dumbest model but we'll start somewhere all right so let's get rid of some of these dumb assumptions so we could", "start_timestamp": "00:42:55", "end_timestamp": "00:43:25", "start_second": 2575, "end_second": 2605, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2575s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "suddenly like say well we know some words are more common than others and that kind of word co-occurrence matrix has that diagonal term which is just the frequencies or counts of words so we could use that instead and you know that would just allow us to say well the word 'the' is really common so we're going to send more probability mass to it and you know the word supercalifragilisticexpialidocious is just pretty rare so this would be called a unigram model where all we do is we just take the product of the", "start_timestamp": "00:43:25", "end_timestamp": "00:43:48", "start_second": 2605, "end_second": 2628, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2605s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "probabilities of the tokens from like the empirical distribution and again we can estimate that just by counting we can then go a bit farther and start to exploit context so again we've talked before about how important context might be and this is where you can start to see language models begin to handle that potentially so you can say that instead of estimating just like the you know
diagonal of that matrix we can use that full matrix basically and say well given that you know we just saw the word 'the'", "start_timestamp": "00:43:48", "end_timestamp": "00:44:16", "start_second": 2628, "end_second": 2656, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2628s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "how often is the word 'cat' after the word 'the' and so we can kind of condition on that previous token and you know use a modified version of that like look at our count table and start to handle a little bit of context so that's a bigram model, a bigram language model but there's a problem of generalization here and this is where kind of counting methods eventually like hit their limit and yeah we can brute force them with all the data on the internet but at the end of the day they're not flexible enough so let's say you've", "start_timestamp": "00:44:16", "end_timestamp": "00:44:41", "start_second": 2656, "end_second": 2681, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2656s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "never seen a word before like 'self-attention' you can't assign zero probability to that if you're trying to optimize for like log loss or something because you just get an infinite loss and you know if we just start going to longer and longer strings this count method explodes and the occurrences of every substring get rarer and rarer and this just kind of hits a wall so in the like 80s and 90s the way we kind of handled this is we kind of accepted that we couldn't handle the longer term dependencies here and we kind of use", "start_timestamp": "00:44:41", "end_timestamp": 
"00:45:12", "start_second": 2681, "end_second": 2712, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2681s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "clever smoothing methods of mixture models where you might put a lot of your probability on you know the bigram or trigram model which is more expressive but you'll smear probability backing off if you don't see a word for instance or don't have a match to that unigram model or uniform model in the limit and so this was kind of what you saw language models in the 80s and 90s spend a lot of their time on is they were still basically statistical count tables but they", "start_timestamp": "00:45:12", "end_timestamp": "00:45:39", "start_second": 2712, "end_second": 2739, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2712s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "optimized for kind of achieving something looking more like generalization of a simple form of kind of combining these mixture models and so this is a good review paper if you ever want to kind of go back through the history of this and all the different methods developed there you start to get things that look more like representation learning and even multi-layer models so they'll start doing things like clustering over parts of speech or substituting for that so it's a very hand engineered way of potentially adding expressiveness but", "start_timestamp": "00:45:39", "end_timestamp": "00:46:04", "start_second": 2739, "end_second": 2764, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2739s", "title": "L11 Language Models -- guest 
instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "it's a good history of kind of where these methods came from so you know since we're talking about NLP and language models is one of the core workhorses here kind of how do you evaluate and interpret a language model well probabilities are often within rounding error of 0 since language is a huge discrete space and a sentence you know or a document might just be very long and so the most common way of evaluating these models and saying how well does it do is we", "start_timestamp": "00:46:04", "end_timestamp": "00:46:33", "start_second": 2764, "end_second": 2793, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2764s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "use like the average negative log probability per token and you know this token definition again might be arbitrary character level or might be word level and so if we're using character level we might convert from you know base e to base two and report bits per character or bits per byte you see a lot of common language modeling benchmarks work in this setting and word level language models often exponentiate that quantity and report what they call the perplexity instead so yeah it's just giving you bigger numbers", "start_timestamp": "00:46:33", "end_timestamp": "00:47:01", "start_second": 2793, "end_second": 2821, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2793s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "and
better improvements because you're working on an exponentiated log scale so how do we ground these numbers they're kind of abstract or random quantities you know what is the difference between one point two three bits per character and one point two bits per character especially if you just spent pretty much your life working on a paper and that's the number you got out so you know it's important to understand these quantities are data set dependent it's really easy to guess all zeroes it's really hard to guess the", "start_timestamp": "00:47:01", "end_timestamp": "00:47:23", "start_second": 2821, "end_second": 2843, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2821s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "arXiv and you know you can start calibrating the scales by saying well random guessing would get you you know log two of you know one over 256 so eight bits per character and human estimates from not the best studies but the only ones we've got have kind of pegged people in the range of like zero point six to one point three bits per character and the best of the models now are often a little bit lower than one bit per character so that range probably is lower for humans and we're somewhere you know getting okay but not", "start_timestamp": "00:47:23", "end_timestamp": "00:47:53", "start_second": 2843, "end_second": 2873, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2843s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "matching humans on these kind of quantities and you know one way of grounding perplexities is to use the same random baseline so it ends up just matching the vocabulary size for like a
standard model so random guessing would be a perplexity of 50k and one way of thinking about perplexity is as like a branching factor of language so perplexity to the N is like the size of the space of possible generations of length N your model might assign so if you have a perplexity of 10 for a language model and you generate you know a two", "start_timestamp": "00:47:53", "end_timestamp": "00:48:22", "start_second": 2873, "end_second": 2902, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2873s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "word sequence there might be a hundred kind of high probability events in that space and human level estimates again are between five and ten and an example though again is this is always data set dependent always problem dependent if you have a lot of well constrained context like in translation these numbers can be a lot lower and best models are often like three perplexity on translation so you're picking between maybe three possible likely words and you know that kind of agrees with like maybe there's a few", "start_timestamp": "00:48:22", "end_timestamp": "00:48:48", "start_second": 2902, "end_second": 2928, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2902s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "different ways for a human to translate a sentence but it's not a huge space by any means so evaluation type 2 is kind of what we talked about so evaluation type 1 is very much the generative model perspective of like well how good of a probabilistic model is this and so type 2 is instead kind of transfer and the things we're really talking about and caring about more
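The evaluation recipe just described (take the average negative log probability per token, convert from nats to bits, or exponentiate it to get perplexity) can be sketched in a few lines of Python. This is a minimal illustration, not anyone's official evaluation code; the function name and the toy held-out probabilities are made up:

```python
import math

def evaluate_lm(token_probs):
    """Given the model's probability for each token of a held-out sequence,
    return (avg negative log-prob in nats, bits per token, perplexity)."""
    n = len(token_probs)
    avg_nll = -sum(math.log(p) for p in token_probs) / n  # nats per token
    bits_per_token = avg_nll / math.log(2)                # base e -> base 2
    perplexity = math.exp(avg_nll)                        # exp of avg NLL
    return avg_nll, bits_per_token, perplexity

# Calibration point from the lecture: random guessing over 256 byte values
# gives 8 bits per byte and a perplexity of 256.
nll, bits, ppl = evaluate_lm([1.0 / 256] * 10)
```

Note that all three numbers are length-independent summaries of the same quantity, which is exactly why they are preferred over raw sequence probabilities.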
there's a lot of ways we could use these language models so you could say how well does a better language model potentially improve the word error rate", "start_timestamp": "00:48:48", "end_timestamp": "00:49:14", "start_second": 2928, "end_second": 2954, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2928s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "for your speech recognition system or the BLEU score for your translation system or the accuracy for your document classification and this is kind of where NLP has really taken off leveraging these language models and kind of the history of the last five years has been discovering more and more ways we could use smarter and smarter language models or better and better language models to do more and more things so let's go through kind of the history here of kind of the sequence of developing real context models, models", "start_timestamp": "00:49:14", "end_timestamp": "00:49:39", "start_second": 2954, "end_second": 2979, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2954s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "that can generalize better than these kind of count based methods we've so far been kind of using it for all of our discussion so the first one here is surprisingly you know like honestly this paper is amazing if you go back and read it it's from Yoshua Bengio and from 2003 and it has a ton of very modern things in it and has skip connections like you see in things like ResNets in 2003 you know it's learning distributed representations of words and this is kind of that core concept we mentioned right at the beginning of like",
"start_timestamp": "00:49:39", "end_timestamp": "00:50:09", "start_second": 2979, "end_second": 3009, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2979s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "representing a word by a vector with learned values for each location this is like the paper that kind of really introduced this in the neural setting and they were doing like large-scale distributed asynchronous SGD on a cluster even back then in 2003 they had to do it because single machine CPUs were so slow so this is like I think it's a 64 or 128 CPU cluster and it would take them I think a month to train a model with like three layers and you know sixty hidden units so this is still an n-gram model but we're using a multi-layer", "start_timestamp": "00:50:09", "end_timestamp": "00:50:45", "start_second": 3009, "end_second": 3045, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3009s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "perceptron to compute the kind of conditioning on the context so instead of this kind of you know count based method we have an MLP that looks at you know the index for word t minus one the index for word t minus two and you know t minus let's say just a three word context so three of these vectors concatenated together of you know the last three words seen we then run it through a hidden layer and then we feed it through a softmax to try to predict what the next word would be so this is a trigram language model still but we've changed", "start_timestamp": "00:50:45", "end_timestamp": "00:51:14", "start_second": 3045, "end_second": 3074, "url": 
"https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3045s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "the model from a count based method to a distributed setting with an MLP and you know this was kind of the first paper that heroically showed that they could match the performance of some of those super optimized n-gram models but again it took like ten days or a month and it was on a giant cluster so you know neural language models really had some catching up to do compared to these smart quick count methods and this is a lot of what took this so long was just unfortunately they do need a lot more compute so then the next major step", "start_timestamp": "00:51:14", "end_timestamp": "00:51:43", "start_second": 3074, "end_second": 3103, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3074s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "here was kind of moving away from these fixed context windows which are kind of unsatisfying we know that as humans we can look back in pieces of text and condition on multiple sentences but these kind of methods always so far have had fixed context windows and have only been able to process or condition on just the last few words so this is kind of where RNNs come in and Tomas Mikolov's 2010 paper it's kind of the first modern deep learning version of this that kind of started working quite well so we", "start_timestamp": "00:51:43", "end_timestamp": "00:52:12", "start_second": 3103, "end_second": 3132, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3103s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", 
"thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "replaced that MLP with a recurrent neural network and that allows for handling potentially unbounded context now it handles that context in a learned fashion so you'll get an input word vector at one time step and you'll have this context buffer which is a learned memory state that the RNN kind of modifies and updates and you'll use that to kind of represent a running summary of everything you've seen that's important for predicting the next potential word this has potentially unbounded context but in practice we'll", "start_timestamp": "00:52:12", "end_timestamp": "00:52:39", "start_second": 3132, "end_second": 3159, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3132s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "train it with methods like truncated backprop where we only update and compute you know how to modify the transition function of the state for up to like maybe 32 words or 64 words so it might be biased in that way but it kind of just still can potentially learn to encode a lot of information into a learned memory system instead of kind of using like hard coded methods of just like keeping the explicit input representations so here we get again like probably one of the first real language models where on that", "start_timestamp": "00:52:39", "end_timestamp": "00:53:12", "start_second": 3159, "end_second": 3192, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3159s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "previous one we were still using a type 1 evaluation we're just like how
well can you predict the next word but here with Mikolov's paper they showed that if you ran this for a speech recognition system you actually get a much lower word error rate so you not only predict better and you start really improving over the traditional like n-gram based language models but if you look at this word error rate table here you actually see that it improves the speech recognition system so your transcriber will improve potentially", "start_timestamp": "00:53:12", "end_timestamp": "00:53:42", "start_second": 3192, "end_second": 3222, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3192s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "like you know by over 1 to 2 points so the KN baseline here is 13.5 percent word error rate so you messed up 13.5 percent of words and using all these RNNs together you could actually reduce that by over two points and so you're talking about like a 20 percent error reduction which is quite significant yeah this is like a lot of early language models were actually published in speech conferences because this was such an important and exciting application of them to start with and again you don't need to collect", "start_timestamp": "00:53:42", "end_timestamp": "00:54:09", "start_second": 3222, "end_second": 3249, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3222s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "a bunch of speech transcription data here in the limit you could just run this thing over like New York Times articles and then use it to help potentially with your speech transcriber and that's where a lot of the power comes in from an unsupervised
scalable method and transfer capabilities so we already showed samples from this one but it's kind of a slightly different version where all these models so far have been operating on words and kind of pre-built tokenizers to split it up and chunk it and kind of fixed", "start_timestamp": "00:54:09", "end_timestamp": "00:54:36", "start_second": 3249, "end_second": 3276, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3249s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "vocabularies the exciting thing with character level models so it's the same kind of architecture, a recurrent network it approximates a richer transition function where you might have a different set of weights with multiplicative interactions this was back when we thought optimization was hard so it's using second-order optimizers because RNNs are scary and we still haven't gotten used to like just first-order methods working well and you know it begins to handle much longer-range dependencies when you work on", "start_timestamp": "00:54:36", "end_timestamp": "00:55:05", "start_second": 3276, "end_second": 3305, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3276s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "character level you're you know suddenly talking about sequences that are five times longer so you start having models that maybe handle hundreds of time steps and that starts you know abstractly meaning maybe you could have a model that could actually parse a paragraph or parse a page and you know it wasn't a lot better than n-gram models in terms of its perplexities but it was very easy to sample from and this was kind of one of the
first I think demos that people might have seen online of the language model back on", "start_timestamp": "00:55:05", "end_timestamp": "00:55:29", "start_second": 3305, "end_second": 3329, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3305s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "some University of Toronto static website from like 2011 so the next, well, a quick question: when you look at character level models versus word level models can you directly compare the perplexity uh if you're careful yes so you know in the limit these are both just predicting a sequence and if you set it up correctly you could just like you know here would be the simplest way to do it with a character level model sorry I should clarify you can go one way so you can compute for a character or byte", "start_timestamp": "00:55:29", "end_timestamp": "00:56:09", "start_second": 3329, "end_second": 3369, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3329s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "level model the perplexities that it would assign to a word level model but some word level models might have limitations like using unknown tokens or out of vocabulary that means they can't actually compute probabilities of arbitrary sentences whereas that's one of the expressive benefits of a character level model so the simplest way to do this would be you would convert the word level model's token sequence, like let's just say it's split on spaces you'll just rejoin on spaces and then compute", "start_timestamp": "00:56:09", "end_timestamp": "00:56:34", "start_second": 3369, 
"end_second": 3394, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3369s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "the probabilities the character level model assigns and you'll have an adjustment factor you could just sum the probabilities over the full sequence and then renormalize by the relevant metric and we'll actually be using that later to talk about how to compare different language models more appropriately but again you need to have the expressivity to handle an arbitrary string to be able to compute this and you know old models because their computation often worked with small vocabularies so they wouldn't truly be computing the", "start_timestamp": "00:56:34", "end_timestamp": "00:57:01", "start_second": 3394, "end_second": 3421, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3394s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "probability of arbitrary strings because they might normalize them in various ways got it thank you yep so the next step is kind of going to multi-layer LSTMs and also introducing the LSTM again even though it came out back in the 90s it kind of got popularized primarily by, you know, one of the major people who popularized it was Alex Graves kind of 2013 ish so this is again a character level model except we now have a gated RNN which uses kind of these multiplicative gates and more complicated transition dynamics to", "start_timestamp": "00:57:01", "end_timestamp": "00:57:33", "start_second": 3421, "end_second": 3453, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3421s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep
Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "better store state and to help compared to like a multiplicative RNN with kind of credit assignment and just trainability and you start to get the things that like can handle kind of arbitrary strings of text so you get you know something that's learning how to parse Wikipedia markdown or XML and Andrej Karpathy kind of really popularized these models with like some blog post in 2015 showing that they like you know work for LaTeX they work for XML they work for Python programs you know they're", "start_timestamp": "00:57:33", "end_timestamp": "00:58:01", "start_second": 3453, "end_second": 3481, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3453s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "not generating valid things but they kind of can handle this they really have this flexibility of kind of you know feeling exciting from an unsupervised learning perspective you give them some data distribution of like Python programs or something and you just you know train over that and then you get something that looks like it's really drawn from that distribution so we've kind of been talking through a lot of the early work here and although there was one example with the Tomas Mikolov paper, Tomas's paper", "start_timestamp": "00:58:01", "end_timestamp": "00:58:27", "start_second": 3481, "end_second": 3507, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3481s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "of actually having an application a lot of this
was kind of just like competing for competing's sake on type one evals or like look at the funny samples so again one of the very fascinating things about the last few years of NLP has been how we figured out how to really use these things much more broadly across the board and this is where I think it really starts to get exciting so one of the first papers to do this it was the skip thought vectors paper from Jamie Kiros and collaborators in 2015 and so what they did is they proposed learning", "start_timestamp": "00:58:27", "end_timestamp": "00:58:56", "start_second": 3507, "end_second": 3536, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3507s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "an RNN sequence encoder to provide context to a language model and basically to learn how to use a sentence level feature extractor so what I mean by that is let's say we have a sentence I could see the cat on the steps what this model is trying to do is it first ingests this context sentence in the middle and they call it skip thought vectors because you can think of this as basically that skip-gram model that was again you take a word in the center and then you predict the word before and the word after this is", "start_timestamp": "00:58:56", "end_timestamp": "00:59:25", "start_second": 3536, "end_second": 3565, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3536s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "generalizing it to sentences and it's using an RNN to kind of learn to summarize the context of the long sequence and handle kind of predicting complex dependencies between multiple words so we encode
that center sentence with an RNN we iterate over it in the left-to-right fashion and then we have a language model that predicts the previous sentence so what might have happened before the sentence and then a language model that also predicts the suffix sentence that comes after it so what they then do is they say well", "start_timestamp": "00:59:25", "end_timestamp": "00:59:55", "start_second": 3565, "end_second": 3595, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3565s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "you know a model that does this task very well should learn to kind of summarize this sentence in the middle and for RNNs it's you know still these distributed representations so you have this state vector that's representing kind of in a learned fashion all of the previous words you've seen so importantly that's now generalized from representations of single words to representations of sequences that can exploit context and potentially handle more complex properties and disambiguate meanings of words and they showed across", "start_timestamp": "00:59:55", "end_timestamp": "01:00:23", "start_second": 3595, "end_second": 3623, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3595s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "the board that these models handily outperformed classic methods like CBOW with word2vec would be the simplest version so you know what does a document how do you represent a document with word2vec well one of the feature representations you could take is to just average the embeddings of each of the words in the document and that would be what this like
CBOW baseline here is on a bunch of different data sets and so you could instead say well we're gonna you know we somehow learned this sorry we somehow learned", "start_timestamp": "01:00:23", "end_timestamp": "01:00:52", "start_second": 3623, "end_second": 3652, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3623s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "this sequence extractor we could run it take its feature representation for each sentence and use that instead and you kind of see that these models you know if you use the combine-skip the like bi-directional models using the forward and backward versions you can actually get these to start to outperform the word2vec models kind of across the board and it wasn't really like this paper was kind of exciting especially from the breadth of things they do they have things like image captioning representations that they", "start_timestamp": "01:00:52", "end_timestamp": "01:01:20", "start_second": 3652, "end_second": 3680, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3652s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "learn and they kind of show you know with analysis methods like t-SNE that you see clustering according to classes you know it was kind of like on the edge where it showed some pretty exciting promise and it was you know a lot stronger than potentially a simple baseline but there were other still discriminative methods for like training models from scratch that were still matching it with like you know well-designed convnet architectures or things like this so although this had like a very exciting kind of oh it's a",
"start_timestamp": "01:01:20", "end_timestamp": "01:01:49", "start_second": 3680, "end_second": 3709, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3680s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "learned feature extractor that's able to handle long term contexts and dependencies it kind of worked but it wasn't like sweeping the SOTAs away it was you know exciting and honestly I think a lot of people ended up using it more as a language model where they saw some cool demos of having it generate multiple sentences but it never really quite you know blew everyone away with its quality so you know this is like a good early hit but it didn't quite you know it wasn't a home run by any means and so this is where Andrew Dai's paper from 2015", "start_timestamp": "01:01:49", "end_timestamp": "01:02:19", "start_second": 3709, "end_second": 3739, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3709s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "semi-supervised sequence learning kind of comes at it from a slightly different angle so again for skip-thought vectors we just used this vector representation as an input to a model and we fix the model itself and we just like train another model on top of this vector representation and it's a fixed vector representation summarizing the whole sentence so maybe that's kind of a difficult task to summarize all the complexities of long sentences short sentences so what Dai et al. did instead was they said we'll take this language", "start_timestamp": "01:02:19", "end_timestamp": "01:02:48", "start_second": 3739, "end_second": 3768, "url":
"https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3739s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "model that we've learned and we're just going to fine-tune it directly we're not going to like precache the features like kind of word2vec-style we're just gonna you know take whatever parameters that language model learned predicting the sequence we're going to use that as an initialization point for training a supervised model for a downstream task and this is the one that started to get good results and they were showing compared to standard supervised learning on you know datasets with like 20,000 labeled examples and stuff like that", "start_timestamp": "01:02:48", "end_timestamp": "01:03:14", "start_second": 3768, "end_second": 3794, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3768s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "that these models could get quite far and so you see in the limit that you know you kind of have all of your different baselines here of you know word vectors feeding as inputs but then we could use like a sequence autoencoder or sequence language model and fine-tune that and you start getting quite large drops here and what's kind of cool here is these two different rows here one of these is pre-training only on the IMDB movie reviews so basically the same data set it's a two-stage algorithm and then this third table here", "start_timestamp": "01:03:14", "end_timestamp": "01:03:45", "start_second": 3794, "end_second": 3825, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3794s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised
Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "or this third row here is using a bunch of unlabeled Amazon reviews and that's you know starting to get towards transfer learning starting to get towards well we can run this thing over a lot of data and as we get more compute we can just get more data from the internet we can feed in more and we see that that actually improves things significantly over only using like the small standard supervised learning dataset in isolation some of this might have just been at the time that it was difficult to train language models and", "start_timestamp": "01:03:45", "end_timestamp": "01:04:09", "start_second": 3825, "end_second": 3849, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3825s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "train RNNs back in the day but you know as we'll see for the rest of this lecture the methods have kind of continued to build on top of this and continue to make progress this is the first one where it got a strong SOTA and you know there were strong baselines before and people started like really I mean well to be fair it came out and not much work happened in the space for the next two years and a lot of that was because it like really just killed it on these sentiment datasets and not as much elsewhere and this really", "start_timestamp": "01:04:09", "end_timestamp": "01:04:40", "start_second": 3849, "end_second": 3880, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3849s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "took some further work to kind
of figure out how do we make this a generalizable approach that works kind of everywhere the same way that like plugging in word vectors does so moving back for one moment to a type 1 eval there was a follow-up paper in the next year that kind of really started to push on scale and compute used for training language models as we mentioned before they've kind of always been compute limited so this was that Google paper that showed kind of the first language model that could generate something", "start_timestamp": "01:04:40", "end_timestamp": "01:05:05", "start_second": 3880, "end_second": 3905, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3880s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "like a coherent sentence and a lot of it was using a larger data set so the billion word benchmark was a big data set at the time and they use an 8K hidden unit projection LSTM which is basically a low-rank factorization of like the transition matrix just to keep the parameter count down while keeping the state size high it's character-aware with some improvements that let it process the character-level inputs so you kind of see on the right that this is starting to get to be a kind of complex system and then they throw", "start_timestamp": "01:05:05", "end_timestamp": "01:05:35", "start_second": 3905, "end_second": 3935, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3905s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "you know a large vocabulary they throw 32 K40s at it so 32 GPUs for three weeks and they kind of really got a huge improvement over the previous results and at this point those
old n-gram language models the old statistical methods were in the mid 40s or even in the 50s and 60s or were hybrid systems and suddenly you're at like 23.7 so you basically have this metric you know again it's exponentiated so it's actually like a 20 percent reduction in like just actual log loss but you know you're starting to see a lot of", "start_timestamp": "01:05:35", "end_timestamp": "01:06:08", "start_second": 3935, "end_second": 3968, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3935s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "significant progress in this space just throwing scale at it and this has ended up being you know something that was really developed just to push it and say how far can we get you know sentence quality can we start to get something that looks like coherence and one of the surprising results is it turned out that this actually paved the way for further methods even though it was just designed to be a really good language model and just better predict the next word it ends up laying the foundations for a method we'll talk about in a little bit called ELMo", "start_timestamp": "01:06:08", "end_timestamp": "01:06:35", "start_second": 3968, "end_second": 3995, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3968s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "that really was the first one to crack how do we use these LMs all over the place and start seeing it working for question answering and you know summarization and all these different domains so there's kind of a bit of a tidbit here we're at an hour should we stop for a little bit or let me check out a stopping point I'm gonna go a bit
farther we could go to about an hour 30 and stop for a little bit longer that is my kind of preference yeah so you know I've motivated scale a little bit so like I mentioned there's a whole", "start_timestamp": "01:06:35", "end_timestamp": "01:07:06", "start_second": 3995, "end_second": 4026, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3995s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "internet out there there's so much information and that perfect language model would you know basically from one view need to fit the Internet into its parameters given how big it is it's not surprising that we're going to need a big model to do that we're going to need a lot of compute potentially to do it to get as close as possible and for many of these tasks we're talking about where you want to learn long term dependencies you want to learn complicated tasks you know they might be quite rare they also are quite difficult so you know the", "start_timestamp": "01:07:06", "end_timestamp": "01:07:30", "start_second": 4026, "end_second": 4050, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4026s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "closer you get the better you are to maybe learning real interesting behaviors versus kind of a basic system that just like is locally plugging a few words together so another you know just vivid way of pointing this out is a small character RNN is basically gibberish you know this is what happens you know this can be a very good architecture but if you don't give it capacity it just can't really learn language you know there's so many words there's so many objects there's so many
relations you really need a lot of", "start_timestamp": "01:07:30", "end_timestamp": "01:07:56", "start_second": 4050, "end_second": 4076, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4050s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "expressivity to handle all that complexity and you know another way of pointing this out is classic resources that were built by humans trying to map out kind of like the relations between all words in natural language you know build hierarchies over them so there's really heroic efforts here like WordNet that were larger than many of the language models we were still training especially a few years ago so it might have like five point five million relational features in this package and you know when you", "start_timestamp": "01:07:56", "end_timestamp": "01:08:21", "start_second": 4076, "end_second": 4101, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4076s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "have it zipped on disk or unzipped on disk it's like already 55 megabytes and you know a lot of common language models especially early on were only a few megabytes of parameters and so we know this is probably going to be very inefficient and you know we're probably going to need quite large models and right now you know the answer we have so far is to kind of address these facts with scale and you know hopefully we do find more efficient methods and we'll talk a bit about that later too but right now you know kind of the", "start_timestamp": "01:08:21", "end_timestamp": "01:08:51", "start_second": 4101, "end_second": 4131, "url":
"https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4101s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "first dumb thing you try is brute force with scale and you know another reason why this is worth investing in is it's now a very well validated empirical trend so across the bottom here is for both language modeling and for like computer vision kind of the performance of models laid out on log scale plots where you see you have a log-scale x-axis which might be the amount of words you train on so every new tick is a doubling of the data set size you know log scale is not great because it quickly gets inefficient but these", "start_timestamp": "01:08:51", "end_timestamp": "01:09:22", "start_second": 4131, "end_second": 4162, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4131s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "trends are incredibly linear they're very predictable so like it's almost like the natural kind of domain to think about is like how does this look on a log scale and you see that again for language models on the left and for like the performance of like captioning systems or sorry image classification systems on ImageNet in the middle so these are quite consistent trends and they span now quite a few orders of magnitude so so far they've continued to improve from 6 million parameters up to 600 million on like", "start_timestamp": "01:09:22", "end_timestamp": "01:09:51", "start_second": 4162, "end_second": 4191, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4162s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20",
"thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "ImageNet and you know data set sizes spanning probably over two orders of magnitude there yeah and also compute becoming available as investment of more resources in machine learning and AI and improvements in hardware and distributed training have kind of allowed for you know even though there's this logarithmic or this heavy demand for additional compute to see kind of finite-sized improvements at least as of yet kind of the industry as a whole has been developing techniques", "start_timestamp": "01:09:51", "end_timestamp": "01:10:21", "start_second": 4191, "end_second": 4221, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4191s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "and systems to keep providing that additional compute to keep these trend lines going so that was kind of just a quick digression on why scale might be important and it really intimately plays into like where these language models came from and how they kind of had their success so here's like kind of a cute example looking at kind of starting to get away from just learning these kind of feature representations that could then be reused by downstream tasks towards maybe we can learn the tasks themselves without having to have", "start_timestamp": "01:10:21", "end_timestamp": "01:10:49", "start_second": 4221, "end_second": 4249, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4221s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "standard human-labeled feedback and kind of shared
that intuition with like the talking about you know computing the probability of the string I rate this you know one star out of five after seeing the prefix of the product review so this is a paper I did in 2017 which was like kind of a very targeted experiment here and one of the hypotheses I was working on was that maybe just data was the bottleneck you know our models are so inefficient that if we were able to just tile kind of in an unsupervised fashion the landscape of", "start_timestamp": "01:10:49", "end_timestamp": "01:11:19", "start_second": 4249, "end_second": 4279, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4249s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "one domain we might care about like product reviews we could maybe do quite well so we made a much larger dataset or rather we used an existing data set from I think UCSD and Amazon in partnership which had 40 gigabytes of text so that was way bigger than that billion word benchmark and it's all in just one domain and we trained a byte-level language model on this for you know a reasonable amount of compute a month on four Titan Xs the model ended up underfitting a lot but you know one of the most interesting", "start_timestamp": "01:11:19", "end_timestamp": "01:11:52", "start_second": 4279, "end_second": 4312, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4279s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "things about this is if we go and poke into that model and say well you have this hidden state that summarizes everything you've seen and we do probes over that we found that actually there was a single unit
within this language model which very vividly and directly just computes a running estimate of what is the sentiment of the characters I've seen so far in the review so you can see that you know as it turns on this is one of Michael Crichton's best books and so we have green colored as positive and red colored as negative so again there's", "start_timestamp": "01:11:52", "end_timestamp": "01:12:19", "start_second": 4312, "end_second": 4339, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4312s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "no supervised learning going on here this is all just unsupervised prediction of a byte stream it just sees a stream of bytes 40 billion in a row and they're all just you know numbers 0 to 255 and it somehow figures out in order to better predict this text you know it recovers this useful feature which is well is this review gonna be excited or you know dismissive and you know it can handle complexity where you know it can switch from a great start you know it's something where it's like you know here in the middle seriously the", "start_timestamp": "01:12:19", "end_timestamp": "01:12:48", "start_second": 4339, "end_second": 4368, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4339s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "screenplay and the directing were horrendous and then it suddenly drops off and it's you know performance analysis and starts going negative you know I can't fault the actors I know good novels especially are hard but this may be the absolute worst disparity in quality between a novel and screen adaptation ever so it really does it and it turns
out that if we just threshold on this unit so we're not even fitting parameters we're fitting one parameter it actually was matching these old word2vec or bigram baselines and even things", "start_timestamp": "01:12:48", "end_timestamp": "01:13:17", "start_second": 4368, "end_second": 4397, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4368s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "like skip-thought vectors and it's just a single unit in the model and we're just running it over the document and you know threshold the value at zero and so this is a histogram for positive reviews and negative reviews of what this system does so this is kind of showing I think in a very clean and pure way how you can really do some unsupervised representation learning here and start to learn something that really helps potentially with downstream tasks it's very hand engineered it was very targeted we knew that like you know", "start_timestamp": "01:13:17", "end_timestamp": "01:13:44", "start_second": 4397, "end_second": 4424, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4397s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "product reviews sentiment is a very important feature so we were kind of really hoping that something like this would happen and it would learn a really good representation but it was you know still kind of a proof point that with limited scale but lots of data you can get something done here a follow-up work we did with Scott Gray was pushing on kind of model size again so we said maybe hidden state size is the bottleneck so again these standard LSTMs and RNNs
summarize the entire past context as a fixed length", "start_timestamp": "01:13:44", "end_timestamp": "01:14:12", "start_second": 4424, "end_second": 4452, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4424s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "feature vector and so that might be for a standard model in a big model like 4096 units or that Jozefowicz model was 8K units and you know if we had like three hundred dimensional word vectors if you naively just concatenated them into that state representation you could only handle like 30 in a row with like you know an 8K or 9K state size that's only about a sentence or two so we thought that you know maybe it just turned out that models were really limited by their state size and so we pushed on these kind of block-sparse", "start_timestamp": "01:14:12", "end_timestamp": "01:14:40", "start_second": 4452, "end_second": 4480, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4452s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "methods that kind of allowed us to train with much larger state sizes that we would factorize the weight matrices such that they would be represented kind of as this two-layered system of having a dense sub-block and a lot of sparse blocks that are pruned away and we saw that these were slightly more efficient in terms of parameters and they also worked better on things like sentiment analysis when evaluated by these linear models which is like a standard probing for how good of a feature representation have I", "start_timestamp": "01:14:40", "end_timestamp": "01:15:08", "start_second": 4480, "end_second": 4508, "url":
"https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4480s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "learned that's partially because when your model is just like lots of features and that's all where their expressiveness comes from you know linear separability is easier in high-dimensional spaces and yeah this was kind of like explaining some of the history of how I was pushing on trying to get these things to work and figure out how do I like really you know push their performance potentially and so this is like showing again that performance analysis of these units learned by these models so this is how", "start_timestamp": "01:15:08", "end_timestamp": "01:15:32", "start_second": 4508, "end_second": 4532, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4508s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "that kind of representation evolves and we show kind of data efficiency on the x-axis here so in the limit we know there's that zero-shot performance of fitting a threshold to zero examples and that actually turned out to be about here on this graph if you use all the data to probe and find it but if you just fit kind of naively as you saw more and more data you you know could start with like in the limit only needing 10 labeled examples to beat some of the original supervised learning baselines which just train systems from scratch", "start_timestamp": "01:15:32", "end_timestamp": "01:16:00", "start_second": 4532, "end_second": 4560, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4532s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning
SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "there's this Recursive Neural Tensor Network paper from Socher et al. very early deep learning work here with a you know really cool complex model and we were able to match it with just ten labeled examples whereas it was trained on all 8,000 in this case and then as we kind of keep adding more and more data we see that the representations learned by these language models can be quite powerful and you're kind of able to like quickly sweep through kind of in the limit you know if you don't have any pre-training you started getting into these", "start_timestamp": "01:16:00", "end_timestamp": "01:16:25", "start_second": 4560, "end_second": 4585, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4560s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "increasingly complex and desperate desperate maybe a judgy word ensembles of 30 different models to hit SOTAs and then we're able to just use this model that exploits this unsupervised learning on a lot more data to push significantly higher and then that small additional improvement with block-sparse had another large jump above that and so this is kind of one of the precursors that kind of heralds what's about to happen on every task over the next few years this is 2017 as like this field really starts taking off so we mentioned this", "start_timestamp": "01:16:25", "end_timestamp": "01:16:56", "start_second": 4585, "end_second": 4616, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4585s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "kind of cool and interesting
thing of learning a single feature within one of these networks that kind of really shows some representation learning going on so there's another really great paper I love here from Roy Schwartz and collaborators in 2017 that I think again starts to speak to hey these language models that are you know recurrent networks or more expressive neural networks are really actually learning something interesting and beginning to be useful for downstream tasks that might be difficult so this is a data set", "start_timestamp": "01:16:56", "end_timestamp": "01:17:26", "start_second": 4616, "end_second": 4646, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4616s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "called the story cloze task so what you do is you have a paragraph of context in this case Karen was assigned a roommate for her first year in college yeah they go to a music show together and it goes really well and then you're trying to train a system to predict which is the right ending and which is the wrong ending and so this fits very cleanly or this is what Roy was quite clever about was realizing that this fits very cleanly into the generative modeling framework you could say well what is the probability of the right", "start_timestamp": "01:17:26", "end_timestamp": "01:17:52", "start_second": 4646, "end_second": 4672, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4646s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "ending versus what is the probability of the wrong ending and again as we get better language models they should start to learn to exploit context and assign correct like you know the correct
probabilities to these different strings and so very early work kind of took the classic supervised learning approach of just throwing you know a you know a model maybe with even word vectors pre-trained at the system and treating it as like a binary classification task but in this case the story cloze task it's difficult to", "start_timestamp": "01:17:52", "end_timestamp": "01:18:16", "start_second": 4672, "end_second": 4696, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4672s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "generate this data they only had 2,000 labeled examples so a purely supervised discriminative system really couldn't get that far and they actually were basically not performing much better than random and so what Roy was able to show is that well if you exploit tons more additional data which was available of like training on small short stories and then you use this model to score the endings so it just produces a single scalar which is like the ratio of the probabilities it's a similar trick to what we talked about before but computed for a", "start_timestamp": "01:18:16", "end_timestamp": "01:18:45", "start_second": 4696, "end_second": 4725, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4696s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "language model where you say well what is the probability of the ending given the story and you normalize by the probability of the ending in isolation and this trick just helps a bit compared to just computing only the probability of the ending given the story that actually still works quite well but you get a fair amount more and so they were able to
significantly improve the performance on this data set again in the limit just using that single feature the RNN LM features here they got almost a 10% jump in performance just by", "start_timestamp": "01:18:45", "end_timestamp": "01:19:11", "start_second": 4725, "end_second": 4751, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4725s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "using the generative model off the shelf there's no discriminative training it's not exploiting statistical you know spurious correlations here because it doesn't see any labels it's just fitting a threshold of what it already thinks is the right ending versus wrong ending you know another quick inner loop of scaling so these kind of all are happening nestled together and I think this gives kind of a sense of how you know research fields often evolve where you see these different authors and different people pushing down different", "start_timestamp": "01:19:11", "end_timestamp": "01:19:36", "start_second": 4751, "end_second": 4776, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4751s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "lines of work and then kind of things come together in exciting ways so this is work from Noam Shazeer here really just pushing on maybe parameter count being the bottleneck you know maybe that's what's been holding back language models and so they really went crazy here and they train models that have these what they call sparsely gated mixture of experts layers so you have your standard LSTMs in pink on top and bottom of this model and then in the middle you sandwich in what's called a mixture of experts layer
and", "start_timestamp": "01:19:36", "end_timestamp": "01:20:01", "start_second": 4776, "end_second": 4801, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4776s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "what this has is it has a gating network that decides to pick basically a two layer fully connected network and it says which one gets slotted in for this given word so you think that you know maybe you want to memorize a lot of information and when you see you know they went to the city blank or something the mixture network and the gating network will say oh I should load up like you know the expert that handles you know where cities are in the world this is kind of just a hand wavy high-level intuition and particularly", "start_timestamp": "01:20:01", "end_timestamp": "01:20:28", "start_second": 4801, "end_second": 4828, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4801s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "when you train this thing at large scale because it's sparse only one expert is being evaluated for any given location at a time so you can group these and you can have many of these experts being trained in parallel and so they're able to push to like you know an eye-popping 137 billion parameters in this language model it's all on this very specific sub module but it ends up being more compute efficient and it has like a lot of clever and very impressive systems engineering work to handle how do you run this thing at scale and you know", "start_timestamp": "01:20:28", "end_timestamp": "01:20:55", "start_second": 4828, "end_second": 4855, "url":
"https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4828s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "have it be efficient when handling so many parameters there so now we come back to transfer evals and kind of the standard slotting in and see how it does and this is like really the paper that kind of set this field off it's called ELMo from Peters et al this is AI2 work again and ELMo is the name of the model but it's really about deep contextualized word representations and this is kind of where there's the clean mark between the word vector era and the language model era and so the way they do this is", "start_timestamp": "01:20:55", "end_timestamp": "01:21:30", "start_second": 4855, "end_second": 4890, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4855s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "they kind of cleverly say well what do word vectors do they slot in kind of as inputs and they re-represent you know this discrete identifier token of like you know the word cab being ID 256 with a distributed representation as we discussed before things like context are missing in this case so this paper talks about how to use a language model to do the same thing they're substituting the input representation but instead what they're using is a deep bi-directional language model so this is kind of the schematic", "start_timestamp": "01:21:30", "end_timestamp": "01:22:03", "start_second": 4890, "end_second": 4923, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4890s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep
Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "here where they have a forward LSTM that will first take in its own learned word representations and run over the sentence in a left-to-right fashion and then they want to you know have context not just for words that happened in the past but words that might be about to happen so they also run a backwards LSTM in the other direction from the right to the left and then they have this bidirectional or sorry excuse me a deep model with multiple layers so they run two layers of LSTM and then what they do is", "start_timestamp": "01:22:03", "end_timestamp": "01:22:30", "start_second": 4923, "end_second": 4950, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4923s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "they learn weighted averages of the word vectors so maybe for some low-level tasks you only want those input representations but maybe for some tasks you really want that kind of long-range context and so you might want to use the higher-level layers and so then they re-represent instead of feeding in those kind of like one-to-one lookups in a table of what the word vector is they have this RNN language model that processes the sentence or a piece of text in both directions and it learns to you know reuse its hidden state representation as", "start_timestamp": "01:22:30", "end_timestamp": "01:22:56", "start_second": 4950, "end_second": 4976, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4950s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "the input to the model instead of the
word vector representation so kind of seeing all those early results like skip thought vectors showing that well you could learn a distributed representation of the sentence this one does it but it does it at a word level and it just cleanly slots in where word vectors used to go and so what this is quite nice about is it allows you to have very direct comparisons with prior work and across the board they basically show that like simple baseline models which substituted to use these representations", "start_timestamp": "01:22:56", "end_timestamp": "01:23:22", "start_second": 4976, "end_second": 5002, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4976s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "instead of word vector representations were outperforming very well-engineered very tuned state-of-the-art systems that were like squeezing as much performance as they could out of word vectors and they're getting you know quite large numbers here where you see you know 10-20% relative or sorry yeah relative error improvements and importantly they kind of have that clean sweep of very many different tasks like question answering entailment coreference NER so even classical tasks like you know part of speech tagging and you know", "start_timestamp": "01:23:22", "end_timestamp": "01:23:50", "start_second": 5002, "end_second": 5030, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5002s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "this kind of really just swept everything and it was very clean it kind of like made clear that okay you know word vectors were great but it's time to you know here comes the new
thing and you know the other very important and fascinating thing I find about this is the language model they used for this system is that language model that Rafal Jozefowicz developed in 2016 at Google along with co-authors like Oriol Vinyals that they really were just pushing on", "start_timestamp": "01:23:50", "end_timestamp": "01:24:19", "start_second": 5030, "end_second": 5059, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5030s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "perplexities they're just pushing on how well can we get a generative model of this text and then you know two years later someone just was like wait a second this thing is learning amazing representations and you know those two works are separated by two years and completely different research labs and they just discovered that you know these language models are really doing something here yeah so that's kind of like really where things turned and you see you know again looking at data", "start_timestamp": "01:24:19", "end_timestamp": "01:24:46", "start_second": 5059, "end_second": 5086, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5059s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "efficiency that when you're at very low amounts of data you get huge improvements like 10 plus percent absolute improvements so that really feels like you know as you get more and more supervised data you can begin to overcome the limitations of you know training from scratch but in the limit you know you want to use as little data as possible you want to learn as quickly as
possible so this is like very exciting and it's kind of like really got everyone to start stirring and paying attention to this field yeah so", "start_timestamp": "01:24:46", "end_timestamp": "01:25:13", "start_second": 5086, "end_second": 5113, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5086s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "final one before the break is kind of you could think of it as pretty much in the same vein as ELMo and what we did instead is we took a better language model again so transformers came out and we were really excited by their ability to handle longer range dependencies and they were also very compute efficient so you could train them quite well and quite fast so we swap out the recurrent network or the LSTM in the language model for a transformer based language model and if we want we could talk a bit about self attention and", "start_timestamp": "01:25:13", "end_timestamp": "01:25:44", "start_second": 5113, "end_second": 5144, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5113s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "transformer based architectures in a bit but now just think of it as like we subbed in a different better architecture and it's slightly larger we use a similar data set of books it's the same data set that skip thought vectors introduced slash trained on and we just fine-tune it the same way that Andrew Dai et al did and this exciting thing here is we saw that we no longer needed these task-specific architectures for each task so you know a lot of the cleanliness of ELMo was that because it was just substituting", "start_timestamp":
"01:25:44", "end_timestamp": "01:26:14", "start_second": 5144, "end_second": 5174, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5144s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "the input representation you could reuse all those engineered architectures and often they would you know compensate for the issues of handling long term dependencies in an RNN with like an attention layer and the like but they still require you know that engineering of each of these tasks for each of these different architectures which means that you're still leaving performance on the table you know it's not like where you're initializing the middle layer features of a CNN instead of like the edge detectors of the lower", "start_timestamp": "01:26:14", "end_timestamp": "01:26:37", "start_second": 5174, "end_second": 5197, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5174s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "layers but then we still are sticking new layers on top so we were trying to like really kind of move towards a general-purpose framework that kind of reuses the same architecture everywhere and not have to have as much of this task-specific engineering which requires a lot of effort and time and grad student hours to like push those systems further so we have this transformer based language model and we kind of showed that for a fair variety of tasks primarily classification we kind of take the same", "start_timestamp": "01:26:37", "end_timestamp": "01:27:03", "start_second": 5197, "end_second": 5223, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5197s", "title": "L11 Language Models
-- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "model and without having to modify it or introduce new layers we could just fine-tune it with only a linear classifier tacked on top and we could across-the-board do quite well and in many cases we were outperforming ensembles the same way that ELMo was doing before and using basically the same unified architecture to perform quite a lot of different tasks and the GLUE benchmark had recently come out as like kind of a standard multi-task benchmark and this is kind of one of the first major ones to bump up accuracy", "start_timestamp": "01:27:03", "end_timestamp": "01:27:31", "start_second": 5223, "end_second": 5251, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5223s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "there and reduce the complexity of it and you know there's two particular things that I'd like to focus on for discussing some of the results from that paper which is if we ablate the number of features transferred we really see that this is a transformer model with 12 self attention blocks we really see that you need all those layers and the random initialization of higher layers was not working well at the time it may be you know as always that you figure out better initialization methods and you can close that gap but you see kind of", "start_timestamp": "01:27:31", "end_timestamp": "01:27:58", "start_second": 5251, "end_second": 5278, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5251s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id":
"BnpB3GrpsfM", "text": "cleanly that we're transferring a deep you know a deep distributed representation and the you know the deeper it was the better it was generalizing and that seemed to hold true across multiple data sets and was a very clean kind of performance increase as you just transfer more and more of those blocks so ELMo is a 2 layer model and now we're going to like a 12 layer model and then this rightmost graph is really the one that I want to focus on and this kind of links together some of the hints and pieces we've been seeing", "start_timestamp": "01:27:58", "end_timestamp": "01:28:25", "start_second": 5278, "end_second": 5305, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5278s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "so far through like many of the different papers which is kind of this interesting behavior sometimes the language model is learning a supervised task or a task we kind of thought needed supervision to be classically trained in the machine learning framework without any direct explicit labeling or supervision of it so what we did here is we took this transformer language model and we kind of designed these heuristic ways of having it compute probabilities the same way that Roy Schwartz was doing and we kind of started to extend that beyond", "start_timestamp": "01:28:25", "end_timestamp": "01:28:51", "start_second": 5305, "end_second": 5331, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5305s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "just you know the very specific thing like which of these two sentences is most likely so like for instance we could take a
language model and do exactly that example at the beginning and ask it well you just saw a movie review sentence do you think the word very positive or very negative is more likely after seeing this sentence so this would be this probe here which is sentiment analysis in blue and so we show over the course of training this language model we evaluate this kind of zero shot performance probe and we call", "start_timestamp": "01:28:51", "end_timestamp": "01:29:17", "start_second": 5331, "end_second": 5357, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5331s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "it zero shot because and this is a broader you know we didn't invent zero shot by any means but it just means evaluating on a task or data set or a class that we've never seen before and we haven't done standard supervised learning to update the representations or to train the model to do this and so we see that kind of as you train you steadily improve performance we've normalized test performance so that zero is random guessing and one was the overall state of the art you do still see across the board that these models", "start_timestamp": "01:29:17", "end_timestamp": "01:29:42", "start_second": 5357, "end_second": 5382, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5357s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "are you know nowhere near SOTA and often they're less than 50% of the way between SOTA and random guessing but they're showing clear and steady improvements and they're showing that even on tasks like question answering you could actually you know take a paragraph of like a question answering task and
ask it well which of these answers do you think is more likely and you know there's no supervised training here it was trying to predict books and then you ask it like a 5th grade science question and it starts to sorry I shouldn't", "start_timestamp": "01:29:42", "end_timestamp": "01:30:06", "start_second": 5382, "end_second": 5406, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5382s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "anthropomorphize it so much but you just probe it you know you can compute some conditional probabilities from it you start to see progress being made on you know some potentially quite far afield tasks the final point to make here too is self attention and transformers really seem to help a lot here whereas we did the same exact model you know at equivalent size and similar compute with an LSTM and we were seeing that especially on the zero shot tasks sometimes it could do relatively well but on some of them especially ones that", "start_timestamp": "01:30:06", "end_timestamp": "01:30:33", "start_second": 5406, "end_second": 5433, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5406s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "handle long range dependencies you really need these self attention layers to handle long term dependencies cool so I think we're at about half time for the lecture and I think that's probably a good time for a break then fantastic Alec thank you let's take a break till 6:50 Pacific time well about eight minutes does that sound good yeah okay great and I'll pause the recording for a moment in here I'm sure if you have a certain like limitations
on how large your memory is everything you need to run in like a", "start_timestamp": "01:30:33", "end_timestamp": "01:31:12", "start_second": 5433, "end_second": 5472, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5433s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "particular device and you want to like train a large model are there any strategies yeah I mean so I admittedly have been emphasizing the need for scale but it's kind of a continuous spectrum thing and there's some work we'll be talking about later that kind of focuses on efficiency and kind of how far you can push models of a given capacity in size you know probably the answer here I think from a pragmatic perspective is to kind of use whatever is the largest thing you can fit into the given device", "start_timestamp": "01:31:12", "end_timestamp": "01:31:41", "start_second": 5472, "end_second": 5501, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5472s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "framework or kind of you know resource specifications you have but then kind of really pushing on how far you can take that thing and some of the methods and techniques that have been developed especially in the last year or two of kind of increased efficiencies by factors of maybe five to ten so there's I think a lot of promise there from you know really pushing even with a fixed size and many of those still fit on single GPUs yeah thanks cool yeah so yeah I guess given it seems like the class has gone over transformers a few", "start_timestamp": "01:31:41", "end_timestamp": "01:32:14", "start_second": 5501,
"end_second": 5534, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5501s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "times I won't do the super detailed version here so yeah I guess we'll just kind of look through that real quickly so we've kind of talked right now so far about standard mostly standard language models and kind of using different architectures you know character level RNNs and LSTMs or bidirectional LSTMs and transformer based language models and they're always kind of trained with the standard auto regressive left right or in the case of ELMo adding a backwards right-left language model and you know that's nice", "start_timestamp": "01:32:14", "end_timestamp": "01:32:46", "start_second": 5534, "end_second": 5566, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5534s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "it's a it's a clear framework it allows you to compute probabilities easily it allows you to sample kind of in just an iterated fashion it's not the fastest but it's quite simple to do you just sample from the distribution over the next word and then you feed that as a new input and condition on it and then resample and so it's a very clean and like general framework but it may actually not be all that optimal so it's cool and exciting to see some of the things that these language models are doing and some of the work as I was", "start_timestamp": "01:32:46", "end_timestamp": "01:33:14", "start_second": 5566, "end_second": 5594, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5566s", "title": "L11 Language Models -- guest instructor: Alec Radford
(OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "just mentioning has really pushed farther by walking away from that very explicit like left/right auto regressive language modeling strategy so this is a common leaderboard it's called the GLUE benchmark and it combines a set of like nine tasks together and this was pretty important for the field to kind of standardize on the set of tasks people reported on as you can imagine especially early on when the research is kind of scattered and not all that standardized you kind of see you know people picking their favorite benchmarks", "start_timestamp": "01:33:14", "end_timestamp": "01:33:46", "start_second": 5594, "end_second": 5626, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5594s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "I was totally guilty of this myself I really cared about sentiment classification just because I happen to you know find that to be an interesting task and worked on it a lot and so you know I'd report on sentiment classification someone else would report on you know entailment and someone else report on question answering so you've got this lack of commonality and comparison points so the GLUE benchmark came in and said we're going to standardize we're gonna focus on sentence level comparison tasks primarily and we're going to kind", "start_timestamp": "01:33:46", "end_timestamp": "01:34:10", "start_second": 5626, "end_second": 5650, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5626s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text":
"of have a suite of them and we're basically gonna say hey you should report on all of them so you can't hide your bad results on one, and this helped drive a lot of progress too so this is a screenshot of where this leaderboard has gone showing all these new improvements and methods so there's the BiLSTM + ELMo baseline that we mentioned and GPT-1 would have slotted in slightly above that but then there's BERT, now ranked 20, from Jacob Devlin and crew and then there's Facebook AI's RoBERTa as another big jump and so we", "start_timestamp": "01:34:10", "end_timestamp": "01:34:42", "start_second": 5650, "end_second": 5682, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5650s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "saw on the average metric here, which just averages the performance across all these different tasks, that it went from 70 to 80 between the BiLSTM + ELMo baseline and BERT so that was a big jump there and then an almost equally sized jump happened from BERT to RoBERTa which we'll talk about in a bit and then there's newer things like ELECTRA and T5 so this whole area has really exploded in the last year or two in terms of the amount of teams and basically every", "start_timestamp": "01:34:42", "end_timestamp": "01:35:11", "start_second": 5682, "end_second": 5711, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5682s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "major research lab, Microsoft, Stanford, NYU, AI2, pretty much everyone everywhere has been kind of
pushing what they can do on this kind of benchmark and really seeing a lot of progress so we're going to go through some of these improvements that are highlighted here, a select few, there's many others so sorry if I skip over some briefly, so SST-2 is sentiment analysis like we mentioned before so it's kind of a", "start_timestamp": "01:35:11", "end_timestamp": "01:36:15", "start_second": 5711, "end_second": 5775, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5711s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "you know again a diverse suite of tasks here so what are these big improvements we're seeing beyond the standard left-to-right language models, and there's one more point to make, which is there is a human baseline here and it's slotted in actually in the middle, it's in 12th place now, so what does it mean, are these models actually better than people, and you know the answer really is no, it's complicated and confusing and we'll chat about this a bit more later, and supervised learning is always playing tricks on you, but you", "start_timestamp": "01:36:15", "end_timestamp": "01:36:42", "start_second": 5775, "end_second": 5802, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5775s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "know now these models have really, in the last two or three years, by leveraging unsupervised pre-training and scalable methods, made quite a lot of progress in this space very quickly so this is BERT so what BERT does is it basically finds a very great
way to hybridize a language model objective with the importance of bidirectionality so again by default we have this left-to-right autoregressive factorization where we say given the", "start_timestamp": "01:36:42", "end_timestamp": "01:37:12", "start_second": 5802, "end_second": 5832, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5802s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "previous words predict the next word in the language model, that's what GPT-1 does, and so what we see with that is you're not able to exploit context to the right, you're not able to see it, because by masking the model you have to prevent it from being able to just look at the next word and say well I see in my sequence that it's cat so I'll just learn to copy it over and predict cat, so that has a major limitation and when we released GPT-1 we actually weren't able to do well on, or you know we found that some of the", "start_timestamp": "01:37:12", "end_timestamp": "01:37:43", "start_second": 5832, "end_second": 5863, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5832s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "question-answering datasets we just couldn't do as well on because we weren't able to exploit bidirectional context, whereas ELMo was the older bidirectional language model with an LSTM, and because they trained a forward one and a backward one and averaged the representations, which works quite well for shallow models, that gets you that bidirectional context and that can help a ton, and they outperformed us on some datasets because of
that and then BERT basically figures out how to have", "start_timestamp": "01:37:43", "end_timestamp": "01:38:06", "start_second": 5863, "end_second": 5886, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5863s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "bi-directional context within a self-attention model and the way they do this is they change the objective so they're no longer doing standard maximum likelihood training on just the data distribution, they use this kind of proxy task called masked language modeling so again at the bottom here we can see left-to-right LM is like the cat sat on the, and then you blank out a word and it's supposed to predict mat; right-to-left language modeling would be, we'd go the other way around and, say, mask the", "start_timestamp": "01:38:06", "end_timestamp": "01:38:32", "start_second": 5886, "end_second": 5912, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5886s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "cat and predict what's there, and so what masked language modeling does is it just takes your input sequence and corrupts a few locations, 15% in the case of BERT, and it trains the model to predict what's at those masked locations so in this case there's no left-to-right requirement, it just randomly selects 15%, and this allows you to have bidirectional representations; you can't leak the word because it's masked in the inputs, whereas for a standard left-to-right or right-to-left LM you kind of", "start_timestamp": "01:38:32", "end_timestamp": "01:39:02", "start_second": 5912, "end_second": 5942, "url": 
"https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5912s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "hide that, this is probably one detail of self-attention models, that you have the self-attention matrix and that defines the connectivity pattern between different locations in the sequence of inputs that you're processing, and so you use a masked self-attention matrix for standard left-to-right or right-to-left language modeling, where you mask the upper triangle, and that prevents that future leaking, and so when we say bidirectional context that corresponds to training the same self", "start_timestamp": "01:39:02", "end_timestamp": "01:39:33", "start_second": 5942, "end_second": 5973, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5942s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "attention transformer basically, except you no longer have this masking to prevent future locations, like J after I, being able to have I look at and attend to position J after I, so that's the architectural detail that corresponds to this change, and it makes sense that having the ability to look at both sides of the context helps with disambiguation, it helps with information processing and information flow through the model, because the model can query back and, you know, for things like", "start_timestamp": "01:39:33", "end_timestamp": "01:40:01", "start_second": 5973, "end_second": 6001, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5973s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised
Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "question answering for instance, if you have the question after the context you can't update the representations of the context in a left-to-right autoregressive model after you've seen the question, because they're masked, they're hidden from it, so the model isn't doing any right-context-dependent processing, but BERT can actually bidirectionally attend and quickly pass information forward and backward, and this is just what you see, anyone who actually trains a", "start_timestamp": "01:40:01", "end_timestamp": "01:40:24", "start_second": 6001, "end_second": 6024, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6001s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "self-attention architecture from scratch on a supervised task, they always, or almost always, use bidirectional instead of masked self-attention matrices, and so this turns out to have a huge boost on GLUE, so that bump between GPT-1 and BERT, well GPT-1 had like an average of 78 or something, or sorry, excuse me, I think this got reworked and we excluded WNLI, it was a bump of like five-plus percent, so it basically got double the headroom over GPT-1, and they show with very careful", "start_timestamp": "01:40:24", "end_timestamp": "01:41:00", "start_second": 6024, "end_second": 6060, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6024s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "controls that for the exact same model in the
exact same setting it does look quite a bit better, so bidirectionality makes sense also for sentence comparison tasks like entailment, where you have two sentences you're comparing, you really want the model to be able to attend back and forth between them and look at one and then the other, that just seems like correct behavior, whereas GPT-1 would just go left to right and then you'd be done, so yeah BERT ended up being kind of the thing: after ELMo, ELMo", "start_timestamp": "01:41:00", "end_timestamp": "01:41:28", "start_second": 6060, "end_second": 6088, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6060s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "kicked it off especially on the research side and got a lot of people to start investing in this space, BERT is kind of the thing that moved this to the point where suddenly it was ready for more commercialization, production-ready basically, and so this is now deployed in Google search and it's really showing up everywhere, you know if you go to basically any leaderboard BERT variants are often very near the top now on pretty much most NLP tasks, and just like GPT-1 they use the same architecture", "start_timestamp": "01:41:28", "end_timestamp": "01:41:57", "start_second": 6088, "end_second": 6117, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6088s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "everywhere, they remove the need for having these task-specific modules on top, and so this was another incredibly strong step, so you know that was BERT, I guess there's one more point to make
which is because it's predicting these masked tokens it's only predicting, you know you have to set that mask percentage and by default it's often set to like 15 percent, so you should understand that your left-to-right model actually predicts a lot more words because it'll predict the full sequence within a single forward pass", "start_timestamp": "01:41:57", "end_timestamp": "01:42:25", "start_second": 6117, "end_second": 6145, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6117s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "whereas by default you'd have to run a BERT model like six times on average to predict every token, so it turns out that they often learn a bit slower early but then they just keep training and they begin to learn how to use the bidirectional representations to their benefit and then they continue to outperform left-to-right language models, now the problem is you can't sample from it, and it's no longer quite as clear, you can't compute a correctly normalized probability over", "start_timestamp": "01:42:25", "end_timestamp": "01:42:52", "start_second": 6145, "end_second": 6172, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6145s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "the sequence without a lot of work, there's some research into figuring out how to do this with clever methods, but it removes some of the elegance of sampling and having easy density or probability estimates, you're trading that off for this representation capability, so RoBERTa, if we go back to this leaderboard, is the next big jump up from 80
point five to 88.1, you know as a benchmark or important event it solidly sits above the supposed human average baselines here so", "start_timestamp": "01:42:52", "end_timestamp": "01:43:24", "start_second": 6172, "end_second": 6204, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6172s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "what is RoBERTa, RoBERTa is a very well executed engineering refinement of BERT, it's a good example of how so often in this field the second pass at an approach, with maybe the same or a very similar model architecture and algorithm, can just by careful engineering and fine-tuning and tweaking still have tons of extra headroom to it, so they better tune the hyperparameters, they remove a few hacks that the original BERT had, so for instance the original BERT for computational reasons did most of its training on a", "start_timestamp": "01:43:24", "end_timestamp": "01:43:54", "start_second": 6204, "end_second": 6234, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6204s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "relatively short context length, I believe 128 tokens, and then right at the end of training doubled that twice up to 512 tokens for prediction, and so they just train at 512 the whole way through, it's the same model capacity, it has the same runtime per sequence length, but they just spend the pre-training compute to buy that, and when you're thinking about deploying the system one of the important criteria to realize, especially when you're talking about
a system that might get", "start_timestamp": "01:43:54", "end_timestamp": "01:44:23", "start_second": 6234, "end_second": 6263, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6234s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "deployed broadly and used across the world in many different companies once it's released, is that most of the compute is going into inference time, it's not actually going into training time, and so that means that if you have a method of getting further performance improvements by spending more flops at pre-training time it can often be quite worth it from a full-ecosystem view of where the compute is being spent, this is one of the counterintuitive things about how you think about these systems so", "start_timestamp": "01:44:23", "end_timestamp": "01:44:48", "start_second": 6263, "end_second": 6288, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6263s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "they also do better data generation, it turned out that the original BERT, kind of from a simplicity perspective, cached the masking, so they only actually masked the sequences once and they always predict the same mask locations, and you can simply change that to an online setting where you keep resampling the masks and that helps with overfitting, and they also use a more flexible vocab scheme, a full BPE scheme that can do full UTF-8 byte sequences, so you can handle any string at least with the standard byte sequence representation, and then", "start_timestamp": "01:44:48", "end_timestamp": "01:45:17", "start_second": 6288, "end_second": 6317, "url": 
"https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6288s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "they just train longer with more compute, so as we mentioned before BERT is only predicting like one in six tokens on average, so that just means it's under-trained for the equivalent amount of time and you can actually just keep training it longer with more GPUs and continue to see higher and higher performance, and so I mentioned RoBERTa, BERT was on the leaderboards everywhere, well now about eight months later it's RoBERTa everywhere on the leaderboards, and that's still true today largely, except for a few targeted things, I", "start_timestamp": "01:45:17", "end_timestamp": "01:45:42", "start_second": 6317, "end_second": 6342, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6317s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "think if you go to a general NLP leaderboard you're gonna find that the model in first place is some variant of RoBERTa or something, so that's an example again of where there's no super clever new algorithm or approach, and even BERT is a pretty precise refinement of previous work like GPT-1, but it can have a huge impact when it's just well executed, and you know it is I think somewhat exciting from one view where it's", "start_timestamp": "01:45:42", "end_timestamp": "01:46:13", "start_second": 6342, "end_second": 6373, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6342s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20",
"thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "like okay we're really finding that there's a lot of fertile ground here, and with the right tweaks and clever insights we can continue to make further progress, so this is where ELECTRA comes in, and this is I think one of the first ones that shows another new interesting algorithmic improvement and, somewhat excitingly, shows that it's much more efficient, so we mentioned the masking for BERT, so there's actually this kind of gap here, which is the problem is when you", "start_timestamp": "01:46:13", "end_timestamp": "01:46:39", "start_second": 6373, "end_second": 6399, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6373s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "were training you're masking all these input sequences, and you may resample the mask locations, so you're corrupting your inputs, but then when you want to run at test time or when you want to transfer to some downstream task it doesn't make sense to corrupt the inputs, right, because if you were doing sentiment analysis and you masked the token, you know this was a mask movie, you don't know if it's going to be a great movie or a terrible movie in that mask location, so BERT just has a few tricks to minimize this impact", "start_timestamp": "01:46:39", "end_timestamp": "01:47:07", "start_second": 6399, "end_second": 6427, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6399s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "but at the end of the day it's kind of this train test gap where you
trained it with one distribution, with masked inputs, and then you want to test it and predict with it on a different one, and it turns out that gap actually looks to contribute to some performance issues, the other gap is, again, it's only predicting 15% of tokens so it may also be learning slower than it could, because you would have to do a forward prop six times to see the same predicted segment of data, so what ELECTRA does is a very clever hybrid system, so they have", "start_timestamp": "01:47:07", "end_timestamp": "01:47:36", "start_second": 6427, "end_second": 6456, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6427s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "a BERT, or basically a mini BERT, inside of it, so it's the standard masked language modeling technique, and then you sample from it and you say, well for that first word that we've masked what do you think is the right word, so you sample from its distribution over next tokens, and then you're going to feed it into this discriminator, which is the actual ELECTRA model, and its job is to predict whether or not the token at any given location is the original token or a replaced token, so if the generator gets it wrong, again it", "start_timestamp": "01:47:36", "end_timestamp": "01:48:04", "start_second": 6456, "end_second": 6484, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6456s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "might sample something kind of reasonable and kind of correct, like cooked versus ate, and the job of the discriminator, the ELECTRA system, is to just estimate is this the correct one or not, so it's just a
binary classification task but it's done at every location, it's basically saying was this input corrupted, and that allows it to, you know, one, it has a natural input distribution that may be, because this masked language model could be quite good, a lot closer to the real input distribution, so you don't have this", "start_timestamp": "01:48:04", "end_timestamp": "01:48:29", "start_second": 6484, "end_second": 6509, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6484s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "shock when you transfer it to your downstream tasks, and you also speed things up because you're taking a loss and propagating a gradient for every location, because you're always estimating is it the correct one or the wrong one, and that still can be a difficult task for every location, rather than the degenerate thing for like 80 percent of tokens, which is just the identity function for BERT, and so when we look across the board here we see this model against kind of the standard models, and you get like", "start_timestamp": "01:48:29", "end_timestamp": "01:48:53", "start_second": 6509, "end_second": 6533, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6509s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "GloVe or ELMo and they're all kind of smashed up right here near zero, and then GPT-1 starts to move over, and then you get RoBERTa scaling with more and more compute, and then the graph keeps going and you can see that ELECTRA, kind of across the board, can be quite a lot more efficient, often by factors of 5, for kind of equivalent
performance on a dataset, so that's quite exciting, and in the limit they show that for instance ELECTRA-Small, a model quite a lot", "start_timestamp": "01:48:53", "end_timestamp": "01:49:18", "start_second": 6533, "end_second": 6558, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6533s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "smaller than even GPT-1, by exploiting bidirectionality and this dense training objective, can actually outperform GPT-1 in two days on a single V100, whereas GPT-1 took 25 days on 8 P6000s, partially this is because of FP16 versus FP32, but it really shows how, you know, I think unfortunately some people have, and it kind of makes sense because I've talked about the importance of scale and whatnot, some people have kind of written this whole subfield off as whoever has the most GPUs is gonna win, and you know, oh", "start_timestamp": "01:49:18", "end_timestamp": "01:49:48", "start_second": 6558, "end_second": 6588, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6558s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "it's all just training bigger models, and maybe as a grad student or as a hobbyist I don't have access to the resources to do interesting work in this space, but a paper like ELECTRA is really exciting because it shows that a single commercial GPU can actually still produce very interesting results in this space, nominally they still run the full version of the model on a TPU pod, but here you're already having last year's model being beaten in a day or two on a single GPU, next year",
"start_timestamp": "01:49:48", "end_timestamp": "01:50:16", "start_second": 6588, "end_second": 6616, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6588s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "so I think that's a very exciting point, and this is from Clark et al. at Google, Kevin Clark I believe, so it's really exciting work here, then there's this final one, kind of the deluxe result coming out of this space, from Colin Raffel and collaborators at Google, and this is after the first crazy year of, well, there's BERT and now there's RoBERTa and others, all these things coming out one after the other every few months, bumping", "start_timestamp": "01:50:16", "end_timestamp": "01:50:51", "start_second": 6616, "end_second": 6651, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6616s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "up the leaderboard, this is the paper that took a step back and more systematically studied the space, analyzed it, used a lot of compute to do it, but really brought a lot of things together and very carefully curated it, it's a treasure trove of information for this space, it's 50 pages long, there's pages and pages of tables with hundreds of numbers in them, so it can take a while to work through, but I really recommend it as one of the ways to get up to speed on this whole area and all the", "start_timestamp": "01:50:51", "end_timestamp": "01:51:17", "start_second": 6651, "end_second": 6677, "url": 
"https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6651s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "techniques and all the different ways, so they again systematically study this, so there's standard language modeling objectives, there's BERT-style masking, there's their own things like span-based extensions of BERT, and then they also look at differences in the architecture, so there's your standard left-to-right language model, there's encoder-decoders, which could have a bidirectional encoder that processes, say, the previous sentence, kind of Skip-Thought style, and then an autoregressive decoder, and then there's", "start_timestamp": "01:51:17", "end_timestamp": "01:51:45", "start_second": 6677, "end_second": 6705, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6677s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "a kind of hybrid called a prefix LM, which is, well you could have untied weights, but you can think of it as a partial untying of the masking in the self-attention matrix, where you allow some part of the sequence to do bidirectional attention, like on the past, and then you switch over at some point to doing autoregressive language modeling, so you can potentially get the benefits of bidirectional representations for past context, or in the limit, if your downstream task is always just going to", "start_timestamp": "01:51:45", "end_timestamp": "01:52:11", "start_second": 6705, "end_second": 6731, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6705s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20",
"thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "be bi-directional you can just run it in purely bi-directional mode so it's kind of trading a hybrid system and I think that was also a quite clever improvement they had the other thing I really like about this paper is it goes even farther in terms of elegance of kind of this shared framework for doing all tasks and all predictions so kind of one of the trends has been moving away from these custom architectures to the kind of shared pre-trained models that are a little bit more monolithic and can be used across a wide range of tasks with", "start_timestamp": "01:52:11", "end_timestamp": "01:52:37", "start_second": 6731, "end_second": 6757, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6731s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "high performance and so tv5 you know typically like you know what do t1 and Bert the only difference we would do is we still flawed in the linear classifier at the end you like predict which of the right classes it's correct so what t5 says instead is and this is extra something that Brian McCann and collaborators at Salesforce introduced about two years ago is they basically say we're gonna phrase everything is like pure natural language pure question-answering or something so we're going to give the model like you know a", "start_timestamp": "01:52:37", "end_timestamp": "01:53:03", "start_second": 6757, "end_second": 6783, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6757s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "command or a prompt as the prefix like translate in 
English sentence to German, and then it'll give it the English 'that is good', and T5 just responds through natural language, 'das ist gut' or something. And for all of these tasks it basically does this. So for the CoLA sentences it'll predict the natural language phrase 'not acceptable', and for STS-B, here's a kind of almost silly version: it's a continuous-valued sentence similarity prediction task, and they just have it output the discrete token '3.8'. So it has", "start_timestamp": "01:53:03", "end_timestamp": "01:53:33", "start_second": 6783, "end_second": 6813, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6783s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "the, because it's pre-trained, learned continuum of numbers and the similarities between them, but it's kind of funny to me to see a regression task reframed as discrete token prediction. And again, it's quite general, you can do summarization and everything. And so we saw a little bit of this when we were probing, with Schwartz's work just using natural language probabilities from a language model, or some of those zero-shot transfers like GPT-1, and so", "start_timestamp": "01:53:33", "end_timestamp": "01:53:57", "start_second": 6813, "end_second": 6837, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6813s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "T5 really goes through and shows that, yeah, you can actually exploit the natural language that it's learned, and that helps with the transfer and potentially helps with the fine-tuning tasks. So yeah,
T5 is a really good kind of overview of all the work in this space, and then they also just threw a big model at it at the end, and that gets you another bump on those leaderboards we were talking about. So that's, that's in fourth place now though, others have done some more things on top of it", "start_timestamp": "01:53:57", "end_timestamp": "01:54:22", "start_second": 6837, "end_second": 6862, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6837s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "so that's, I think, the core set of literature and ideas I wanted to cover here. At this point we've gone through the history of language models and how they've been adapted and used, the winding history of how NLP really took off with these unsupervised and self-supervised methods and figured out how to use them, and all these different papers that found pieces of the puzzle and proposed different methods that did or didn't work and combined well", "start_timestamp": "01:54:22", "end_timestamp": "01:54:53", "start_second": 6862, "end_second": 6893, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6862s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "with other modeling improvements and everything. I think it's a really cool story, and I'm excited that I was able to chat through it with y'all today. The last bit here, now, we still have about fifteen minutes left, but we should maybe leave a little bit for questions at the end too, is just a bit of more high-level thoughts. This is an unsupervised learning
course, so why do we need it, what's wrong with the current paradigm of supervised learning? I'm sure you've seen motivation, and there's been great discussion", "start_timestamp": "01:54:53", "end_timestamp": "01:55:18", "start_second": 6893, "end_second": 6918, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6893s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "already on this topic here, but I would like to share a bit of my own thoughts and opinions. So I think a motivating thing here, again, we've had this thread running through a lot of the discussions so far in this talk, has been how well does supervised learning work and what should we expect of it. And so, concurrent with some of this stuff taking off in the last few years, there was a lot of work that started critically evaluating deep learning for supervised NLP. And so you", "start_timestamp": "01:55:18", "end_timestamp": "01:55:48", "start_second": 6918, "end_second": 6948, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6918s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "know, for natural language inference, for instance, this is a three-way classification task, and even before pre-training really took off, ESIM, using just word vectors and a very well-designed architecture, nominally got to average human accuracy, I believe of a single Turk worker, it may have been on SNLI. So it's like, whoa, is this done, did we already hit human accuracy? And I think everyone kind of knew, well, no, because clearly these models are still making
weird mistakes and", "start_timestamp": "01:55:48", "end_timestamp": "01:56:17", "start_second": 6948, "end_second": 6977, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6948s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "this is kind of where in the last few years there's been a lot of great work starting to really quantify these concepts of how robust our models how well do they work kind of distribution kind of pressuring and challenging the standards supervised learning paradigm of you know training on an iid training set and evaluating on another ID split held out data and basically showing that that's no longer sufficient and something's going wrong somewhere in supervised learning that means this is a being too fortunate to algorithms and", "start_timestamp": "01:56:17", "end_timestamp": "01:56:43", "start_second": 6977, "end_second": 7003, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6977s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "you know not being fortunate enough to humans and so this is a great paper from a Sutra and I believe called annotation artifacts of natural image inference data and so when you do hear people talking about these models are exploiting statistical artifacts and Maya sees of the train distribution you know this is a paper that really nailed that down and showed it quite conclusively so you know they kind of start from a high level well how were these datasets created these supervised data sets you know admittedly they're", "start_timestamp": "01:56:43", "end_timestamp": "01:57:10", "start_second": 7003, "end_second": 7030, "url": 
"https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7003s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "kind of artificial you're paying people to label these tasks they're not natural instances of the task it kind of what people can come up with on the top of their head or you know they can have very good experimental methodology in data sets like a semi multi know why are some of the best we've got in terms of like very good set ups you know curated by people who really know what they're doing but you still run into the issue of like well you've got to have a human generate an example and you know maybe they're less creative than you think", "start_timestamp": "01:57:10", "end_timestamp": "01:57:36", "start_second": 7030, "end_second": 7056, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7030s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "so the data is actually drawn from a much more narrow distribution that it should be and so this paper kind of went through critically and showed a lot of these artifacts actually showing up so you know a worker would be told make a you know a negative or a contradictory label and so they would just be like oh I'll just slap a knot and top with all copy the sentence you know and it's not quite this bad but it gives you the idea of what's going on is they'll copy the premise sentences the hypothesis and they'll just put a knot in it or they'll", "start_timestamp": "01:57:36", "end_timestamp": "01:58:03", "start_second": 7056, "end_second": 7083, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7056s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep 
Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "you know to have entailment they'll just restate the sentence in a more generic or abstract way and so you might go from you know you know you know a dog is you know playing to an animal is playing or a pet is playing or stuff like that or you'll add some kind of like super information like tall or sad or popular to hint at the neutral class which is like well it might be true or am I not but it's not clear that way and so what they showed is somewhat disturbingly if you only trained to model on the hypothesis sentence so the second", "start_timestamp": "01:58:03", "end_timestamp": "01:58:34", "start_second": 7083, "end_second": 7114, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7083s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "sentence and again semantically this task is defined as the logic correlation between two sentences but have you trained a model only on the second sentence to predict which of the classes it would be it actually did it basically got half of them right you know it went from 33 to 66 percent or so it was a large bump and you know by default we know that model can't be doing the true task because it's just predicting you know given only at the random second sentence so this is a great example of where you can see that standard", "start_timestamp": "01:58:34", "end_timestamp": "01:59:01", "start_second": 7114, "end_second": 7141, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7114s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "supervised learning might be 
picking up on these spurious correlations or artifacts. And indeed, when you evaluate on the hard set, the set of examples that the model that only looks at the hypothesis sentence can't get right, accuracy drops quickly, by like 16 percentage points, from 88 to 72 percent. And this shows up across the board; there are now probably a dozen papers in this space, if not more, showing that these NLI systems that nominally were supposed to have human-level accuracy actually", "start_timestamp": "01:59:01", "end_timestamp": "01:59:29", "start_second": 7141, "end_second": 7169, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7141s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "are not consistent, or not robust, or not systematically generalizing. So this is another one, from Glockner, that very carefully constructs these probe sets, permuting objects in the sentences, or permuting synonyms or antonyms, and on these probes they show that accuracy again drops quite a lot. And then a final point here is on distributional robustness. This is a paper from DeepMind called Learning and Evaluating General Linguistic Intelligence, and what they showed is", "start_timestamp": "01:59:29", "end_timestamp": "01:59:56", "start_second": 7169, "end_second": 7196, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7169s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "you take these newer question-answering models, so again, on SQuAD you take a Wikipedia passage, and you take BERT, which we've already talked about, how much of an improvement it's had and
you know, how it's improved scores a ton. So you take that question-answering model trained on Wikipedia and you just run it on different datasets. It's still question answering, except maybe we run it on trivia factoids that are sourced from Google search results, or maybe", "start_timestamp": "01:59:56", "end_timestamp": "02:00:20", "start_second": 7196, "end_second": 7220, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7196s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "we run it in a more conversational framework, with two people asking questions back and forth, and we see that accuracies can crater, or F1, which is actually the metric here. So it's the same task, and we know that when people are asked a question on one task versus another they're going to do about the same, maybe one task is a little bit harder than the other, but you don't see them suddenly lose half their accuracy. This is again just hinting at some", "start_timestamp": "02:00:20", "end_timestamp": "02:00:45", "start_second": 7220, "end_second": 7245, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7220s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "of the distributional robustness issues and brittleness we're seeing, and again, this is still some of the best stuff we've got, combining supervised learning and unsupervised learning. But there are hints, as we're going to go through here, that these self-supervised methods and unsupervised pre-training are really helping with robustness. We're still not there yet, but
we're making progress, and all of that is being driven by moving away from a purely supervised learning framework to these hybrid pre", "start_timestamp": "02:00:45", "end_timestamp": "02:01:09", "start_second": 7245, "end_second": 7269, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7245s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "training methods. So, as I mentioned, there are a lot of things that could be going on: current techniques are brittle, they're memorizing instead of generalizing, they're exploiting spurious correlations, and they also stop learning once they get to memorizing the training set, learning just turns off because the gradient dies as the training loss goes to zero, so it just kind of feels incorrect. So there are a lot of different routes we could go down to make progress: we can build better models and architectures, we can use more data", "start_timestamp": "02:01:09", "end_timestamp": "02:01:35", "start_second": 7269, "end_second": 7295, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7269s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "or we can go down different paths altogether. Obviously, since I'm talking about unsupervised learning in an unsupervised learning class, I'm going to talk about how that's a very exciting one, but we could always keep working in the supervised learning paradigm and just say, well, we're going to have better models and we're going to get more data, and pursue these problems in the same way. And this was, I'd say, what a lot of early deep learning was really highlighting:
we were working on", "start_timestamp": "02:01:35", "end_timestamp": "02:01:57", "start_second": 7295, "end_second": 7317, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7295s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "supervised learning datasets we kind of were seeing you know these new architectures that were exploiting priors and inductive biases of the data domain we're really helping a ton so on images you know this is the grand story of we added you know comets and they are a great fit for the domain and they kind of cleverly quote encode you know all these equivariance and translation and you know shared weights and all this structure and that helps a ton with their accuracies and then we just use a large supervised", "start_timestamp": "02:01:57", "end_timestamp": "02:02:25", "start_second": 7317, "end_second": 7345, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7317s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "notice and let HDD figure it all out for us and so this kind of led to I think of mindset of heavily emphasizing architecture engineering you know there's a very large design space here someone cynically it allows for a lot of different papers to be written and you know you can really kind of combine and contrast like all these building blocks like we really like playing with these blocks and you know a lot of really good work has been done that like does empirically push the state of the art by exploiting you know properties of domains and you", "start_timestamp": "02:02:25", "end_timestamp": "02:02:52", "start_second": 7345, "end_second": 7372, "url": 
"https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7345s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "know an example of that is this diagram on the left so does anyone want to guess the name of this model well sorry because it's kind of hypothetical it's called a simple model so this has got six different color embeddings and you know there's screws and character models and by attentions and MLPs and you know it starts to get quite complex when you're really all you've got is inductive biases and kind of the standard supervised learning datasets so it's a heroic effort but you're kind of exploiting more and more", "start_timestamp": "02:02:52", "end_timestamp": "02:03:23", "start_second": 7372, "end_second": 7403, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7372s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "details and getting more and more complex to make progress when you kind of have locked in these other constraints like the dataset size and the paradigm of training and on the right is another one that I think is like almost looks like it's like you know some like pentagram or something you know they look like kind of these very cool like architectures and they're very quite fun to look at and kind of look through all the work that's been done on creating these systems and again like we said there's all these different", "start_timestamp": "02:03:23", "end_timestamp": "02:03:48", "start_second": 7403, "end_second": 7428, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7403s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": 
"https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "methods of having a deck Tobias and it can really help a lot and so they're all important and very impactful and please don't take this as like criticising kind of the standard approach of like iterating and hell climbing on supervised learning with like better and better architectures but I think it's a bit like this where really when you treat a data set in isolation if we come back to how people learn and experience the world it's so varied it's so diverse there's so much experience in information and knowledge you're", "start_timestamp": "02:03:48", "end_timestamp": "02:04:14", "start_second": 7428, "end_second": 7454, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7428s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "leveraging before you ever saw this data set and some machine learning models when they're started in isolation on a supervisor zone you get a set by itself are kind of like you know that supervised data set is like a peak in a very big space it's a small peak and we can add more and more data and make that peak more you know taller and wider and that might help with robustness and generalization but at the other day it's kind of a little bit futile I think you know the real way to solve these tasks or at least the way that people do it is", "start_timestamp": "02:04:14", "end_timestamp": "02:04:43", "start_second": 7454, "end_second": 7483, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7454s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "they don't sit down and memorize you know a million different 
examples, they somehow learn a much more general set of task behaviors and transfer knowledge and information, instead of just becoming a master of a very specific isolated domain. We're amazing because of our generality, not because of our, well, we're amazing for both, because we can do incredible things in specific domains too, but machine learning, at least, is starting to see that on very targeted supervised datasets you can build", "start_timestamp": "02:04:43", "end_timestamp": "02:05:08", "start_second": 7483, "end_second": 7508, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7483s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "models that do a bit better. And then there are papers, even on architecture engineering, showing somewhat critically that some of these fancy new architectures we saw don't improve as much as you'd think, or with more careful ablations don't show much of a benefit. One of the famous examples here is they took a baseline LSTM and gave it some love, this is kind of a common story for language modeling, and showed that it was outperforming a lot of recent state-of-the-art", "start_timestamp": "02:05:08", "end_timestamp": "02:05:35", "start_second": 7508, "end_second": 7535, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7508s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "models if you just make careful comparisons and do careful tuning. So maybe we need to back off and rethink beyond just pure supervised learning on task-specific datasets, and I think one way to frame this is:
the largest supervised dataset in the world that I'm aware of publicly is JFT-300M, actually there's a Facebook one that I haven't talked about, their Instagram pre-training, but this was true a little while ago. So there are 300 million images", "start_timestamp": "02:05:35", "end_timestamp": "02:06:02", "start_second": 7535, "end_second": 7562, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7535s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "and 18,000 classes. If you do a very simple, loose bound on how much information content you get, you have log 18,000 bits per image, and you have 300 million of them, so that ends up at about 530 megabytes of constraint on the function you can learn. So this is the world's biggest dataset, and in terms of the correct function that we're trying to approximate with supervised learning, we're only able to pump, from this slightly naive and toyish view, about 530 megabytes of information into the system", "start_timestamp": "02:06:02", "end_timestamp": "02:06:33", "start_second": 7562, "end_second": 7593, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7562s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "from the supervision here. But, trying to connect this back to everything we've been talking about today, there are terabytes and petabytes of actual raw natural language on the internet, so if we figure out how to exploit all that information in some reasonable way, there's a hell of a lot more there that we should hopefully be able to use. And again, we're going to be a lot less efficient:
you know, gold-labeled supervised data per bit is probably helping far more to specify and learn a task, but we only", "start_timestamp": "02:06:33", "end_timestamp": "02:07:01", "start_second": 7593, "end_second": 7621, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7593s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "have a little bit of it. It's like Yann LeCun's analogy of the cherry on top versus everything else we need to be able to do. And I kind of tried to take the supervised learning approach for language: I spent most of 2015 myself building what I hoped would be an ImageNet for text. It was a very large weakly supervised dataset where we basically did classification over Reddit communities, and we built like 50 million training examples over a thousand communities, and we trained RNNs to", "start_timestamp": "02:07:01", "end_timestamp": "02:07:26", "start_second": 7621, "end_second": 7646, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7621s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "predict everything, and we were hoping they would learn useful features and representations, kind of skip-thought style. It was pretty concurrent work at the time, except we were going the supervised route instead of the unsupervised route, and the sad thing was the unsupervised model beat us. Skip-thought vectors, just by using a language modeling style objective, was beating this system that we built with weakly supervised data, where we were like, oh yeah, these are the gold labels, these", "start_timestamp": "02:07:26", "end_timestamp": "02:07:51", 
"start_second": 7646, "end_second": 7671, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7646s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "are the right things to predict they're semantically aligned with classification and this kind of really made me quite confused and kind of skeptical so what's going on in his face because you or at least supervised learning and got me really excited more on the gener of long ago and some of us I'm excited because we just weren't seeing the supervised learning pull through here because it's just I think is a little bit too weak of a supervision source and a little bit too specific so like again the big question I think is a lot more you know", "start_timestamp": "02:07:51", "end_timestamp": "02:08:20", "start_second": 7671, "end_second": 7700, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7671s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "in terms of like novel research frontiers how do we go from kind of isolated peaks of competency that you can very quickly you know fall down if you change the problem just a little bit you know quickly collapse in terms of task mastery how do we go to systems that perform you know and then much more general robust kind of you know maybe they're not nearly as good in terms of competency on any given specific task but how do they perform much more broadly across the board and again so this this is an example of kind of the", "start_timestamp": "02:08:20", "end_timestamp": "02:08:48", "start_second": 7700, "end_second": 7728, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7700s", "title": "L11 Language Models -- guest instructor: 
Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "classic architecture engineered approach like one of the kind of you know incredibly well done versions here that's exploiting so much information with inductive biases is using a word net which is like that great hand curated data set and so we see that it like gets and you know because it's able to exploit all this site information of you know helping with like learning oh these are in it you know synonyms or antonyms or this is you know more abstract or less abstract you know a child or a parent and in terms of like", "start_timestamp": "02:08:48", "end_timestamp": "02:09:13", "start_second": 7728, "end_second": 7753, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7728s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "semantic hierarchy of a different word so you can see how that's bringing in the information that should help with generalization and so it actually does better on those kind of systemic evals so this is one way of like widening that peak and someone excitedly though if we just slot GPT one in as well it performs just as well on the more robust transfer setting so there we didn't have to you know manually curate that the relations between words or build Ward Annette we kind of let a language model figure it out and so I think this really again is", "start_timestamp": "02:09:13", "end_timestamp": "02:09:43", "start_second": 7753, "end_second": 7783, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7753s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": 
"BnpB3GrpsfM", "text": "one of the proof points that you know some supervisory training is really figuring out the same relation is the same kind of connecting the concepts connecting them and helping with robustness and generalization and there's some new work from Tim Hendricks this week that I've which I have put in these slides showing that Berkeley as follows are much more robust a t-distribution than classic purely supervisor models with like LS TNS or cnn's so I think that's starting to get much more well empirically founded than kind", "start_timestamp": "02:09:43", "end_timestamp": "02:10:07", "start_second": 7783, "end_second": 7807, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7783s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "of me spouting off one or two numbers from like the models I know so kind of at the high level takeaway here kind of this is just a hurrah message for everyone taking this course is you know I really think that one of the most promising methods of moving forward here is in terms of like really lying tasks and robust systems that actually you know perform the things we want them to is we need to move away from standard supervised learning instead of manually specifying what to predict through the creation of large supervised data sets", "start_timestamp": "02:10:07", "end_timestamp": "02:10:34", "start_second": 7807, "end_second": 7834, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7807s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "we need to figure out how to learn from and predict everything out there and you know one of the ways that you can think of this is like every 
time we build a data set we're sitting the importance of everything in that data set to one and everything else in the world and all the other useful information may be out there is set to zero so like when you start with a model from scratch you should really get in that supervised learning as well as head and be like oh it's almost a hopeless task you know they know so little and we've", "start_timestamp": "02:10:34", "end_timestamp": "02:10:59", "start_second": 7834, "end_second": 7859, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7834s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "hidden so much from them when we only give them this one canonical gold standard data set and of course they're gonna cheat however they can because they're you know great at optimizing the objectives we give them but if they don't have the foundations with which to truly you know build off of all they can do is exploit clever spurious correlations so yeah I think this kind of comes together with all the work we've been chatting about of like a potential recipe for and you know I think this is getting proved out with t5", "start_timestamp": "02:10:59", "end_timestamp": "02:11:25", "start_second": 7859, "end_second": 7885, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7859s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "and all the future work here of how to kind of combine a bunch of pieces together we need high capacity and flexible model classes so they can handle a lot of different tasks we need algorithms for extracting information running the structure across many different domains so this would be basically you know a lot 
of things we talked about it turned out language modeling you actually just worked really well as one of these it's an incredibly old idea but that algorithm just or you know method just worked quite well in terms of people", "start_timestamp": "02:11:25", "end_timestamp": "02:11:51", "start_second": 7885, "end_second": 7911, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7885s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "there's a lot of different clever approaches to specify and proxy tasks but this simple one has gone it's quite far and you're unfortunately still going to need because these are dumb models that don't you know have anywhere near the robustness or generality of humans you're going to need a lot of data tiling everything but at least it'll be unsupervised and so we have it available and you're going to need unfortunately at least to get the you know the soda grind a little more you can in fed some amount of compute with which to learn", "start_timestamp": "02:11:51", "end_timestamp": "02:12:16", "start_second": 7911, "end_second": 7936, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7911s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "them but again that may produce a model that's actually quite small and efficient to run a test time and I think that's one of the hopeful direction is going for is you you know train these big models and you know Google or Facebook or open the I you know burns the GPU years to to get that model but then you know you're able to distill it and prune in and release it and then it can still run on your own laptop and or on you know a single GPU and you know that means 
that there's downstream tasks that you may want to investigate or you", "start_timestamp": "02:12:16", "end_timestamp": "02:12:44", "start_second": 7936, "end_second": 7964, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7936s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "know, build models on, that are much more efficient because you've amortized all this compute that went into pre-training and now you're able to, you know, use that during the fine-tuning; so it may actually be that, like, BERT, you know, though it took a ton of compute to train, BERT and RoBERTa may actually have reduced the overall volume of compute needed to achieve a given level of result, and may actually widen the amount of usefulness and tasks that can be tackled in the field, because it can transfer and, you know, be", "start_timestamp": "02:12:44", "end_timestamp": "02:13:09", "start_second": 7964, "end_second": 7989, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7964s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "beneficial to everything downstream; and, you know, I think it's very reasonable that some people in the field kind of look at all this coming together and are like, well, you know, I don't find that satisfying, and I think that's a valid view; and so, you know, maybe backing up and working towards, you know, more grounded learning; and there's lots of really interesting work in this space now of, you know, moving towards reinforcement learning and grounded learning with, you know, multimodal agents, and all this kind of stuff that connects", "start_timestamp": "02:13:09", "end_timestamp": "02:13:37", "start_second": 7989, "end_second": 8017, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7989s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "to, you know, more what feels like, you know, true learning about the world instead of just seeing abstract bits of text, and I think that's a very valid approach; but right now, you know, we've just been seeing that it's been driving a good chunk of empirical progress over the last few years; you know, there's a whole other set of methods here, that's multitask learning, and I think that's actually been showing a lot of promise in the last year; when I made this slide last year I think I was a little bit more", "start_timestamp": "02:13:37", "end_timestamp": "02:14:01", "start_second": 8017, "end_second": 8041, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8017s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "pessimistic on it, and there's actually been a lot of good work, like MT-DNN and others, that's been making progress on this set of methods, but it still kind of relies on us building a data set; so for multitask learning you train on a bunch of different tasks together and you kind of hope that you get transfer naturally between them, but often they're all supervised tasks; and T5 is a good paper that actually, like, really talks through the nuances of multitask learning for generative pre-training, and one of the surprising things they", "start_timestamp": "02:14:01", "end_timestamp": "02:14:25", "start_second": 8041, "end_second": 8065, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8041s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "share is that, when you do it well and you kind of exactly emulate the pre-trained and fine-tuned framework, if you do multitask pre-training followed by supervised fine-tuning, you still need the, uh, you still need the unsupervised objective, like masked language modeling, but you can get rid of, or you can at least find very similar performance in many cases compared to, having to do the giant pre-training on, you know, the full internet, for instance; so there's still room left in actually improving these methods", "start_timestamp": "02:14:25", "end_timestamp": "02:14:51", "start_second": 8065, "end_second": 8091, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8065s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "and the final one here to just chat a little bit about is some of the follow-up work we did at OpenAI on GPT-2, and this is kind of, like, what I've been chatting through here is kind of like a lot of the motivation that went into this project; so we collected more data compared to GPT-1, and we collected much more diverse and heterogeneous data, so we were hoping that we'd have models that would generalize better and see a much broader set of tasks; so it's 40 gigabytes of text, 10 billion tokens, 8 million webpages; we scaled up the", "start_timestamp": "02:14:51", "end_timestamp": "02:15:18", "start_second": 8091, "end_second": 8118, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8091s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "models just because we kind of saw those
trend lines, and, you know, I think there's a lot of reasonable arguments for why you just need bigger models to handle complex tasks, and it's just a language model which predicts everything; so, admittedly, it's still a left-to-right autoregressive model, so it has some drawbacks compared to things like BERT, but it's just a language model; and so what we focused on in this case was purely how well this model could do across, you know, many different tasks in a zero-shot setting; so we never", "start_timestamp": "02:15:18", "end_timestamp": "02:15:44", "start_second": 8118, "end_second": 8144, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8118s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "fine-tuned it, because, you know, supervised learning is tricky and it learns to exploit spurious correlations and dependencies; so we're only ever saying, well, you did all your pre-training work and we had you predict a bunch of words; how well can you handle this new data distribution you've, well, never seen before? I mean, really, you know, we trained on a lot of data, so we actually see a bit of a lot of data distributions, but we're not letting it, like, specifically learn specific tasks with specific labels; we're just saying run it and see what it", "start_timestamp": "02:15:44", "end_timestamp": "02:16:08", "start_second": 8144, "end_second": 8168, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8144s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "can do; and we show that, like, it actually begins to do something, particularly as you scale the model, across a wide range of canonical NLP tasks; so it's purely unsupervised, there's no, you know, there's no", "start_timestamp": "02:16:08", "end_timestamp": "02:16:33", "start_second": 8168, "end_second": 8193, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8168s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "direct human labeling or supervision going on here, but this model can actually, you know, you can feed it a paragraph and ask it a question, and it can give the right answer sometimes; often they're just matching kind of old baselines, and they still have a huge gap to, you know, human", "start_timestamp": "02:16:08", "end_timestamp": "02:16:33", "start_second": 8168, "end_second": 8193, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8168s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "performance, but I feel like this is a much better measure of potentially what the, like, underlying performance of these systems might be, because we're not doing supervised training here; and, you know, yeah, unsurprisingly, our models are still worse than people, so that kind of shows up here, but it also shows a promising trend line; in some cases, like, there's very domain-specific algorithms for unsupervised translation; admittedly it's been a year, so this should be up here now; there's, like, some great follow-up work from FAIR", "start_timestamp": "02:16:33", "end_timestamp": "02:16:58", "start_second": 8193, "end_second": 8218, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8193s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "that's pushing unsupervised NMT farther; but this is just a language model with no real customization, and we're just seeing that it begins to do translation between English and French; you know, you can tack a TL;DR on the end of a document and get something like summarization; it's pretty garbage on the official metrics because it's only barely matching selecting three random sentences from the article, but kind of quantitatively and qualitatively, if you ask people which they prefer, it looks a lot better than these numbers show", "start_timestamp": "02:16:58", "end_timestamp": "02:17:26", "start_second": 8218, "end_second": 8246, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8218s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "because this is a kind of very coarse evaluation metric; and then the final thing here is, like, question answering, so it kind of shows domain knowledge and, kind of, like, world knowledge and potentially a lot of factoid information, and on this one we unsurprisingly see a really strong scaling curve with model capacity; so, like, how is this working, how does this kind of unsupervised system, it's just a language model, begin to do translation, question answering, and reading comprehension? Well, if we go through and inspect the data set, it turns", "start_timestamp": "02:17:26", "end_timestamp": "02:17:56", "start_second": 8246, "end_second": 8276, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8246s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "out there's actually just, like, kind of natural occurrences of tasks, and you're training the model to predict the next words; so, you know, it sees English sentences, and then it happens to just have, you know, inside the middle of this article, that someone wrote a training example of English to French; so it's a much more natural way of learning, and when you have very large data sets you actually just begin to have non-trivial data; and so you see for translation, for summarization, like, if we just
like crap", "start_timestamp": "02:17:56", "end_timestamp": "02:18:21", "start_second": 8276, "end_second": 8301, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8276s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "through the data set how many times does TLDR up here well there's not a thousand training examples in quotes here and how many times does like someone asked her who what where when how why question well there's six million of those so we're kind of seeing that these kind of systems that you know don't make assumptions and silly about any specific task we kind of try to predict everything kind of really begin to make some progress I mean again like one of those areas we saw the most on is this question answering an open domain", "start_timestamp": "02:18:21", "end_timestamp": "02:18:46", "start_second": 8301, "end_second": 8326, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8301s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "question answering where you're just asking like what is the capital of you know Paris or in what year was star wars released and you know I think that this kind of gives you a very clear picture of why supervised learning with like kind of the standard approach just is never going to really be able to solve this kind of task so on the x-axis we have a number of training examples seen and again this is log scale and yeah if you start with a Randal initialize what model there's no way it's going to be able to do question answering I don't", "start_timestamp": "02:18:46", "end_timestamp": "02:19:12", "start_second": 8326, "end_second": 8352, "url": 
"https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8326s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "open domain you know there's no way it can have the information for you know what is the capital of Paris until it's seen that training example and there's very little generalization there you just need so much data to approach this from a naive supervised learning approach whereas we have bigger models that have more capacity you know in the limit they very quickly began to do non trivially on these data sets and then they kind of fine tune in and learn how to better extract the information that's somehow contained within the weights to", "start_timestamp": "02:19:12", "end_timestamp": "02:19:38", "start_second": 8352, "end_second": 8378, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8352s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "various degrees so again this red baseline here is completely randomly emitted and these are basically random guessing numbers the entire way through you know those data sets doll is 20,000 labeled examples but as we try bigger and bigger language models we see that they really begin to make person\u00eds and t5 I think has continued pushing this quite a lot farther to where they're actually sometimes matching with only a neural model that's never looking at documents with like the actual factoids in them it just from its parameters is", "start_timestamp": "02:19:38", "end_timestamp": "02:20:04", "start_second": 8378, "end_second": 8404, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8378s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep 
Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "able to answer quite competitively on some of these tasks yeah we're pretty much into a conversational period at this point but um you know some of the takeaways I would kind of say from this and kind of you know really pushing on language models for a few years here's performances you know not usually limited by something single paper fixes this is a very long history of you know I think we probably talked about 25 papers during the trajectory of research here and usually it's always someone chipping away on one", "start_timestamp": "02:20:04", "end_timestamp": "02:20:32", "start_second": 8404, "end_second": 8432, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8404s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "specific access you know diminishing returns basically mean there's always some other bottleneck so if you scale to compute but not the data you'll get back there if you scale you know the parameter size you'll just need more computer or if you scale you know the Moloch caste but don't increase that is that it'll just over fit or you could try to scale via like you know human intuition and you can use fancier models but maybe that's just more difficult to train so kind of I tell you that like you know particularly if you have a", "start_timestamp": "02:20:32", "end_timestamp": "02:20:58", "start_second": 8432, "end_second": 8458, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8432s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "little bit more of an engineering mindset 
here, kind of the pragmatic approach of kind of pushing on all these axes together may allow, kind of, for a larger effect size to show up than pushing on any one in isolation; this is an unfortunate tension, I think, in research and science, where you often want to, you know, microscopically measure effect sizes and want controlled ablations and experiments in isolation, but, you know, if you change a few things together you might actually see more of an outsized effect, because that's like one", "start_timestamp": "02:20:58", "end_timestamp": "02:21:22", "start_second": 8458, "end_second": 8482, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8458s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "of the things we typically do, where we get more data, get, you know, a bigger model, throw everything together, and try to see if that really pushes toward qualitatively different behavior; maybe, yeah, I mean, I really could transition into a question period at any point now; you know, there's a little bit more advice at the end, just saying that you don't have to work on large-scale models; particularly, you know, as things like ELECTRA show, you can work on the smaller models and see the same effects showing up; they're not", "start_timestamp": "02:21:22", "end_timestamp": "02:21:48", "start_second": 8482, "end_second": 8508, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8482s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "going to have the same accuracy curves, but, you know, we know from scaling laws and kind of all those trend lines that if you start seeing an effect that's robust at small scale, probably, fingers crossed, it'll also hold at larger scale, so you can do a lot more rapid development; and, you know, this I think works quite well; you know, you should try, like, ten times as many models that are just ten times smaller each, and, you know, that way you can run 10 times as many experiments in parallel; this is still, you know, a", "start_timestamp": "02:21:48", "end_timestamp": "02:22:17", "start_second": 8508, "end_second": 8537, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8508s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "large research field, so there's a lot of things you've got to try; and, you know, a lot of the behaviors in a paper like GPT-2, which kind of, I feel like, gets pointed to as the canonical big-compute, big-data kind of thing, still show up on models you can train on a single desktop; you know, it takes a week or two to see the hints of that, admittedly, but, you know, GPT-2 small you can train quite well in about a week on, like, a four-GPU setup, and then after you get proofs of concept on, like, your algorithm or your idea, then you can scale up if", "start_timestamp": "02:22:17", "end_timestamp": "02:22:46", "start_second": 8537, "end_second": 8566, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8537s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "you have the compute resources, you're able to get, you know, an allocation of time on the cluster; and kind of the same strategy was used back in the day with, like, the sentiment unit, where the initial proofs of concept were 512-dimensional LSTMs that took a day or two on standard hardware, and then, you know, for the final version we kicked off a big", "start_timestamp": "02:22:46", "end_timestamp": "02:23:13", "start_second": 8566, "end_second": 8593, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8566s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "run with the model that took 16 times the compute; and, you know, how do you not go insane when you wait for a model to train for a month? Well, we like to do this thing at OpenAI", "start_timestamp": "02:22:46", "end_timestamp": "02:23:13", "start_second": 8566, "end_second": 8593, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8566s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "where you boot your big model up before you go on vacation; so you kick that off before winter break and you just let her train over the break, and hopefully, fortunately, the machine stays up the whole time, but don't stare at that graph every day; you won't make nearly as much progress if you're just staring every day at that number, but often models surprise you when you give them more time to learn; so, you know, when you're really trying to push that result at the end, it's a really good idea to try that if it's available as", "start_timestamp": "02:23:13", "end_timestamp": "02:23:38", "start_second": 8593, "end_second": 8618, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8593s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "an option; yeah, and again, one of the other surprising things about this field has been how far we've gotten, where often the developers of one paper or model architecture may just push on log probability, or the, you know, type 1 evals, and then someone else comes along in another paper and shows, oh, this thing's actually great at type 2 evals; so I think that's, you know, really reassuring, and, you know, I'd often say that you could work on one or the other in isolation and often you see things that
robustly", "start_timestamp": "02:23:38", "end_timestamp": "02:24:04", "start_second": 8618, "end_second": 8644, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8618s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "scale or contribute on both sides you know there's some gotcha as always with scaling you know things break when you at some point you can't extrapolate too far and things just change so you've got to watch out a bit for that you know for like a model like 2 PT 2 real on one of my collaborators was like we were originally trying to train these deeper bigger models and they just weren't working better and we had to fix an initialization technique and rearm came up with this and it helped you know continue scaling so when you see you're", "start_timestamp": "02:24:04", "end_timestamp": "02:24:34", "start_second": 8644, "end_second": 8674, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8644s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "scaling like not happening in the way you'd expect or the try mods kind of suggest it's also in some that something's wrong you need to like tweak it or fine-tune it or come up with like to actually do the clever work I don't do much of that myself to fix it up and try to keep making progress yeah and then the other thing is just like writing efficient and smart code these days luckily hardware is proving and for the same price point so with things like FP 16 1/2 precision compute if you switch over to that with", "start_timestamp": "02:24:34", "end_timestamp": "02:25:02", "start_second": 8674, "end_second": 8702, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8674s", 
"title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "things like an example being GPT-1 the original version took 25 days in FP32 on one generation older hardware and then on the next generation hardware you know a lot of people did a great job optimizing this Scott Gray in particular an amazing GPU engineer and researcher at OpenAI we worked with him and the bulk of our blocksparse library is basically his work and he was able to optimize these down by almost an order of magnitude on just the next generation's hardware from a lot of you know great improvements across the", "start_timestamp": "02:25:02", "end_timestamp": "02:25:32", "start_second": 8702, "end_second": 8732, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8702s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "field so often if you write efficient code and use all the right tricks in terms of accelerating your models you can wring a lot out of the same level of hardware and just be efficient about that we have a library called the block sparse library that can help with that and provides a lot of these ops and honestly other libraries are also doing a great job merging these in and providing their own ops kind of more integrated into these kind of wrappers so that's I think exciting for", "start_timestamp": "02:25:32", "end_timestamp": "02:25:59", "start_second": 8732, "end_second": 8759, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8732s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail":
"https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "the field overall yeah you know in terms of sweet spots for compute you know 2080 Ti desktops can still do a lot in this space they just cost a fair amount of money and then you know your standard 8-V100 box on a cloud provider is a medium scale compute platform you know papers like ELECTRA can do a lot with a single V100 and I mean a 2080 Ti is basically the cheap V100 for 4 or 5 times less oh yeah that's about it honestly I think we have about 15 minutes left for questions and you know I have a few more", "start_timestamp": "02:25:59", "end_timestamp": "02:26:35", "start_second": 8759, "end_second": 8795, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8759s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "random slides everyone this is really great Alec thank you so much let's see if people have some questions hey Alec yeah I had a question so I was wondering if you could give your views on do you see zero-shot language modeling as something that could be production quality performance over time or do you think it's always gonna be lower than collecting supervised data and fine-tuning some big pre-trained model just trying to understand like the space between GPT and like BERT-like models yeah oh yeah", "start_timestamp": "02:26:35", "end_timestamp": "02:27:29", "start_second": 8795, "end_second": 8849, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8795s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "so you know right now it is absolutely garbage from a production perspective like GPT
-2 I mean well okay there's hints of life there you know for reading comprehension it's matching some of the original neural supervised baselines so I'd say there's hints of life there we're still talking about you know you need to do a lot more research and if you looked at kind of those scaling laws for what you know GPT-2 looked like if you draw those out there's still quite a lot of orders of magnitude left to go so from a", "start_timestamp": "02:27:29", "end_timestamp": "02:27:59", "start_second": 8849, "end_second": 8879, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8849s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "pragmatic or practical perspective it's not really there right now and that might be the scary answer which is you know our models do rely on exploiting scale and I don't want to oversell this view but you know it may just be that to actually do these tasks correctly you do just need you know much more compute in something like the zero shot setting so I think I see it kind of as like working with you know a handicap something like resistance training I think it's a fascinating research area to push on", "start_timestamp": "02:27:59", "end_timestamp": "02:28:25", "start_second": 8879, "end_second": 8905, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8879s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "because it does have some of these exciting qualities of like maybe representing a you know much more difficult and hopefully much more true representation of test performance but yeah it still has a long way to go so I think it's a
fascinating research direction but there's a lot of pushing to be done on that thank you and yeah I think from a pragmatic perspective like you said you know you really should fine-tune on some supervised data and you know like I mentioned BERT models are still showing quite good robustness out", "start_timestamp": "02:28:25", "end_timestamp": "02:28:51", "start_second": 8905, "end_second": 8931, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8905s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "of distribution there I don't think there's been any good work comparing pure zero-shot learning of a task to like supervised fine-tuning of a pre-trained model but I think we're talking about something that's like a few years out at least thank you thank you I saw a question here earlier you motivated LMs by comparing probabilities of pairs of strings to encode knowledge such as cat sat versus cat sets has this intuition of comparing sentences I guess with exact knowledge been used for training generative models of text or", "start_timestamp": "02:28:51", "end_timestamp": "02:29:21", "start_second": 8931, "end_second": 8961, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8931s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "anything like that ubiquitously so maybe this is about a comparative or contrastive method for training generative models where you compare sentences and know that like one should have higher probability than the other there was one paper from a representation learning perspective which it's not quite the generative model side but it's representation learning CPC you know and that whole family of contrastive
methods is dominating you know unsupervised learning for image representations so it's somewhat of a contrast where in NLP", "start_timestamp": "02:29:21", "end_timestamp": "02:29:51", "start_second": 8961, "end_second": 8991, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8961s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "we haven't seen it really take off yet so I think it's a very exciting research direction the original CPC paper actually had some results that were promising on natural language but they you know like the original CPC paper in general were exciting but nowhere near state of the art and a lot of the refinements in the last year or two on the image side really pushed that quite far I think you might have had a lecture just on that so it would be very cool to see if someone could do that kind of similarly for natural", "start_timestamp": "02:29:51", "end_timestamp": "02:30:18", "start_second": 8991, "end_second": 9018, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8991s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "language but if it was about kind of exploiting more structured knowledge about like differences and encoding that into the generative model there is some pretty interesting work on this particularly from some of the more linguistics-heavy folks in the field of combining kind of hybrid systems of you know neural models with something like grammar constraints or the like and it's you know I'd say it's primarily focused a little bit more on you know the settings where you might expect encoding that inductive bias to", "start_timestamp": "02:30:18",
"end_timestamp": "02:30:49", "start_second": 9018, "end_second": 9049, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=9018s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "help which is like smaller data sets but you know at least personally I find at least from a pragmatic perspective a lot of current language modeling benchmarks I think are quite artificial because they work with such small amounts of data which from a pragmatic perspective just doesn't make sense because of all of what could be out there it's so easy to just write a scraper yourself or download a shard of Common Crawl and that's more data than you're basically ever going to need to work with or you know", "start_timestamp": "02:30:49", "end_timestamp": "02:31:17", "start_second": 9049, "end_second": 9077, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=9049s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "be able to process and so I think at least from a pragmatic perspective we should really be figuring out how to use the large volumes of data we have you know I think it's a very valid other approach to push on data efficiency in isolation and you know how data efficient we can get with a limited set of data but I think it's probably just too far in the extreme when you have you know only a million words of training data and things like that so Alec a follow-up question on that it seems like one way to learn language is read the", "start_timestamp": "02:31:17", "end_timestamp": "02:31:54", "start_second": 9077, "end_second": 9114, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=9077s", "title":
"L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "entire internet right another way to learn language is the way I think most people learn language which is you know I don't know how many words or how large the data set would be that somebody encounters by the time maybe they're six years old and they can speak pretty well maybe at that point do you have any notion of kind of how much data is required in that context compared to how much data is required here oh it's awful at least for you know for neural models I think it's um yeah for like a six-year-old child I", "start_timestamp": "02:31:54", "end_timestamp": "02:32:28", "start_second": 9114, "end_second": 9148, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=9114s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "think it's maybe you know I just bashed on 1 million words being unrealistic but I think it's about one to ten million so you know compared to GPT-2 being ten billion tokens there's three orders of magnitude at least of headroom there potentially and I think that again understandably motivates why a lot of people do work on that setting but my guess would be that to really make progress in that setting a lot of that is because of transfer between modalities and you know actually you know interacting with very", "start_timestamp": "02:32:28", "end_timestamp": "02:32:56", "start_second": 9148, "end_second": 9176, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=9148s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail":
"https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "high quality sources of supervision like other people and you know being a grounded agent that interacts with you know video and audio and like I think that that research is very interesting longer-term and you know we're probably going to saturate kind of what we can do with these ungrounded giant systems in the next few years or maybe it's even already starting in the last year so that's like a very I think exciting next round of work and clearly like the numbers just show there's a huge amount of room to go got it thank you do methods", "start_timestamp": "02:32:56", "end_timestamp": "02:33:31", "start_second": 9176, "end_second": 9211, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=9176s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "that work well for language apply to like other modalities like video or genetic data okay yeah so genetic data is actually a great example there there's really I think Joshua Meier and collaborators between I think it was an NYU team and FAIR I think Rob Fergus is now working a lot on this so they took BERT and they applied it to protein sequences or I think sorry amino acid sequences and I don't have a strong bio background or much of a bio background but they were showing that", "start_timestamp": "02:33:31", "end_timestamp": "02:34:06", "start_second": 9211, "end_second": 9246, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=9211s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "the same methods are you know learning like a lot of the structure in those
different domains so like kind of the sentiment unit analysis or the sentiment you know example I gave for pure language there was also another paper from I believe the Church lab at Harvard where they took like literally my code and ran it over amino acid sequences and were showing that there was like instead of a sentiment unit there was like a beta sheet unit sort of finding like secondary or tertiary structure of proteins the", "start_timestamp": "02:34:06", "end_timestamp": "02:34:37", "start_second": 9246, "end_second": 9277, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=9246s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "models were having units that were like understanding you know or even though these are like very nonparametric kind of abstract models that just like you know have a bunch of parameters that just factorize a probability distribution they're somehow learning the structure of the domain or hints of that so I think that's very exciting and that's another line of work I think given how exciting this stuff has been for NLP and how much of an impact it's made over the last few years whether it could work in other domains would be", "start_timestamp": "02:34:37", "end_timestamp": "02:35:04", "start_second": 9277, "end_second": 9304, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=9277s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "quite interesting you know there's definitely differences so for video I think video just needs so much compute that it's like still maybe quite a few years off just because of the volume of data and you know the amount of compute that might be
necessary but maybe I'm just being cynical there whereas for images you know there's a weird contrast which is like I mentioned the contrastive methods are doing quite well and if you just run a generative model where you know actually okay that's not quite right there's one paper from DeepMind", "start_timestamp": "02:35:04", "end_timestamp": "02:35:31", "start_second": 9304, "end_second": 9331, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=9304s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "called BigBiGAN where they took a you know pretty different generative model and they were showing that those are starting to learn quite good representations of images at least by the standards of unsupervised learning still being crushed by the latest MoCos or SimCLRs but they're you know they're quite promising and you know showing a kind of a foothold of this generative model kind of approach in other domains and maybe you know one more piece of context to shine on there I think there's somewhat of a nicety to", "start_timestamp": "02:35:31", "end_timestamp": "02:36:01", "start_second": 9331, "end_second": 9361, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=9331s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "language because it's produced by you know people it kind of is naturally designed to be very clean and very high-level and yeah it removes all the noise so when I think we run and try to train the same generative models or approaches in domains like images or video it may just be that like when you're dealing with raw natural audio signals or you know raw natural signals they
have so much noise like particularly a likelihood based generative model is just like spending so much effort and capacity trying to predict all that", "start_timestamp": "02:36:01", "end_timestamp": "02:36:30", "start_second": 9361, "end_second": 9390, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=9361s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "noise and the you know the signal to noise ratio is just a lot lower and that just like makes it a much more difficult task right now yeah you know it's I think it's a very interesting research question so Alec we're about out of time here do you have any closing thoughts oh yeah let's wrap it up we're mostly there I guess you know one thing again is like you know one of the things that I really enjoyed about having the opportunity to do this talk was kind of going through and showing that full history here and", "start_timestamp": "02:36:30", "end_timestamp": "02:37:22", "start_second": 9390, "end_second": 9442, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=9390s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "BnpB3GrpsfM", "text": "kind of you know I think it's a great example of how there's so many pieces that built on top of each other and you know there's so many different authors and so many different institutions that really contributed to this and you know even within OpenAI there's been a lot of collaborators that have pushed on this stuff over the last few years and you know you really see it just evolve like so many different pieces of the research with all the different you know things being brought to bear new models new datasets you know",
"start_timestamp": "02:37:22", "end_timestamp": "02:37:50", "start_second": 9442, "end_second": 9470, "url": "https://www.youtube.com/watch?v=BnpB3GrpsfM&t=9442s", "title": "L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20", "thumbnail": "https://i.ytimg.com/vi/BnpB3GrpsfM/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "hello welcome to lecture 8 of Deep Unsupervised Learning today we are going to talk about the strengths and weaknesses of various generative models and representation learning methods that we've seen so far so the brain has 10 to the power 14 synapses and we only live for 10 to the power 9 seconds and so we have a lot more parameters than the data we ingest so this motivates that we should do a lot of unsupervised learning because in order to provide sufficient fodder for the number of parameters that we have in our brain we should be able", "start_timestamp": "00:00:00", "end_timestamp": "00:00:41", "start_second": 0, "end_second": 41, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=0s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "to predict a lot more bits from the data that we ingest which is 10 to the power 5 times smaller right so this was a statement made by Geoff Hinton in 2014 in a Reddit AMA so first a summary of the course so far we've looked at autoregressive models MADE PixelRNN PixelCNN PixelCNN++ and PixelSNAIL we looked at flow models the RealNVP family of models and also the connection between autoregressive flows and inverse autoregressive flows next we covered latent variable models models with approximate", "start_timestamp": "00:00:41", "end_timestamp": "00:01:23", "start_second": 41, "end_second": 83, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=41s",
"title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "density estimates using the variational lower bound and various variations of that like the VAE importance weighted autoencoder VQ-VAE PixelVAE and so forth we also then jumped into a different class of generative models that don't work with the likelihood principle the implicit density models GANs energy based models and the moment matching principle and finally we questioned the idea of like whether we even need to learn generative models if all we care about is extracting useful features from unlabeled data and that got us into", "start_timestamp": "00:01:23", "end_timestamp": "00:01:59", "start_second": 83, "end_second": 119, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=83s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "this topic of how self-supervision provides representations and we saw that with the right kind of simple contrastive principles and a lot of data and compute we can learn really useful representations of unlabeled images that are competitive with supervised representations so let's look at autoregressive models in 2015 the main paper was MADE which introduced this idea of a masked autoencoder for density estimation and it was able to produce these MNIST digits which were reasonable looking but very jittery and", "start_timestamp": "00:01:59", "end_timestamp": "00:02:42", "start_second": 119, "end_second": 162, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=119s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"}
{"video_id": "1sJuWg5dULg", "text": "this idea was extended to much stronger more expressive architectures well-suited for image modeling like masked convolutions introduced in the PixelRNN and PixelCNN family of models and you certainly started seeing generative models working for higher dimensional and much more diverse data so these are samples from ImageNet 64 by 64 you can see that the structure across 4096 pixels is pretty coherent but the color is not that good and therefore you're not actually able to", "start_timestamp": "00:02:42", "end_timestamp": "00:03:18", "start_second": 162, "end_second": 198, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=162s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "identify any visible class from ImageNet but this was a big big jump from the quality you saw in MADE and this idea of masked convolutions has also been applied for one dimensional data like audio and in order to model long-range coherence in audio samples the idea of using dilated convolutions was introduced and this was also applied for a text-to-speech system where you're going to convert linguistic and text features to raw audio and that can be used in a digital assistant like the Google Assistant and this was the WaveNet", "start_timestamp": "00:03:18", "end_timestamp": "00:04:01", "start_second": 198, "end_second": 241, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=198s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "architecture that was commercially deployed after a year and the same idea of using masked convolutions with autoregressive pixel level
modeling has also been applied for video prediction where you're looking at the past frames and encoding them with a convolutional LSTM and then you're taking the embedded representation as conditioning information for a PixelCNN decoder that generates the next frame pixel by pixel and it's able to produce coherent video like a robot arm moving around so over time", "start_timestamp": "00:04:01", "end_timestamp": "00:04:41", "start_second": 241, "end_second": 281, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=241s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "the autoregressive modeling community has expanded further and further in terms of the level of engineering and architectural innovation and on the left you can see the subscale pixel networks which have very coherent samples because of the clever conditioning mechanisms used on the right you see hierarchical autoregressive image models with auxiliary decoders where the idea of using latent space autoregressive models was introduced by quantizing representations of encoders and modeling a PixelCNN in", "start_timestamp": "00:04:41", "end_timestamp": "00:05:14", "start_second": 281, "end_second": 314, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=281s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "the latent space which is also similar to the VQ-VAE idea that you've seen in the VAE lecture so apart from images and audio and video autoregressive models have had immense success in language and these are samples from GPT-2 right it would actually produce a coherent story about unicorns and like a story of how unicorns can speak their own
language and also talks about a scientist who is able to observe all this phenomenon and this shows that language modeling at the level of a paragraph or even multiple", "start_timestamp": "00:05:14", "end_timestamp": "00:05:55", "start_second": 314, "end_second": 355, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=314s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "paragraphs is possible by just training large models which use autoregressive structures this slide shows the evolution of language models over time where at first you see Shannon's trigram models which are reasonably good but not super coherent across the full sentence and then Ilya Sutskever's model of using an RNN is able to produce a couple of sentences but not completely making sense and then over time using bigger LSTMs and bigger transformers you ended up with the quality that GPT-2 exhibits right now so all these huge", "start_timestamp": "00:05:55", "end_timestamp": "00:06:38", "start_second": 355, "end_second": 398, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=355s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "advances have been possible due to multiple reasons and let's go through them quickly the first thing is just being able to train with larger batch sizes because of more compute availability and training with larger batch sizes stabilizes the training of these models and optimizes these losses much better making the models wider making the models deeper figuring out clever ways of conditioning whether you're building a class conditional or audio conditioned or text conditioned model so figuring out",
"start_timestamp": "00:06:38", "end_timestamp": "00:07:10", "start_second": 398, "end_second": 430, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=398s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "ways to get the conditioning information cleverly is very useful pre-processing like in WaveNet we use mu-law pre-processing to quantize continuous audio into discrete entities or for example in PixelCNN you're actually using categorical distributions for modeling rather than using Gaussians and in language you're using byte pair encoding which is pre-trained on a huge corpus and therefore you're modeling neither at the character level nor at the word level but at the subword level and", "start_timestamp": "00:07:10", "end_timestamp": "00:07:47", "start_second": 430, "end_second": 467, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=430s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "that's much more useful for generalization and also building more efficient models compute power as we progressed in the last two three years we just have access to a lot more compute like TPUs or big GPU rigs which have lots of GPUs connected with a really fast interconnect and therefore being able to train data parallel models much better and models trained for several weeks are usually producing much better results and also making fewer assumptions about the whole problem like", "start_timestamp": "00:07:47", "end_timestamp": "00:08:30", "start_second": 467, "end_second": 510, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=467s",
"title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "before trying the idea of this predicting categorical distributions for every pixel why would he want to imagine that pixels are definitely gonna be modeled with calcium's instead of categorical distributions like indy really doesn't make any sense but then practically it's better for a neural network to work with cross entropy losses there are also been architectural advances that made all these was much better so mass conversions were applied in the original Pisa CNN but as transformers and dilated communist art", "start_timestamp": "00:08:30", "end_timestamp": "00:09:06", "start_second": 510, "end_second": 546, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=510s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "exists the samples just got much better with more coherent structure across long range dependencies and and making the whole modeling problem look more like supervised learning helps a lot and therefore relying relying heavily on oh they'll be here crossing will be lost and optimizes that have been much better tuned for this loss ensures that generative modeling can also benefit from all this but engineering advancements so now what's the future for our regressive models we're only scratching the surface of what's", "start_timestamp": "00:09:06", "end_timestamp": "00:09:43", "start_second": 546, "end_second": 583, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=546s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": 
"possible and and once we have motor pilot training we'll be able to realize a lot more for instance be able to train trillion parameter models on all of the Internet's text and that that way we could compress all the Internet's text into a giant neural network that can be a like a know-it-all language model and secondly we can figure out ways to Train one single model for multiple modalities just even bigger generative model they could work at a video level on YouTube or image level Instagram text level Cabiria so that way it's able to", "start_timestamp": "00:09:43", "end_timestamp": "00:10:24", "start_second": 583, "end_second": 624, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=583s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "probably correlate information across multiple entities and chameleons for expansion so for all these kind of modeling requires hardware and software advances from auto pilot training we should it's also possible to make or aggressive models more useful by figuring out faster ways to sample with better low-level primitives at the CUDA that will like for instance fast kernels and and better act like for example wave are an N uses all these mechanisms for production components and doesn't need to be distilled into something like a", "start_timestamp": "00:10:24", "end_timestamp": "00:11:02", "start_second": 624, "end_second": 662, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=624s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "parallel bayonet this work as a standalone auto regressive model and still be deployed on an Android phone hybrid models with much weaker or aggressive structure but that 
can be trained on a large escape could be revisited and and of course all these architectural innovations that help in long-range dependencies would always help in you know as you keep moving to bigger image this or a video or something like that these kind of ideas should up a lot so like a summary of auto regressive model could be that it", "start_timestamp": "00:11:02", "end_timestamp": "00:11:38", "start_second": 662, "end_second": 698, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=662s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "is an active topic but a lot of cutting-edge to us and there's a lot of moscow for a new engineering and creative architecture design and larger models and data sets are clearly needed to you know realize the full potential of these class of models and standalone they are very successful across all modalities without any conditioning information like class labels so that's that's like a very appealing property of these models every Universal in that sense and also they can work without much engineering for sampling time so", "start_timestamp": "00:11:38", "end_timestamp": "00:12:14", "start_second": 698, "end_second": 734, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=698s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "that makes them really look creative but but but nevertheless for production if you you should really cut down on the sounding time to be useful and so innovating on the low-level primitives was very important so that said there are a lot of negatives for aggressive modeling one is you don't extract any representation there is no bottleneck structure and sampling times not 
good for deployment it's not particularly usable for downstream tasks like for instance a language Maru you need to sample multiple times to see coherent samples", "start_timestamp": "00:12:14", "end_timestamp": "00:12:52", "start_second": 734, "end_second": 772, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=734s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "so you can't just roll out a language model that's a software and there are no interpolations that you can see to visualize what the models actually learning and every time you sample it's going to take a long time to produce like a diverse set of samples so that's it about auto regressive models now let's look at flow models in flow models it all started with the nice architecture by loaned in and those the model was already producing very good digits on the endless data set and on the T of tedious it was producing", "start_timestamp": "00:12:52", "end_timestamp": "00:13:27", "start_second": 772, "end_second": 807, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=772s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "reasonable phases but it really was bad on see far and SPH India said the samples were very blurry but it all improved with the real end we'd be architecture which introduced other kinds of flows and rational room to make the models better and then the glow model from King model was published where the real and Ruby model was taken to another level by making it prettiest much larger images and overdone in our lab called flow pass class advanced the likelihood scores for flow based models to competitive scores that with that of", "start_timestamp": "00:13:27", 
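The sampling-time complaint above comes from the chain-rule factorization p(x) = prod_i p(x_i | x_<i): each new symbol needs a forward pass conditioned on everything sampled so far. A toy illustration with a made-up two-symbol "model" (not any real architecture):

```python
import numpy as np

def sample_autoregressive(transition, length, rng, start=0):
    """Sample a sequence one symbol at a time.

    transition[i] is p(next symbol | current symbol == i). The key point is the
    loop: step t cannot run until step t-1 has produced its symbol, which is why
    autoregressive sampling is inherently sequential and slow for long outputs."""
    seq = [start]
    for _ in range(length - 1):
        probs = transition[seq[-1]]  # condition on the prefix (here, just the last symbol)
        seq.append(int(rng.choice(len(probs), p=probs)))
    return seq

# Two-symbol toy chain that strongly prefers alternating symbols.
T = np.array([[0.1, 0.9],
              [0.9, 0.1]])
seq = sample_autoregressive(T, length=10, rng=np.random.default_rng(0))
```

In a real model the "transition" lookup is a full network forward pass over the whole prefix, so sampling a megapixel image or seconds of audio costs thousands of serial passes.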
"end_timestamp": "00:14:05", "start_second": 807, "end_second": 845, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=807s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "autoregressive models for the first time and this was done by this architecture engineering and scale so this shows the power of flow models of potential they have in terms of closing the gap in density estimation between autoregressive models without having the powerful or aggressive structure but at the same time being really fast with sampling and also potentially useful for inference so given all these practices there's a lot of future work left in terms of how to learn the masks how do you actually completely close the gap", "start_timestamp": "00:14:05", "end_timestamp": "00:14:38", "start_second": 845, "end_second": 878, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=845s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "with our regressive models whether you want to use very expressive fluids but very few or whether you want to use shallow flows which are not particularly expressive but then keep on stacking them so that you can get a very expressive compose model how do you use multi scale losses for a trait and how do you trade off between your density estimates and your sample quality and how to use the representations you derive at various levels of the flow model for downstream tasks all these are like fundamental advances think about", "start_timestamp": "00:14:38", "end_timestamp": "00:15:15", "start_second": 878, "end_second": 915, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=878s", "title": "L8 Round-up of Strengths and Weaknesses of 
Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "for flow models and also how do how do you carefully initialize so that flow models can train very fast so in terms of core achievements that you can aim for you can aim for producing low level samples which are truer models that have way fewer parameters the globe uses half a billion parameters for all the celebrity faces and that's unlikely a scale and how do you make it work potentially for even larger images how do you do dimensionality reduction with flows and think about other other flow models like conditional flow models and", "start_timestamp": "00:15:15", "end_timestamp": "00:15:55", "start_second": 915, "end_second": 955, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=915s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "you know how do you actually close the cap and sample quality de Gans and also close the likely skoros gap between autoregressive models so the models would provide the pathway to do both and it's it's interesting to think about how to do all these things together so the negative of flow model says you expect to have the same dimension at every layer every stack of the flow and so it's unlikely to scale if your data is getting bigger and higher dimensional and unless you innovate on how to do dimensional reduction sauce it's unlike it'd be", "start_timestamp": "00:15:55", "end_timestamp": "00:16:29", "start_second": 955, "end_second": 989, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=955s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "useful 
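The change-of-variables bookkeeping behind "stacking shallow flows" can be sketched in one dimension: each invertible layer contributes a log|det J| term, and composing layers just sums those terms against a standard-normal base density. A toy affine example of mine, not RealNVP:

```python
import numpy as np

class Affine:
    """Invertible 1-D map z = a*x + b with log|det J| = log|a|."""
    def __init__(self, a, b):
        self.a, self.b = a, b
    def forward(self, x):
        return self.a * x + self.b, np.log(abs(self.a))
    def inverse(self, z):
        return (z - self.b) / self.a

def flow_log_prob(x, layers):
    """Change of variables: log p(x) = log N(z; 0, 1) + sum of log|det| terms."""
    z, log_det = x, 0.0
    for layer in layers:
        z, ld = layer.forward(z)
        log_det += ld
    base = -0.5 * z ** 2 - 0.5 * np.log(2 * np.pi)  # standard-normal base density
    return base + log_det

# Stacking two shallow (affine) flows; expressiveness comes from the composition.
layers = [Affine(2.0, 1.0), Affine(0.5, -1.0)]
lp = flow_log_prob(0.3, layers)
```

Real flows replace the scalar affine map with coupling layers whose Jacobians are still cheap to evaluate; the summation structure of the log-determinants is identical.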
and you really need to carefully initialize and use things like ActNorm to get good numbers, so that's another negative, because it may not be directly usable for another modality or another dataset or another kind of architecture. so let's look at latent variable models; we'll see the various strengths and weaknesses and what have been some visible successes with VAEs. it all started with the original MNIST modeling by Durk Kingma, where you could see various types of digits and strokes, and the slopes of the strokes", "start_timestamp": "00:16:29", "end_timestamp": "00:17:11", "start_second": 989, "end_second": 1031, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=989s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "and shades across multiple digits, and then it got extended to much better, more powerful datasets, like LSUN bedrooms by PixelVAE and also ImageNet 64x64, creating globally more coherent samples than PixelCNN because of modeling latent structure. and then there's the latent variable model innovation of using hierarchical models, with multiple stacks using hierarchical latent inference, producing really high-quality celebrity faces on par with flow models. so there are well-known applications of VAEs,", "start_timestamp": "00:17:11", "end_timestamp": "00:17:55", "start_second": 1031, "end_second": 1075, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=1031s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "like SketchRNN and world models, and beta-VAE is used for modeling visual concepts, and there are applications like DeepMind's generative query networks, which do view synthesis of a separate
view by taking in two provided views, embedding them into a latent variable, and interpolating the latent space for a query view across multiple possibilities. therefore you can just collect data in a completely new environment from first-person vision, you can keep track of all the poses when you're recording things, and then in principle", "start_timestamp": "00:17:55", "end_timestamp": "00:18:34", "start_second": 1075, "end_second": 1114, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=1075s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "you could figure out how a particular scene looks from any other viewpoint, and therefore reconstruct the entire room or entire environment completely, through this kind of view-synthesis model that has variational inference. so VAEs have been practically used in these kinds of architectures, and there are lots of advantages of VAEs: you get a compressed bottleneck representation, you can get approximate density estimates, you can interpolate and visualize what the model learns, and you can potentially get disentangled representations, where", "start_timestamp": "00:18:34", "end_timestamp": "00:19:07", "start_second": 1114, "end_second": 1147, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=1114s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "different latents correspond to different aspects of the data. it is a model that allows you to do all these things together at once: you can sample, so it's a generative model; you have a density estimate, so you can use it for out-of-distribution detection as a density model; you have latent variables, so you do representation learning; and you also
have a bottleneck representation, so you are able to reduce the dimensionality of your original dataset. so a VAE is the only model that lets you do all these four things together, and", "start_timestamp": "00:19:07", "end_timestamp": "00:19:39", "start_second": 1147, "end_second": 1179, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=1147s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "that makes it very appealing. that said, there are disadvantages: you often end up with blurry samples, and the assumption of a factorized Gaussian for the posterior or for the decoder may be very limiting, so you need more powerful decoders or more powerful posteriors, and large-scale successes are still yet to be shown. and even though people have tried to get more interpretable, more disentangled latent variables by prioritizing the KL term over the reconstruction term in the loss, these still only work on toy problems, and there may", "start_timestamp": "00:19:39", "end_timestamp": "00:20:14", "start_second": 1179, "end_second": 1214, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=1179s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "actually be better ways to do representation learning or generation or interpolation or hierarchical latents individually, so expecting one model to do all of them well may be truly hard. and so a VAE may not be the state-of-the-art model on anything, but it may be a model that lets you do all these things reasonably well within a single modeling framework; that's what you lose when you want everything within one model. so these are the disadvantages to
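The "approximate density estimate" a VAE gives is the evidence lower bound: reconstruction log-likelihood minus KL(q(z|x) || p(z)). For a factorized Gaussian posterior against a standard-normal prior the KL term has a closed form, sketched here with assumed toy numbers:

```python
import numpy as np

def gaussian_kl(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over latent dimensions."""
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)

def elbo(recon_log_lik, mu, log_var):
    """ELBO = E_q[log p(x|z)] - KL(q(z|x) || p(z)); a lower bound on log p(x)."""
    return recon_log_lik - gaussian_kl(mu, log_var)

mu = np.array([0.5, -0.2])       # posterior mean for one example (made up)
log_var = np.array([0.0, -1.0])  # posterior log-variance (made up)
bound = elbo(recon_log_lik=-10.0, mu=mu, log_var=log_var)
```

The beta-VAE variants mentioned above simply multiply the KL term by a weight greater than one before subtracting it, trading reconstruction for disentanglement.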
me", "start_timestamp": "00:20:14", "end_timestamp": "00:20:55", "start_second": 1214, "end_second": 1255, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=1214s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "but there's obviously scope for future work you can but you can use bigger decoders more powerful posteriors you can think about how to do hierarchical Leyton's to learn covers and fine-grained features and discrete Leyton's like weak uva and also large scale training like slow models have been done like glow or focus bus so next let's cover implicit models but we look at general adversarial networks and just just basically what what's happening ganz though we also covered moment matching energy based models in", "start_timestamp": "00:20:55", "end_timestamp": "00:21:35", "start_second": 1255, "end_second": 1295, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=1255s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "class the Gann samples the quality of Gann samples has dramatically advanced from the primitive samples that you saw in the original Gann where you saw X reasonably looking good faces but then the c4 samples it's not pretty cooing too critical in terms of what is the object or class of C far that's been captured but it certainly looked different from Larry BAE samples at the time next you saw DC Gann which clearly advanced some the some quality of dance to a state where again to assign you a looking much and much more exciting than", "start_timestamp": "00:21:35", "end_timestamp": "00:22:18", "start_second": 1295, "end_second": 1338, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=1295s", "title": "L8 Round-up 
of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "any other model, because the samples were much sharper and all these bedrooms were very high-dimensional. and then more recently, DCGAN has been taken over by BigGAN, StyleGAN, and similar models, where clearly careful attention to detail in architecture design, and also really large-scale training with large batch sizes and a lot of stabilization tricks, can produce these amazing photorealistic samples that you've already seen plenty of times in the class, so I'm not going to go over them. in terms of future work for GANs, I", "start_timestamp": "00:22:18", "end_timestamp": "00:22:56", "start_second": 1338, "end_second": 1376, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=1338s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "think it's really hard to bet against GANs, to say hey, this is where GANs are weak: it's most likely that if you put sufficient effort into engineering you can get a GAN to function well on those things as well. but nevertheless there's still more progress to be made on unconditional GANs and mode collapse, and also more complex scenes, and video generation would be cool. for instance it would be nice to get a model that works on real driving data, where a lot of pedestrians are walking, and then you want to be able to simulate future", "start_timestamp": "00:23:30", "end_timestamp": "00:24:08", "start_second": 1376, "end_second": 1410, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=1376s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id":
"1sJuWg5dULg", "text": "you have to keep track of multiple people, multiple objects, multiple cars, road signs, and so forth, so it's a very complicated generative modeling problem, and it'll be interesting to see if GANs, which are known to identify only a few cues in your dataset, would still work in such complex settings where you need to keep track of multiple things at once. so for future work in terms of modeling, you can think of better Lipschitz norms, better conditioning tricks like how to feed noise in at various levels; for instance StyleGAN basically", "start_timestamp": "00:23:30", "end_timestamp": "00:24:08", "start_second": 1410, "end_second": 1448, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=1410s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "innovated on their batch or instance normalization; also how to design better architectures, what upsampling and downsampling ops to use, how to do channel upsampling and downsampling without introducing a lot of parameters, what is the right objective function for your discriminator, how to scale and train GANs in a stable manner for larger problems, and how to inject noise at various levels, like instance noise or feature noise, so that it can stabilize the training of the discriminator much", "start_timestamp": "00:24:08", "end_timestamp": "00:24:41", "start_second": 1448, "end_second": 1481, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=1448s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "better. so all those things are very interesting to think about. in terms of negatives of GANs, one could say there's plenty
of engineering details, and it's hard to clearly identify which is the most important core component that helps you reproduce these high-quality images, and it's also very time-consuming to ablate all these details. and it's very clear we need to improve on sample diversity, but we also don't have very good metrics for evaluation, so we need to work with what we have. and even", "start_timestamp": "00:24:41", "end_timestamp": "00:25:19", "start_second": 1481, "end_second": 1519, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=1481s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "though it may seem like we're improving a lot on the current metrics we use for GAN evaluations, objectively the sample diversity is not as good as with likelihood-based models, so how do we actually come up with better evaluation measures? also, one thing to think about: all these aspects, like good evaluations and good metrics, are not particularly specific to GANs; the same can be said for any kind of model. so if you were to make a choice between a GAN or a density model,", "start_timestamp": "00:25:19", "end_timestamp": "00:25:55", "start_second": 1519, "end_second": 1555, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=1519s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "one would imagine you need a lot of engineering details for GANs, but that's not particularly true: even for density models the architectural engineering has been at a comparable level of detail and trickery to what you need for GANs. and secondly, there has been a lot of work attempting to theoretically understand GANs: the trade-off
between having blurry samples versus being okay with mode collapse is basically the same trade-off that you make when you care more about compression at the cost of sample quality, versus you wanting to have", "start_timestamp": "00:25:55", "end_timestamp": "00:26:34", "start_second": 1555, "end_second": 1594, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=1555s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "really good samples at the cost of missing some modes. so it's basically about which direction of the KL divergence you care about: the reverse direction you care about more if you don't want any spurious samples, while the forward direction you care about more if you really want to make sure your modeling is good and you're not going to miss out on anything, even though you may make some mistakes at some of the points. so mostly, apart from the fact that they can produce amazing samples, GANs are", "start_timestamp": "00:26:34", "end_timestamp": "00:27:09", "start_second": 1594, "end_second": 1629, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=1594s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "popular because they can work with much less compute. for instance, in order to generate a 1-megapixel image with an autoregressive model, or even a latent-space autoregressive model, you need to use at least 512 cores of a TPU, because you need such large batch sizes, whereas for GANs you can make it work with a single V100 GPU. so that's one reason why GANs are clearly preferred over density models: the amount of time it takes to train and
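The "which direction of KL" trade-off can be made concrete with two discrete distributions: a bimodal target p and a mode-collapsed approximation q. Forward KL(p||q) penalizes missing a mode (blurriness/mode covering), while reverse KL(q||p) penalizes putting mass where p has none (spurious samples). A toy computation:

```python
import numpy as np

def kl(p, q):
    """KL(p || q) for discrete distributions; infinite if q misses p's support."""
    mask = p > 0
    if np.any(q[mask] == 0):
        return float("inf")
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

p = np.array([0.5, 0.0, 0.5])     # bimodal target
q = np.array([0.98, 0.01, 0.01])  # approximation collapsed onto one mode
forward = kl(p, q)  # large: q nearly missed the right-hand mode
reverse = kl(q, p)  # infinite: q puts mass on the middle bin where p = 0
```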
sample and you can also see better interpolations and", "start_timestamp": "00:27:09", "end_timestamp": "00:27:46", "start_second": 1629, "end_second": 1666, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=1629s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "better conditional generation with GANs. so this leads to adoption by people who are more interested in art, fine-tuning on interesting artistic datasets that are not particularly machine-learning relevant, and that's another reason GANs are popular. so on the bright side, we can think about how many technological advances have been possible without the correct science, and GANs can be considered in that way as well. and this is a slide from Yann LeCun on the epistemology of deep learning, where he explains that", "start_timestamp": "00:27:46", "end_timestamp": "00:28:22", "start_second": 1666, "end_second": 1702, "url":
"https://www.youtube.com/watch?v=1sJuWg5dULg&t=1702s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "several technologies in the past have preceded the science that explains them; for example the steam engine came before thermodynamics. so better theory for GANs is something that could still be innovated on in the future. so here is a taxonomy of generative models from Ian Goodfellow's NeurIPS tutorial: apart from Markov chain Boltzmann machines and generative stochastic networks, we have pretty much covered everything else. we've covered NADE, MADE, PixelRNN, and change-of-variables models, i.e. the flow", "start_timestamp": "00:28:22", "end_timestamp": "00:29:00", "start_second": 1702, "end_second": 1740, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=1702s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "models or RealNVP models; all these are explicit density models. then we also covered approximate density models, variational autoencoders with the variational lower bound, and then we covered implicit density models, that is, GANs. the other models that are not being covered are not particularly popular or widely used, so that's the reason we focus on the more popular ones. and if you have trained density models and you're figuring out which density model you should be using, here are some pointers: if you only care", "start_timestamp": "00:29:00", "end_timestamp": "00:29:33", "start_second": 1740, "end_second": 1773, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=1740s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "about the density estimates, just go for autoregressive models; you don't worry about sampling time here. if you care a lot about sampling time, an autoregressive model may still be fine if your sequences are not that big or if you use lightweight models. but if you really cannot afford to wait for the sampling time, you want really fast samples but you still want to do density modeling, you could think about using weakly autoregressive models like the parallel PixelCNN, and you could also think of doing latent space modeling, like latent", "start_timestamp": "00:29:33", "end_timestamp": "00:30:05", "start_second": 1773, "end_second": 1805, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=1773s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail":
"https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "space models, or like a VQ-VAE; you may probably not even need the quantization bottleneck, it could still work with continuous values. flow models are also pretty appealing for modeling continuous-valued data, giving density estimates for continuous-valued data especially when the values are actually continuous and it's hard to figure out how to even quantize them; that's another interesting aspect of flow models. and if you also want to think about how to have representations and also sampling, but", "start_timestamp": "00:30:05", "end_timestamp": "00:30:46", "start_second": 1805, "end_second": 1846, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=1805s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "you want to have the simplest possible model, VAEs with factorized decoders may be the natural choice. so given these appealing properties of density models, when would you use GANs? you would use GANs when you really care about having good samples and you have really large, high-quality images, and you want something photorealistic; or you have a lot of conditioning information, like pose or class or edge maps, and you just want to add texture to them. GANs are really good at these image-to-image", "start_timestamp": "00:30:46", "end_timestamp": "00:31:17", "start_second": 1846, "end_second": 1877, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=1846s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "translation problems, or video translation, and if all you care about is perceptual quality and controllable
generation and you don't have a lot of compute this is often the case for any kind of startup a GAN is like the best choice to go for so that's it for generative models next let's look at self-supervised representation learning which is our final topic so self-supervised image classification has seen rapid advances in the last one and a half years just at the end of 2018 the top-1 accuracy on the ImageNet linear classification", "start_timestamp": "00:31:17", "end_timestamp": "00:31:57", "start_second": 1877, "end_second": 1917, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=1877s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "benchmark was 48 percent and now it's seventy six point five percent so this rapid advance has been made in multiple labs because of this mode of learning called contrastive learning and the contrastive learning task can be simply summarized as a dictionary lookup task and there are two ways to do this pretext contrastive learning you either build it as a predictive coding task or you build it as an instance discrimination task and in predictive coding you have multiple mechanisms to do that you either use the end-to-end", "start_timestamp": "00:31:57", "end_timestamp": "00:32:31", "start_second": 1917, "end_second": 1951, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=1917s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "mechanism or you use the momentum encoder mechanism using the momentum encoder for the keys and the predictive coding success story has been achieved in contrastive predictive coding or CPC particularly CPC version two and the instance discrimination
success has been achieved in MoCo and SimCLR MoCo means momentum contrast and SimCLR is end-to-end instance contrast they use the corresponding mechanisms of contrastive learning so let's look at CPC version two MoCo and SimCLR in terms of their positives and negatives so", "start_timestamp": "00:32:31", "end_timestamp": "00:33:12", "start_second": 1951, "end_second": 1992, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=1951s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "CPC version two we're doing spatial contrastive prediction so that principle is very generic and it can apply to any modality or domain so you don't need to know the underlying data augmentation invariances in this work and it can be considered as a latent-space generative model and also it's much easier to adapt for audio video text and perform multimodal training disadvantages it splits your input into a lot of patches or frames or even audio chunks and therefore your inputs are now basically split into", "start_timestamp": "00:33:12", "end_timestamp": "00:33:48", "start_second": 1992, "end_second": 2028, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=1992s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "a lot of different parts that you have to carefully delineate and you also need to carefully pick what part you are predicting from what so that involves a lot of design choices and hyperparameters that you can only know by trial and error so that makes it really hard for you to use it on a domain or task that you don't really understand well and then you require multiple forward passes for these smaller versions of the inputs now
and so that means that you'd be pre-training on something much smaller but potentially", "start_timestamp": "00:33:48", "end_timestamp": "00:34:19", "start_second": 2028, "end_second": 2059, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=2028s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "fine-tuning on much larger versions of the sequences or images so this may not be an optimal thing to do when you're doing local predictions local spatial predictions batch norm is hard to use so applying batch norm is hard but then you really want to use batch norm for a downstream task so that makes CPC version two a little worse in the sense that it's not particularly suitable for downstream tasks if you really care about state-of-the-art performance and finally the splitting and processing mechanism is very slow on matrix-multiplication", "start_timestamp": "00:34:19", "end_timestamp": "00:34:55", "start_second": 2059, "end_second": 2095, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=2059s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "specialized hardware like GPUs because you do a lot of reshapes and transposes and so it's never an optimal thing to do so here's the summary of MoCo one of the main advantages of MoCo is it is very minimal so it's very easy to use and replicate and it has no architectural change and can be easily applied for downstream tasks there is no notion of a patch and it's distilling invariances for images using data augmentations and so the pre-training procedure looks very much like supervised learning and therefore it can", "start_timestamp": "00:34:55", "end_timestamp": "00:35:31", "start_second": 2095, "end_second":
2131, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=2095s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "get comparable or even better results and the momentum encoder memory bank can assume adds a lot of stability to the training and decouples back size from the number of negatives and therefore this lets you train with way fewer GPUs than what's needed for CPC or like methods the disadvantage with moco is that because you introduce momentum and date you need to figure out what's the right decay rate for that and that has an extra type of parameter and another disadvantage is in image augmentation the invariances may not be applicable to", "start_timestamp": "00:35:31", "end_timestamp": "00:36:05", "start_second": 2131, "end_second": 2165, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=2131s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "other modalities so this may be in method this works only for a visual image recognition and finally let's look at simply er which can be considered as an end-to-end version of Tomoko where you just look you're using all the negatives from your batch and there is no momentum encoder so advantages or sim clear are the same as that of moko with the additional advantage that you don't have a momentum in kora now so it's going to be asked minimally supervised learning but the disadvantage is now you just need really large batch sizes", "start_timestamp": "00:36:05", "end_timestamp": "00:36:39", "start_second": 2165, "end_second": 2199, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=2165s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley 
SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "because you need a lot of negatives because moko decouples the negatives from a bad size it doesn't need as much compute as sim cleared us and similar to moko they documentation invariance may be very specific to image recognition so in terms of future work left for sauce provision the gap between some supervised learning and supervised learning is to not close if you consider just the same amount of compute training time and the same candidate augmentations use so and also fine-tuning to downstream tasks the", "start_timestamp": "00:36:39", "end_timestamp": "00:37:13", "start_second": 2199, "end_second": 2233, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=2199s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "gains are not significantly high enough that the paradigm shift has been made in vision so that way maybe new objectives are also needed and finally all these sub supervised successes have relied on using image net and it's not clear if supervised learning we just work from images in the wild or from the internet which is really the dream and which is really why people wanna do something so that's it for like subspace learning as in in terms of utility for downstream tasks let's look at always learning in the context of", "start_timestamp": "00:37:13", "end_timestamp": "00:37:54", "start_second": 2233, "end_second": 2274, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=2233s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "intelligence like being able to act in an environment so here is a video of this quake3 
game where you can see some characters and then you can see some bullets that are going to be fired and you know you see all these different walls and fires and other characters and when you're looking at all this you're able to already accurately parse the scene make sense of what's going on and you're also able to clearly separate out the objects from what's not objects and so we need to be able to do that as well we shouldn't", "start_timestamp": "00:37:54", "end_timestamp": "00:38:36", "start_second": 2274, "end_second": 2316, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=2274s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "be working at the level of pixels we should be able to predict the future in a much more semantic space and so modeling the pixel space for these high dimensional videos is really hard and in order to build really intelligent agents that can plan faster than real time we should be able to do it in a latent space that's more abstract so how do we do that what is the right kind of abstraction to build and how do we learn world models in that latent space that can ignore noise and work in a much more semantic space it's really the", "start_timestamp": "00:38:36", "end_timestamp": "00:39:09", "start_second": 2316, "end_second": 2349, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=2316s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "hardest question to think about and this has also been summarized multiple times by Yann LeCun that if you have a very good internal world model you'll be able to plan with it and avoid a lot of mistakes that RL usually makes and
and how to do that is one of the most important questions so if you want to have the overall view of self-supervised learning across all these different problems for image recognition we saw successes like CPC SimCLR MoCo version two for transfer learning it works really well in language but the", "start_timestamp": "00:39:09", "end_timestamp": "00:39:48", "start_second": 2349, "end_second": 2388, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=2349s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "exact details will be covered in a future lecture and transfer learning in vision also works reasonably well now as has been shown in CPC and MoCo but there's like close to nothing in terms of how to use self-supervised learning for RL so that's a very ripe area for the future and then as far as using self-supervision in the context of general intelligence is concerned it's potentially going to be extremely useful in the context of transfer learning and learning useful abstractions for planning or imagination so that's just a lot of", "start_timestamp": "00:39:48", "end_timestamp": "00:40:26", "start_second": 2388, "end_second": 2426, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=2388s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "work to be done there so that's it for the summary of the class it pretty much ends with our original motivation which is how do we build this intelligence cake and a lot of it is gonna be done through unsupervised learning and so in terms of future lectures they're gonna look at more applied topics which are not falling into the main lecture stream which
is that we'll be looking at semi-supervised learning we'll also be looking at the whole area of unsupervised learning for language which is language models", "start_timestamp": "00:40:26", "end_timestamp": "00:41:08", "start_second": 2426, "end_second": 2468, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=2426s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "1sJuWg5dULg", "text": "and BERT and then finally we look at how representation learning or unsupervised learning has been applied in the context of reinforcement learning and we will also cover things like how to do unsupervised distribution alignment that is given two completely different data sets with a lot of common information how do we align the two manifolds together without any paired data and we'll see how generative models and unsupervised learning can be used in the context of building compression algorithms so that's the next", "start_timestamp": "00:41:08", "end_timestamp": "00:41:41", "start_second": 2468, "end_second": 2501, "url": "https://www.youtube.com/watch?v=1sJuWg5dULg&t=2468s", "title": "L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20", "thumbnail": "https://i.ytimg.com/vi/1sJuWg5dULg/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "[Music] okay so hi I'm Luka first of all thanks to WeAreDevelopers for inviting us today so what we will try to do today is share some of our experiences when it comes to IT architectures we'll also try to give some recommendations about what to do and what not to do when you're designing an architecture in a larger enterprise environment we will try to give examples where we can from our own experience and you will see we will cover actually several topics so feel free to ask any questions if you know we go across one", "start_timestamp":
"00:00:00", "end_timestamp": "00:01:11", "start_second": 0, "end_second": 71, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=0s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "topic too fast because number of topics that we want to share so okay let's start so actually what is an enterprise IT landscape and how does it look like you probably guess that it's big given and I we work in telcos most of our careers which is a really big act landscape but just to make it more clear I'd like to compare it with the buildings and let's say a neighborhood so it looks like something like this you know it's a being it's modern it's neat it's beautiful I'm kidding of course it doesn't look like that at all probably", "start_timestamp": "00:01:11", "end_timestamp": "00:02:06", "start_second": 71, "end_second": 126, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=71s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "only chief architects thing that looks like this actually it looks something more like this or this or this if you're lucky the reason is that most enterprises have been building their IT for the last twenty thirty years and doing it mostly by adding new systems on top of existing ones rarely decommissioning your ones so now you have a really big combination of really cool and modern stuff and some really uncooled and all that stuff that's it that actually makes it really interesting but also challenging so but", "start_timestamp": "00:02:06", "end_timestamp": "00:02:53", "start_second": 126, "end_second": 173, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=126s", "title": "Building high performance 
and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "what happens at one point since you're building building building on the other it starts to crumble and you actually have two choices you need to you need to change something or as you like to call it these days do a transformation project and then you can do it in two ways basically you can go all the Intuit Big Bang changed 80% of your landscape or you can do it progressively in our experience I have done one transformation that was a big bang approach let's actually first don't don't do Big Bang transformation", "start_timestamp": "00:02:53", "end_timestamp": "00:03:40", "start_second": 173, "end_second": 220, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=173s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "projects try to do it one step at a time and progressively if possible so how can you do it well first you need to identify what are your biggest pain points what is the what is the thing that is most troubling your business with the current IT landscape then think of a better way to doing that when you do that take a step back because you're probably already going to white and you're looking at the project of three to four years you can't do all of it do a minimum of what you can think think of but but it makes sense that it really", "start_timestamp": "00:03:40", "end_timestamp": "00:04:25", "start_second": 220, "end_second": 265, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=220s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": 
"OOMFR6snocY", "text": "solves the biggest issue that you can and that you can integrated with the existing systems that you have so in order for you to do that there are actually three things that you need to consider and have in mind for us in our experience microservices architecture is a must in doing that why not just because it's a hype and everyone in doing that but because you can also release in a lot of books that talk about macro services architecture it's as people say so are done right it's finally finally we have the technology and the means to do", "start_timestamp": "00:04:25", "end_timestamp": "00:05:10", "start_second": 265, "end_second": 310, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=265s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "service-oriented architecture in a good way to make it scalable and maintainable so that's why macro services architecture is a must but when you can really do a lot of wood or doing a lot of bed is a in integration part so you need to be smart when it comes to integration patterns and also is in all development unit who developed development practices system otherwise won't be able to tactical so those are actually free areas that we will try to cover in our presentation first the micro services architecture the main", "start_timestamp": "00:05:10", "end_timestamp": "00:05:56", "start_second": 310, "end_second": 356, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=310s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "message that I want to convey here is the micro service isn't the support application now don't get me wrong I have nothing against 
Spring Boot I like it actually but many times I've seen someone build a Java Spring Boot application deploy it on a docker container and say I have a microservice you don't have a microservice unless that microservice has data when we come to the definition of a microservice in its essence it's a small autonomous application that can handle one domain independently autonomous and", "start_timestamp": "00:05:56", "end_timestamp": "00:06:34", "start_second": 356, "end_second": 394, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=356s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "independently you can't do that if you don't share data you can't do that if you're creating a facade or proxy microservice that keeps hitting legacy and querying that legacy all day I mean you can do it and there are uses for such applications but you can forget about any kind of performance or scalability if you query an existing legacy back-end system every time someone queries your API on a microservice and needs to get some data that's the most important bit we will talk later", "start_timestamp": "00:06:34", "end_timestamp": "00:07:15", "start_second": 394, "end_second": 435, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=394s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "Ivan will talk about what are good ways to get data out of legacy into a microservice so you'll hear that later another thing that is important is how you design your microservice what is important is a lot of times people have a set of functionalities they put it in a microservice
and you just go from there it's a great way to build a monolith what that means is you build a microservice that's actually a monolith you just deploy it on docker and call it a microservice in microservice design you should really look at", "start_timestamp": "00:07:15", "end_timestamp": "00:07:57", "start_second": 435, "end_second": 477, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=435s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "domain driven design that's another concept you have great books about it especially you have a book by Eric Evans domain-driven design where he talks about bounded contexts but what that means actually in a nutshell is that when you design a data model for an enterprise you can't design one data model that will work for the entire enterprise because of how an enterprise functions for example you take a customer entity and you talk to finance you talk to sales you talk to customer care each of those", "start_timestamp": "00:07:57", "end_timestamp": "00:08:46", "start_second": 477, "end_second": 526, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=477s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "departments looks at the customer differently in finance they look at did he pay his bills and so on in sales they look at what services is he using can we upsell cross-sell upgrade in customer care they are looking at does he have problems did he have problems in the past what kind of problems etc so the models are different and what you need to do is you need to establish a boundary in which your model is unique and that is basically
in one domain you could have one model that is unique and then when you talk to another team or", "start_timestamp": "00:08:46", "end_timestamp": "00:09:30", "start_second": 526, "end_second": 570, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=526s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "whoever that is designing another domain you need to establish what the boundaries exactly are and what is important how to transform and connect your model with their model it's really a good concept I won't talk any more about it but you should look at it there are really a lot of materials online about it so the next topic when it comes to microservices actually this is one of my favorites and also a lot of debate is around it when it comes to the communication patterns between microservices how do they", "start_timestamp": "00:09:30", "end_timestamp": "00:10:12", "start_second": 570, "end_second": 612, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=570s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "communicate with each other there's actually two big patterns one is choreography the other is orchestration in choreography you have microservices that are independent they function as let's say individuals with a set of rules in which they talk to each other but there is no central microservice or whatever that is orchestrating them in orchestration you have a central entity that is orchestrating that is telling the other microservices what to do which one is better neither because it depends on the use case that you have", "start_timestamp": "00:10:12", "end_timestamp": "00:11:03", "start_second":
612, "end_second": 663, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=612s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "mostly in whatever you are designing NIT there is no one-size-fits-all you need to be open-minded and you need to look at exact use case that you have so for example a choreography is a really great it is actually my preferred method because in choreography the main benefit is that these micro services are completely decoupled when one goes down it doesn't influence the rest if you need to remove one micro service and implement it in using a different technology or whatever you can do it because they communicate asynchronously", "start_timestamp": "00:11:03", "end_timestamp": "00:11:55", "start_second": 663, "end_second": 715, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=663s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "they have a set of interfaces that are standardized and this really gives you a lot of flexibility but what is a problem with choreography is that if you have a complex system or a complex process that you need to implement it's really hard to visual the communication so if you can't visual that which visual visual visualize that communication don't use choreography you can visuals visualize it by using open tracing and technologies like that there there are ways but if you can't do it don't want choreography orchestration on the other", "start_timestamp": "00:11:55", "end_timestamp": "00:12:39", "start_second": 715, "end_second": 759, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=715s", "title": "Building high performance and scalable architectures 
for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "part is great for example I use it in an order execution process when you have a process that is sequential and you really need to follow the exact set of steps that are needed for example in telco when you do order execution you need to create a customer you need to create his assets you need to send his equipment to the delivery service you need to talk to other telephone operators regarding number transfer and stuff like that you need to activate the subscription for the customer in billing", "start_timestamp": "00:12:39", "end_timestamp": "00:13:30", "start_second": 759, "end_second": 810, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=759s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "there's a lot of activities that you need to do all of those can be implemented as different microservices but when you orchestrate you are really sure that everything is happening the way that it should but you shouldn't do orchestration in code because the same as with choreography you won't have any visibility of that when you do orchestration my advice is use a BPM tool there are really good BPM tools on the market that you can use and the visibility and control of the process they give is great so just to show you this is one BPM tool and", "start_timestamp": "00:13:30", "end_timestamp": "00:14:16", "start_second": 810, "end_second": 856, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=810s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "this
is how it looks you can see where the process instances are some are in incident some are not so it's really a good way to see what is happening in that process if something is in incident you can easily see what's happening so the level of control here is really great so that's my advice when it comes to orchestration but the most important part is no one-size-fits-all so let's move on just some quick best practices when it comes to microservices design actually the only", "start_timestamp": "00:14:16", "end_timestamp": "00:15:13", "start_second": 856, "end_second": 913, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=856s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "don't here is don't put everything in code use the technologies that are on the market and that are really great for example if you have a lot of business rules that you need to implement there are great rule engines that are fast and that you can use instead of writing millions of ifs and cases in your code the only trick here is if you are using a rule engine there are two ways you can use it you can embed it in your code and then it's very easy to just use the engine in your code or you can", "start_timestamp": "00:15:13", "end_timestamp": "00:15:52", "start_second": 913, "end_second": 952, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=913s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "build it as an independent standalone service but if you do that it's important that it's fast and that the rule engine only executes rules it doesn't query any
database or back-end or anything for data it should look something like that this is request this is response you give it the rule that it needs to execute you give it the data and then it executes the rule and gives the response back and it should be in memory and it should be really fast as you see here it should really be a number of milliseconds that you can", "start_timestamp": "00:15:52", "end_timestamp": "00:16:41", "start_second": 952, "end_second": 1001, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=952s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "count on your fingers and toes because if it's slower than that you can forget about any kind of performance that uses the rule engine so it's really important that it's fast that it's in memory and it doesn't query anything it's just an engine also use artifact repositories they're really easy to use and really beneficial use tools for logging don't log to the file system or database and then if you have an incident spend three hours querying that use the tools that are on the market the same is true for monitoring use time series databases", "start_timestamp": "00:16:41", "end_timestamp": "00:17:26", "start_second": 1001, "end_second": 1046, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=1001s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "because they can handle a lot of data and they have great visualization tools that you can look at in the operations part so I'm not going into specific technologies that you can use but for all of these technologies you have open source solutions if you want to know I can tell you what we are 
using but maybe later on or in the questions part and the last but probably the most important thing is work together with business because a micro service or any software that is its own purpose doesn't make sense so you should build it", "start_timestamp": "00:17:26", "end_timestamp": "00:18:15", "start_second": 1046, "end_second": 1095, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=1046s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "together with people from business and you should build stuff that really solves some business issue that can be some new capabilities or extra functionalities it can be just that you are doing refactoring transforming some old implementation into a micro service because you want better performance which will result in better customer experience these are all valid reasons but don't do it without talking to business and having really good reasons because basically a micro service should be analogous to a business domain", "start_timestamp": "00:18:15", "end_timestamp": "00:18:53", "start_second": 1095, "end_second": 1133, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=1095s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "and it should implement business capabilities so that is really an important part here so that could actually be the first step when you're designing the micro-service know what is the challenge and what are the capabilities from the business side that you need to solve and then go to the technical part and that's it about the micro services architecture I will now give the word to Ivan who will talk more about 
integration patterns so when we were doing this kind of migrations basically integration", "start_timestamp": "00:18:53", "end_timestamp": "00:19:31", "start_second": 1133, "end_second": 1171, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=1133s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "as you move to micro services integration becomes more of a hot topic than it was before so some of our experiences don't use multiple layers of API calls what I mean by this we have a situation currently where the frontend is calling a backend that is calling a second backend that's calling a third that calls a fourth backend so basically you have this chain of calls where each layer is introducing its own bottlenecks and slowdowns so don't do this second is what Luka also mentioned in micro service architecture try to do", "start_timestamp": "00:19:31", "end_timestamp": "00:20:17", "start_second": 1171, "end_second": 1217, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=1171s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "domain driven design in the telco industry we are happy that we have global guidelines regarding this there is a community gathered around the TM Forum the TeleManagement Forum organization that already built for us some models so we know what are the customers what are the products what are the services resources payments so we already have domain driven design that can be followed basically we already have the domain prepared it just needs adjustment for a specific telco and this domain is extensive enough so that you can extend it without let's say many", "start_timestamp": "00:20:17", "end_timestamp": 
"00:21:02", "start_second": 1217, "end_second": 1262, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=1217s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "troubles of course no brainers don't use non-scalable or low-performance technology by non-scalable I mean try not to reach the limit where you need to pay for extra licenses or where you are on old technology that you cannot horizontally scale I mean you can always add more CPU power but this is something that needs to be avoided and also try not to use undocumented APIs I know we have cases where there's an API working for 10 years it works well it's just that the developers that built it long time ago left the company and you", "start_timestamp": "00:21:02", "end_timestamp": "00:21:45", "start_second": 1262, "end_second": 1305, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=1262s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "don't want to rely on such an API because when the problem starts then you will need to redesign the API in a matter of hours just for it to work so if you have something that is old and I mean we have a lot of old technologies that are there because they work and we have systems that are old 10 or more years you have stuff that is not documented so what to do in integration patterns try to use a data provider API so if you cannot come close to the data by querying the database try to come to the layer that is closest to that database", "start_timestamp": "00:21:45", "end_timestamp": "00:22:26", "start_second": 1305, "end_second": 1346, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=1305s", "title": 
"Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "layer in some cases you have some off-the-shelf solution where it is a black box you only have the API provided by the solution no one is telling you ok in the transformation let's replace it you purchased it you want to have return of investment on it and basically you should use it but you don't want to use a first API that is calling a second API that is calling a third so try to consume it directly and then use a rule engine to apply any rules that you have over the data to provide the domain-driven domain data that it", "start_timestamp": "00:22:26", "end_timestamp": "00:23:09", "start_second": 1346, "end_second": 1389, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=1346s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "needs for that particular purpose and try to build something that is easily upgradeable to new principles so if you have an old API don't just leave it yes you need to spend a little more effort to be compliant with the new proposals the new strategy invest in that API clean it up and upgrade it don't build everything from scratch but of course when you have 10 or 20 years of development a lot of things are piled around so sometimes you just need to have a clean cut and do it from the beginning", "start_timestamp": "00:23:09", "end_timestamp": "00:23:48", "start_second": 1389, "end_second": 1428, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=1389s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": 
"https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "regarding the integration patterns we found let's say something like five patterns I will present some of them here and what is important is that one pattern doesn't fit all so first when we are trying to figure out what patterns fit we first want to split it up by the CQRS principle when we looked at our logs and how users are accessing our data most of the data access is read access so a lot of people on the self-care app are going to see what is the state of their consumption have they paid the bills and stuff like this so most of", "start_timestamp": "00:23:48", "end_timestamp": "00:24:33", "start_second": 1428, "end_second": 1473, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=1428s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "the data is going out to the customer and then you have payment of bills activating some new add-ons changing tariffs and this is the smaller amount of command-like queries so basically one pattern will not fit all try to separate your integration patterns in your API by the CQRS principle do stuff in an async way if you have problems with executing the commands give the user the info okay your request has been received and do it quickly and then work on it if you have five ten or 20 seconds where you need to process this", "start_timestamp": "00:24:33", "end_timestamp": "00:25:19", "start_second": 1473, "end_second": 1519, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=1473s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "data as Luka mentioned our orders are complex 
sometimes we go from creating the customer to checking the distance to see if the speed is good enough so that he can receive some HD channels it takes some time so try to do it in an async way and try to keep it simple so these are some of the patterns I will share with you so this is the CDC or data replication pattern so we have legacy backends legacy APIs what we are trying to do is copy the data from these databases to our application database and then do a data transformation from the legacy model", "start_timestamp": "00:25:19", "end_timestamp": "00:26:03", "start_second": 1519, "end_second": 1563, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=1519s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "to the new domain model so in legacy APIs you often have situations where in some applications there is a mix-up of many models we have for example customer data in five different databases so this shouldn't be the case in domain driven design so what we are trying to do is pull this data as fast as possible through the data replication layer and then transform this data to a domain driven model and then when the frontend is querying your microservice you have data already prepared in the model that is by the domain driven design and what", "start_timestamp": "00:26:03", "end_timestamp": "00:26:46", "start_second": 1563, "end_second": 1606, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=1563s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "is important here data replication is not only CDC so depending on how often the data changes and how often users need to be aware of this data change it can be a job it can 
be a nightly job so basically it's up to you to see what fits your purpose and how you do the replication CDC is only one of the options that is applicable the second pattern that we saw is when you have an off-the-shelf solution or where you need to consume something not directly from the database so how we did it for example in our case", "start_timestamp": "00:26:46", "end_timestamp": "00:27:30", "start_second": 1606, "end_second": 1650, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=1606s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "this is the user's consumption API that is connected to different billing systems and this data changes really really frequently and the amount of data that is coming through the mediation system is really big so CDC would just introduce overhead that is not good so what we did first we simplified the architecture currently we have these multi-layer API flows that were introducing their own latencies and bottlenecks now we connected either directly to the database or where we had off-the-shelf solutions we connected", "start_timestamp": "00:27:30", "end_timestamp": "00:28:09", "start_second": 1650, "end_second": 1689, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=1650s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "to the services that the off-the-shelf solution presented and then we are querying these solutions transforming it in a specific service layer and then presenting it to the customers of course I put the cache here because usually users don't need to see every second what their data consumption is so you can put 
this kind of data in a cache for 10 or 15 minutes 10 minutes should be enough for users so that if he wants to return to the consumption data he can check it out but you also need to be aware you need to", "start_timestamp": "00:28:09", "end_timestamp": "00:28:55", "start_second": 1689, "end_second": 1735, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=1689s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "sometimes give him a pull-to-refresh capability if he really wants to see the data at exactly this time you can do it on demand but most of the users just when they log in to the self-care they want to see the current consumption and if they jump over to the details you already have this data cached you can show it to them okay and the third pattern I was mentioning regarding the commands that are being executed basically if you have a situation where we are executing for example changing the tariff of a customer and this takes some time", "start_timestamp": "00:28:55", "end_timestamp": "00:29:33", "start_second": 1735, "end_second": 1773, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=1735s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "because a process needs to be run you need to connect to all those legacy systems that are responsible for changing the tariff from the let's say BSS layer to the OSS layer sometimes this takes time so try to see what are the transactions where you have a minimal number of errors so if you have a change tariff process where the qualification of what a user can jump into is really good at first then your execution part will mostly be successful but if 
it's slow you can use the pattern where you store the message", "start_timestamp": "00:29:33", "end_timestamp": "00:30:11", "start_second": 1773, "end_second": 1811, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=1773s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "you tell the user immediately your request has been stored and then you start executing this message in an asynchronous way when this is successful send a push notification to the user so we have self care applications we have SMSes we can send to the user now your tariff has been activated so wherever you see that you have a really high percentage of successful executions try to do this in an async way okay so other things regarding data aggregation and data consolidation don't do it on demand so", "start_timestamp": "00:30:11", "end_timestamp": "00:30:52", "start_second": 1811, "end_second": 1852, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=1811s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "this is more let's say applicable to backend-for-frontend layers or frontend-close APIs don't try to do on-demand or on-the-fly aggregation of domains try to set up a domain and prepare it so that it fits easily don't try to do aggregation over many domains it really means that you did something wrong if you need metrics if you need aggregated metrics try to prepare them in advance by relying on event-driven messages so instead of querying the database to see how many SIM cards a customer has you can do it once he is activating a new", "start_timestamp": "00:30:52", "end_timestamp": "00:31:37", "start_second": 1852, 
"end_second": 1897, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=1852s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "SIM card or something like this put this in an event and store it somewhere in a bucket and then when you want to know how many SIM cards just query the bucket don't go to the database and try to run a select count query to return the number of SIM cards also if you have situations where some of the micro services are sharing the same database but in a different schema don't try to consolidate data on the database level databases should be treated as private fields so they are owned by microservices and there is a layer of", "start_timestamp": "00:31:37", "end_timestamp": "00:32:20", "start_second": 1897, "end_second": 1940, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=1897s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "business logic above this data where perhaps when you are querying directly you won't receive a correct response so if you need to aggregate something prepare it in advance do it in advance and don't do it on the fly especially not directly on the database layer it is okay if this is done by a data warehouse this is their job but I'm speaking about direct consolidation of data from digital channels from self-care also regarding the micro service approach and APIs try to do some sort of API management this really makes things easier so you", "start_timestamp": "00:32:20", "end_timestamp": "00:33:00", "start_second": 1940, "end_second": 1980, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=1940s", "title": "Building high performance and scalable 
architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "can find free API management tools and you have really expensive ones so depending on your budget try to fit something but it is really important that over your APIs you know who is the technical and business owner you have the governance you know when a new version of an API will be set up you know when an old version will be retired you will know who the consumers are in some cases we needed to introduce throttling to find out who our consumers are because some people we didn't even know were consuming the API PL/SQL procedures started complaining about", "start_timestamp": "00:33:00", "end_timestamp": "00:33:49", "start_second": 1980, "end_second": 2029, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=1980s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "timeouts so API management where you know what is the current version of the API where we know who is consuming it and who is the owner of it this is the way to go don't treat APIs only as one-protocol APIs so this is what the domain and layer separation is the domain separation is domain driven design separation and layer separation is try to keep things controller service and repository out of each other's way so every layer has its own boundary and the boundary shouldn't be crossed if you manage to do this then it's not a", "start_timestamp": "00:33:49", "end_timestamp": "00:34:30", "start_second": 2029, "end_second": 2070, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=2029s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": 
"OOMFR6snocY", "text": "problem for you to have an API that has repository and service and then it has controllers towards REST towards gRPC towards Kafka or other streaming platforms so try to separate it both horizontally and vertically clean architecture now the development practices I try to split them up by organization level and team level so in order to have good development and good APIs some rules in the organization need to be set up talking in small companies when you have a small team in a startup you are", "start_timestamp": "00:34:30", "end_timestamp": "00:35:13", "start_second": 2070, "end_second": 2113, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=2070s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "creating your own rules but on organization level you need to have support from the operations department you need to have support from the DBAs from the system department so some rules need to be set up what you don't want to do is to have multiple tech stacks try to find the tech stack that is okay for you and try to stick with it you don't want to get yourself in a situation where some developer developed a micro service in I don't know Go because it was really the right technology for him at that time and then left the company who will", "start_timestamp": "00:35:13", "end_timestamp": "00:35:52", "start_second": 2113, "end_second": 2152, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=2113s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "govern this so try to have a tech stack that is let's say future proof and that you see is really something used in backends on backends 
where let's say luckily we are not as volatile as for example JavaScript frameworks we had Java before we have Java now we just have different flavors but this is it also try not to limit the tech stack in a way that I don't know if you need something where you see graph databases would be good don't be limited by ok do it in Postgres or something like this also when doing let's say tech", "start_timestamp": "00:35:52", "end_timestamp": "00:36:33", "start_second": 2152, "end_second": 2193, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=2152s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "development I mean in this transformation you are trying to move your team from old to these new technologies and you have a bunch of experts in old technology and they are juniors in new technology and sometimes when you have deadlines the decision will be we can do it in 30 minutes in old technology or in 3 hours in new technology invest in the new first time it will be three hours next time it will be two and a half I mean no one learns anything in 30 minutes so you need experience you don't want to go", "start_timestamp": "00:36:33", "end_timestamp": "00:37:08", "start_second": 2193, "end_second": 2228, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=2193s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "back because then you achieve nothing don't skip continuous integration I know that for enterprises such as ours continuous delivery where you push your code to production is usually a no-go because we want to have human acceptance tests and because we want to have deployments to production over 
several systems that need to be synchronized so it's ok to split CI and CD but don't skip CI so deliver builds that have minimum CI at least CI that is running automatically code review and unit tests and whatever", "start_timestamp": "00:37:08", "end_timestamp": "00:37:51", "start_second": 2228, "end_second": 2271, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=2228s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "tests you put in the code don't do a constant move forward so we also had the experience where new user stories are just arriving arriving arriving and you didn't manage to have time for optimization I don't mean just a small optimization when you are solving some visible technical debt where you put something to a to-do and then you find some time in the next sprint to fix it I'm talking about getting the overall picture seen so as a developer I learn every day I find some blog I find some new technology so because of the constant move forward six", "start_timestamp": "00:37:51", "end_timestamp": "00:38:31", "start_second": 2271, "end_second": 2311, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=2271s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "months from now I will be smarter regarding some things that I am currently writing code for so six months later perhaps I need some sprints to sit down look at the overall picture and see okay what can be optimized what did I learn in the last six months that can help me move further otherwise you will just have let's say a clock where all parts are working but it's still showing the wrong time and try to have clear milestones try to 
have a governance where it is known when exactly something is expected", "start_timestamp": "00:38:31", "end_timestamp": "00:39:09", "start_second": 2311, "end_second": 2349, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=2311s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "developers like to know when they have meetings with their product owners they want to know by when the user story will be groomed and finalized so that they can start development so scrum is a really good principle here because you can set up the deadlines you can set up when it's expected what is expected and what amount of work needs to be done in the next five or ten days so on the organization level try to stick with these principles try to have this in dedicated time slots so that developers can plan this is the greatest thing that", "start_timestamp": "00:39:09", "end_timestamp": "00:39:48", "start_second": 2349, "end_second": 2388, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=2349s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "can be done when you have the sprint and when you have the agile approach on the team level you want to have guidelines especially when you're doing digital transformation for something new you need to create the guidelines you need to tell either the let's say junior developers in the new technology or the many experts that you have that don't know the telco tech you want to have for them the guidelines for how things are built so for example we have our horizontal split between our own products where it is known what", "start_timestamp": "00:39:48", "end_timestamp": 
"00:40:34", "start_second": 2388, "end_second": 2434, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=2388s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "each product does and what kind of code should be in which kind of project so this is a guideline that is really helpful to a team don't prioritize the deadline over tests this is easier said than done but what I would suggest is never communicate to business how much time you need for writing only the code when you're writing code without tests you don't write code you're writing a wake-up call in the night that something is not working in production so you are the ones that will be called when something is not", "start_timestamp": "00:40:34", "end_timestamp": "00:41:13", "start_second": 2434, "end_second": 2473, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=2434s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "working not your stakeholder and in my experience the older you get the much less open you are towards these kinds of calls try to do code reviews and try to establish coding practices so this is basically constant work so you need on the team level to have retrospectives to gather feedback from your team what was good what was not good this is the only way the team will improve and you need to also do knowledge transfer so basically it's not just knowledge transfer to new team members it's not just transfer when you", "start_timestamp": "00:41:13", "end_timestamp": "00:42:09", "start_second": 2473, "end_second": 2529, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=2473s", "title": "Building 
high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"}
{"video_id": "OOMFR6snocY", "text": "found something, you share it with the team. So basically when you find something that will speed up the team, you do some R&D during the sprint, because this is how you move forward. Also, regarding library dependency management: it is okay to test things out, but you will run into problems where you have one library on one Spring Boot version and a second one on another version, so when you start connecting them together things will break, because there will be", "start_timestamp": "00:42:09", "end_timestamp": "00:43:06", "start_second": 2529, "end_second": 2586, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=2529s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"}
{"video_id": "OOMFR6snocY", "text": "inconsistencies. So try to create a project that will hold your bill of materials of dependencies, and then try to agree: okay, let's use this for the next quarter, and after three months we will see what has been upgraded and then move forward. Enterprises are, let's say, a bit slower here, because they are not always on the latest, on-the-edge technology, but this can also be good, because you can leave others to solve the initial bugs that a new version of a technology brings, so you can be much more, let's say, on the safe", "start_timestamp": "00:43:06", "end_timestamp": "00:43:48", "start_second": 2586, "end_second": 2628, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=2586s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"}
{"video_id": "OOMFR6snocY", "text": "side here. Also, what I wanted to show is: do the unit tests. They are really helpful when you are moving forward and doing this dependency upgrade; if you run your unit tests when jumping from one Java version to another and everything passes, you can be fairly sure that, okay, this can work. So this is, let's say, a really important thing to do. So this is it from my side; we are now open for any questions that you have. I see there is only one question, regarding the videos. Okay, so I'm back, I'm back with you guys.", "start_timestamp": "00:43:48", "end_timestamp": "00:44:47", "start_second": 2628, "end_second": 2687, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=2628s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"}
{"video_id": "OOMFR6snocY", "text": "okay, let me publish question number one: will those videos be available afterwards? Yes, they will. If the question refers to the video of this session, yes, the video will be available at the developers website, so it may take two days, let's see. Another question: do you recommend using GraphQL, guys? Yeah, I like it a lot, but for the front end; between the backend-for-frontend and the front end you should use GraphQL, and it really makes sense, especially when it comes to performance,", "start_timestamp": "00:44:47", "end_timestamp": "00:45:49", "start_second": 2687, "end_second": 2749, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=2687s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "but when it comes to, you know, internal integration between various systems 
in the landscape, there are some other protocols like gRPC or REST. For the front end, keep GraphQL as close to the front as possible. But when you are doing domain-driven design, especially in an enterprise, you need to have an authorization concept, who can see what, and in this case REST or, let's say, other communication styles like gRPC or event-driven are a better choice, because you have a stabilized model and", "start_timestamp": "00:45:49", "end_timestamp": "00:46:50", "start_second": 2749, "end_second": 2810, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=2749s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"}
{"video_id": "OOMFR6snocY", "text": "you know what to expect and what to return, and then on top of this you can easily provide authorization rules for what needs to be removed or pre-checked, and GraphQL can be a part of the BFF that can then query the domain-driven microservices. Yeah, I would say that GraphQL is definitely future-proof for the front end, because if you look at a lot of APIs that are exposed to the front end, they mostly have too much data, so they are big and clumsy and have poor performance, or they don't have enough data, so you have to query several APIs, which is even worse because then", "start_timestamp": "00:46:50", "end_timestamp": "00:47:33", "start_second": 2810, "end_second": 2853, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=2810s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"}
{"video_id": "OOMFR6snocY", "text": "you have a lot of network latency. So yeah, for the front end, definitely, I think that is the future. But we have another question regarding GraphQL: how do we handle GraphQL with microservices? We are, let's say, focused on the backend services, so we are trying to keep GraphQL close to the front. At one moment in time I piloted, I think it was a framework that had this kind of support on the microservice side, where it can consume different microservices and then do the GraphQL processing, so", "start_timestamp": "00:47:33", "end_timestamp": "00:48:11", "start_second": 2853, "end_second": 2891, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=2853s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"}
{"video_id": "OOMFR6snocY", "text": "you can check that framework to see if it works for you; we are not using it. Okay, Mario is asking: how large are the teams, and how do you break down big organizational goals to team product stories? Okay, definitely. Actually, the entire company is now, let's say, in an age of transformation. We went from a standard way of doing projects, which was basically waterfall (you know, you have marketing that thinks of an idea, then a specification is done, it goes to IT, and IT implements it), and now we are moving to the agile way of working, where we have a sort", "start_timestamp": "00:48:11", "end_timestamp": "00:49:06", "start_second": 2891, "end_second": 2946, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=2891s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"}
{"video_id": "OOMFR6snocY", "text": "of a matrix: we have agile tribes that are in charge of a larger domain, with squads inside that are in charge of smaller domains. So that's what we are doing on the level of the entire organization, and when we connect it to the architecture, what we want to do is have one squad in charge of one microservice; that's our goal. So the tribes handle the backlog when it comes to the functionality and capabilities that we want to build, and when it comes to development, it is the squads that are each in charge of one microservice, and that's", "start_timestamp": "00:49:06", "end_timestamp": "00:49:50", "start_second": 2946, "end_second": 2990, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=2946s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"}
{"video_id": "OOMFR6snocY", "text": "how we want to achieve that, but we are in the setup phase. Okay, Marty McFly is asking: can you share your software stack, which databases are you using? Basically, I put the stack on one of the slides. We have a lot of Oracle databases in our legacy stack, but for all the new stuff we are also using other databases. Okay: what approach do you use for performance and integration tests in a microservices distributed architecture? So we are relying on mocks here regarding the integration tests. Of course, I", "start_timestamp": "00:49:50", "end_timestamp": "00:50:42", "start_second": 2990, "end_second": 3042, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=2990s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "mentioned that we always have user acceptance testing that is done by the testing team, but when you are developing, a great module for us is Testcontainers, where you can mock a database, and also WireMock, because most of our integrations are based on REST or SOAP calls. This cannot replace the real end-to-end testing, but it can speed up development, because you don't need to, let's say, create test users fitting some of the needs that you want 
to test; you can just set up a test container with a test", "start_timestamp": "00:50:42", "end_timestamp": "00:51:18", "start_second": 3042, "end_second": 3078, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=3042s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"}
{"video_id": "OOMFR6snocY", "text": "database and then put in some of your own test data values, or set up a mock that returns what the services should return. Okay, so Mario is asking: how do you do public APIs in a microservice environment, does every team own their own public API or is there some common part? Currently, most of our APIs are published either to our self-care applications or to our business partners, so this is something where we don't have much experience, but Luka mentioned how we are owning the APIs; they are usually", "start_timestamp": "00:51:18", "end_timestamp": "00:52:09", "start_second": 3078, "end_second": 3129, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=3078s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"}
{"video_id": "OOMFR6snocY", "text": "consumed by our own applications. Okay, so: you said a Spring Boot app doesn't equal a microservice, what technologies do you suggest using? We are using Java Spring Boot. Actually, it's not enough just to implement an application in Java Spring Boot to call it a microservice. To implement a microservice in Java Spring Boot, you need to have the Java Spring Boot app, you need to have a database with data, you have to expose an API on that microservice, and that microservice should encapsulate one specific domain that is connected to the business domain", "start_timestamp": "00:52:09", "end_timestamp": "00:53:05", "start_second": 3129, "end_second": 3185, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=3129s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"}
{"video_id": "OOMFR6snocY", "text": "that it services. So feel free to use Java Spring Boot, we use it also, but it's just that a lot of times people build Java Spring Boot applications and expose an API while they have no data in the Java Spring Boot application and there are a lot of different systems for the data; this is not a microservice, this is a proxy. You can implement it in whatever technology, but a microservice should own its own data; that's what this means. We also have the pattern that does not own the data, because, as I mentioned, we have", "start_timestamp": "00:53:05", "end_timestamp": "00:53:49", "start_second": 3185, "end_second": 3229, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=3185s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"}
{"video_id": "OOMFR6snocY", "text": "technologies or systems that are purchased: the billing system, the mediation system. So you don't want to say, now, okay, let's throw billing out of the portfolio and do our own billing; you don't do that. But if you are consuming data from billing and you want to create a microservice, it is not a real microservice by, let's say, the official definition. You want to go as close to the data as possible and then provide your own business rules, again with business rules exposed through our standard business rule solution, and then expose this to the outer layer. But if", "start_timestamp": "00:53:49", "end_timestamp": "00:54:23", "start_second": 3229, "end_second": 3263, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=3229s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"}
{"video_id": "OOMFR6snocY", "text": "you just create a Spring Boot app that is consuming some legacy API that goes God knows where, then you have that multi-layer flow that you really want to avoid. Yeah, and you are not scalable: it doesn't matter how many microservices you deploy, because there is still the one single database, for example the billing system, for the data, so that becomes your bottleneck and single point of failure; if that goes down, all of the microservices go down, no matter how many you have. So in order to be really scalable, a microservice has to have its", "start_timestamp": "00:54:23", "end_timestamp": "00:55:07", "start_second": 3263, "end_second": 3307, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=3263s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"}
{"video_id": "OOMFR6snocY", "text": "own data. But as Ivan says, you clearly can't do that every time, so there are those patterns where you connect to legacy systems, but connect with as few layers as possible. Okay, so we have our last question, coming from Portugal: hello from Portugal, great talk, how do you deal with scalability of software from vendors, or do you avoid using software that is behind licensing? Okay, we definitely use vendor software, and we use some licensed software as well; you know, in telco there are really a lot of domains, and for some you have", "start_timestamp": "00:55:07", "end_timestamp": "00:56:01", "start_second": 3307, "end_second": 3361, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=3307s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"} {"video_id": "OOMFR6snocY", "text": "excellent 
out-of-the-box vendor software that you can use, and it really doesn't make sense to, you know, reinvent the wheel and implement your own billing, to implement 20 microservices that do the telco billing, for example; then you should use a vendor solution, you pay the license, and when it comes to scalability, that software is scalable: you can create a cluster and deploy several instances. And that's, let's say, a well-known approach for how you can scale monolith software; you look at who is consuming and how much data, so", "start_timestamp": "00:56:01", "end_timestamp": "00:56:54", "start_second": 3361, "end_second": 3414, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=3361s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"}
{"video_id": "OOMFR6snocY", "text": "for example, the user consumption API that I mentioned is connected to a billing system that is a purchased vendor solution. So if you can query data from billing, then put it in a cache, or even a long-term cache, because you see that this data won't be changed regularly. For example, we have an API returning paid bills, so once you query the paid bill, you know the amount, you know when it was paid, and you can put it in a cache. If the user comes in and queries, he wants to see his last 12 bills or some aggregations, I don't know, the average bill amount paid for the last", "start_timestamp": "00:56:54", "end_timestamp": "00:57:40", "start_second": 3414, "end_second": 3460, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=3414s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"}
{"video_id": "OOMFR6snocY", "text": "12 months, you don't need to query the billing system; you can have it in your cache and query the cache or an in-memory store (in this case the cache is more like a memory storage). So it depends on the scenario: you try to find how the data is used and how often the data is changed on the back end, and then you make a wrapper around that that can make your life easier, where you don't need to go directly to the back end and you don't need to scale it that much. Okay, a lot of questions today; thank you Luka, thank you Ivan, for your time [Music]", "start_timestamp": "00:57:40", "end_timestamp": "00:58:37", "start_second": 3460, "end_second": 3517, "url": "https://www.youtube.com/watch?v=OOMFR6snocY&t=3460s", "title": "Building high performance and scalable architectures for enterprises\u2014Luka Samar\u017eija & Ivan Sokol", "thumbnail": "https://i.ytimg.com/vi/OOMFR6snocY/maxresdefault.jpg"}
{"video_id": "YBlNQK0Ao6g", "text": "okay, I'm sure many of you have already seen this, because it was rather widely announced, but the OpenAI team has announced a new model that produces pictures instead of text. As you can see right here, on the left you'll always see half a picture, and on the right is the ground truth. So they took this picture, they simply cut the bottom half right here, and then they let the model sort of imagine what they cut away, and what it comes up with is pretty cool, I have to say; like, look at the birds, this is", "start_timestamp": "00:00:00", "end_timestamp": "00:00:38", "start_second": 0, "end_second": 38, "url": "https://www.youtube.com/watch?v=YBlNQK0Ao6g&t=0s", "title": "Image GPT: Generative Pretraining from Pixels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YBlNQK0Ao6g/maxresdefault.jpg"} {"video_id": "YBlNQK0Ao6g", "text": "just awesome. But the special thing about this isn't that it simply completes pictures; the special thing is that it does it pixel by pixel. So basically it goes to this pixel right here and asks, okay, what's that pixel, and then what's that pixel, and then what's that pixel, and so on. So it is basically like a language model, but for pixels, in that it goes over the images 
in order, basically like this, always from left to right, left to right, left to right, and it has no clue of the spatial relations between the pixels; it needs to", "start_timestamp": "00:00:38", "end_timestamp": "00:01:24", "start_second": 38, "end_second": 84, "url": "https://www.youtube.com/watch?v=YBlNQK0Ao6g&t=38s", "title": "Image GPT: Generative Pretraining from Pixels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YBlNQK0Ao6g/maxresdefault.jpg"}
{"video_id": "YBlNQK0Ao6g", "text": "learn that by itself, as opposed to a convolutional neural network, which is specifically designed such that if you want to predict this pixel right here, it says, okay, the most important information is probably around that pixel, and then some other important information is a bit further around that pixel. So CNNs are built with this in mind, whereas this model right here, which is also known as Image GPT, doesn't have any of that; it's simply a transformer model that goes over these pixels one by one, and we'll see how", "start_timestamp": "00:01:24", "end_timestamp": "00:02:04", "start_second": 84, "end_second": 124, "url": "https://www.youtube.com/watch?v=YBlNQK0Ao6g&t=84s", "title": "Image GPT: Generative Pretraining from Pixels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YBlNQK0Ao6g/maxresdefault.jpg"}
{"video_id": "YBlNQK0Ao6g", "text": "that's done. There are some more examples right here; particularly cool is the cat, and you see that there is the beginning of this little white thing here, which is this card, and the completions of the model are very interesting. The model, as a language model, can of course also sample random images by itself; you sample them once through, and this is what it comes up with. So these are pretty good quality images for a model that just produces them one pixel at a time. Now, this idea of going pixel by pixel isn't new; this has", "start_timestamp": "00:02:04", "end_timestamp": "00:02:48", "start_second": 124, "end_second": 168, "url": "https://www.youtube.com/watch?v=YBlNQK0Ao6g&t=124s", "title": "Image GPT: Generative Pretraining from Pixels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YBlNQK0Ao6g/maxresdefault.jpg"}
{"video_id": "YBlNQK0Ao6g", "text": "been around before, but the investigation here is basically how far we can push these generative models for pre-training. Hi there, this is Yannic from post-production; I've realized that I've forgotten to even read the name of the paper. It's called Generative Pretraining from Pixels, by Mark Chen, Alec Radford, Rewon Child, Jeff Wu, Heewoo Jun, Prafulla Dhariwal, David Luan and Ilya Sutskever, and since Henry AI Labs has already made a video on this, this video is going to be more of a kind of rambling", "start_timestamp": "00:02:48", "end_timestamp": "00:03:28", "start_second": 168, "end_second": 208, "url": "https://www.youtube.com/watch?v=YBlNQK0Ao6g&t=168s", "title": "Image GPT: Generative Pretraining from Pixels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YBlNQK0Ao6g/maxresdefault.jpg"}
{"video_id": "YBlNQK0Ao6g", "text": "rant about what I find interesting about the paper and some thoughts about it, rather than a classic explanation; I hope you still enjoy that. So, what you saw on the right isn't the final result; it is simply the pre-training task. It's fun to look at, but the actual objective of the paper is the following: what if we pre-train on a large data set to generate good images like these, or to complete images like these, and then we fine-tune on a", "start_timestamp": "00:03:28", "end_timestamp": "00:04:10", "start_second": 208, "end_second": 250, "url": "https://www.youtube.com/watch?v=YBlNQK0Ao6g&t=208s", "title": "Image GPT: Generative Pretraining from Pixels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YBlNQK0Ao6g/maxresdefault.jpg"} {"video_id": 
"YBlNQK0Ao6g", "text": "classification task and the answer is here they say on C 410 we achieve 60 96.3% accuracy with a linear probe outperforming a super wide supervised the wide ResNet and the 99 cent accuracy with full fine tuning matching the top supervisor pre-trained models an even larger model trained on a mixture of imagenet and web images is competitive with self supervised benchmarks on image net achieving 72 top one accuracy on a linear probe of our features so the goal here is that you have a data set that you want to trend", "start_timestamp": "00:04:10", "end_timestamp": "00:04:54", "start_second": 250, "end_second": 294, "url": "https://www.youtube.com/watch?v=YBlNQK0Ao6g&t=250s", "title": "Image GPT: Generative Pretraining from Pixels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YBlNQK0Ao6g/maxresdefault.jpg"} {"video_id": "YBlNQK0Ao6g", "text": "like train a classifier on so usually you have a data set and the data set has images and you put them through like a convolutional neural network and then you have to classify the image into one of I don't know how many classes on C for ten that's ten classes on image and it's a thousand and the data set is these images together with these labels now the idea of pre training is that you some where have a bigger data set that is sort of similar to the small data set but it's similar enough such that the network could learn something so what", "start_timestamp": "00:04:54", "end_timestamp": "00:05:34", "start_second": 294, "end_second": 334, "url": "https://www.youtube.com/watch?v=YBlNQK0Ao6g&t=294s", "title": "Image GPT: Generative Pretraining from Pixels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YBlNQK0Ao6g/maxresdefault.jpg"} {"video_id": "YBlNQK0Ao6g", "text": "you want to do first is your first one it take the large data set terrain this network right here and then in a second step fine-tune the network on this smaller data set and you sort of hope that what you learned from 
the large data set right here transfers over a little bit of knowledge you already have a little bit of knowledge and you can make better use of the data that you have right here now the question is how do you do this pre training and of course this has a long tradition well long for maybe two or three years right", "start_timestamp": "00:05:34", "end_timestamp": "00:06:07", "start_second": 334, "end_second": 367, "url": "https://www.youtube.com/watch?v=YBlNQK0Ao6g&t=334s", "title": "Image GPT: Generative Pretraining from Pixels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YBlNQK0Ao6g/maxresdefault.jpg"} {"video_id": "YBlNQK0Ao6g", "text": "now in the language community where people they pre trained these large models like we've just seen GP t3 or Bert was one of them they pre trained these large Transformer models on text and then to fine-tune them on classification tasks for text and that's what this paper is doing right here they pre trained a transformer that is a GPT to scale model they pre train it on image generation and then they fine-tune it or transfer learn it to classification tasks and the point of the papers to say that like in text data in text data we have made", "start_timestamp": "00:06:07", "end_timestamp": "00:06:53", "start_second": 367, "end_second": 413, "url": "https://www.youtube.com/watch?v=YBlNQK0Ao6g&t=367s", "title": "Image GPT: Generative Pretraining from Pixels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YBlNQK0Ao6g/maxresdefault.jpg"} {"video_id": "YBlNQK0Ao6g", "text": "pretty good pretty good experiences with doing this with pre-training a generative model and then fine-tuning on a classification task while so far in images all we've ever done is we've pre-trained this pre training task he usually is a classification task or like a self supervised task with a contrastive loss or something like this what they're doing new is the generative modeling in the pre as a pre training and again this isn't like 
entirely new, but they show that if you throw a lot of compute at it and lots of data and a", "start_timestamp": "00:06:53", "end_timestamp": "00:07:37", "start_second": 413, "end_second": 457, "url": "https://www.youtube.com/watch?v=YBlNQK0Ao6g&t=413s", "title": "Image GPT: Generative Pretraining from Pixels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YBlNQK0Ao6g/maxresdefault.jpg"}
{"video_id": "YBlNQK0Ao6g", "text": "big model, then that can work equally as well as these self-supervised tasks. So their model, as I said, is pretty simple: they take an image and they unroll the image. Now, a fully unrolled image on, let's say, ImageNet has 224 squared pixels, and that times three, right, because you have three color channels; that's too large even for an OpenAI supercomputer. So what they do is first they downscale the image; they don't downscale it as drastically as here, where you just get a three-by-three image, but they do downscale it to like", "start_timestamp": "00:07:37", "end_timestamp": "00:08:15", "start_second": 457, "end_second": 495, "url": "https://www.youtube.com/watch?v=YBlNQK0Ao6g&t=457s", "title": "Image GPT: Generative Pretraining from Pixels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YBlNQK0Ao6g/maxresdefault.jpg"}
{"video_id": "YBlNQK0Ao6g", "text": "a 32 by 32 or a 64 by 64. Then they unroll it, which simply means they go through the image like this and make a sequence out of it; because their models are naturally made for text sequences, they simply put the image into a text sequence. They further simplify this by reducing the three color channels to a single one: they have their own color representation, and they reduce the three color channels to one channel that simply indexes the color in their color representation, and they say it's still pretty good, it's pretty faithful,", "start_timestamp": "00:08:15", "end_timestamp": "00:08:58", "start_second": 495, "end_second": 538, "url": "https://www.youtube.com/watch?v=YBlNQK0Ao6g&t=495s", "title": "Image GPT: Generative Pretraining from Pixels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YBlNQK0Ao6g/maxresdefault.jpg"}
{"video_id": "YBlNQK0Ao6g", "text": "so ultimately they end up with like a 32-squared-length representation of their image, and then they do one of two things. They either do autoregressive generative pre-training, which is the sort of GPT-2-style pre-training, and the idea here is that you always want to predict the next pixel of a sequence. So you can see right here, that's the sequence that you input, and you always want to predict what the next pixel is, and in this case you see that we've already predicted everything", "start_timestamp": "00:08:58", "end_timestamp": "00:09:41", "start_second": 538, "end_second": 581, "url": "https://www.youtube.com/watch?v=YBlNQK0Ao6g&t=538s", "title": "Image GPT: Generative Pretraining from Pixels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YBlNQK0Ao6g/maxresdefault.jpg"}
{"video_id": "YBlNQK0Ao6g", "text": "here, we've already predicted everything up to this red pixel, so you want to know: what's this next pixel, this thing right here, what's it going to be? And the diagram here basically shows you how the attention flows. So every position in this transformer (and if you don't know what a transformer is, I have made a video about Attention Is All You Need where these are explained), but briefly, every position here can send information only in one direction, so you train all of these in parallel, and when you predict", "start_timestamp": "00:09:41", "end_timestamp": "00:10:21", "start_second": 581, "end_second": 621, "url": "https://www.youtube.com/watch?v=YBlNQK0Ao6g&t=581s", "title": "Image GPT: Generative Pretraining from Pixels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YBlNQK0Ao6g/maxresdefault.jpg"} {"video_id": "YBlNQK0Ao6g", 
"text": "this pixel right here you only want information from whatever it was before that pixel otherwise the model could cheat right otherwise the model could simply learn to copy over the value but the attention pattern here is simply to show you that this is auto regressive and it's in one direction so you always want to predict the next pixel and then from all of this you want to predict the next pixel and from all of this you want to predict the next pixel this is in contrast to this objective here that comes from Bert and", "start_timestamp": "00:10:21", "end_timestamp": "00:10:54", "start_second": 621, "end_second": 654, "url": "https://www.youtube.com/watch?v=YBlNQK0Ao6g&t=621s", "title": "Image GPT: Generative Pretraining from Pixels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YBlNQK0Ao6g/maxresdefault.jpg"} {"video_id": "YBlNQK0Ao6g", "text": "I've also made a video on Bert what you do in Bert is you simply take that image and you cross a block out two of the pixels or many of the pixels and you simply ask your network to reconstruct those pixels okay and now you can see the attention flows in all direction birth the B stands actually for a bi-directional so this is the contrast to the autoregressive pre training framework now the these two things have been applied in text both the autoregressive is usually it's easier to actually make it produce something like", "start_timestamp": "00:10:54", "end_timestamp": "00:11:31", "start_second": 654, "end_second": 691, "url": "https://www.youtube.com/watch?v=YBlNQK0Ao6g&t=654s", "title": "Image GPT: Generative Pretraining from Pixels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YBlNQK0Ao6g/maxresdefault.jpg"} {"video_id": "YBlNQK0Ao6g", "text": "we saw producing these images because you can always just predict the next pixel and then the next and then the next and then the next whereas in Beart it's a bit more unclear how you would produce things in a consistent manner because the 
predictions of these two pixels right here they are independent it's one forward pass and then both of these are predicted but other papers have tried to solve this like XLNet I forget its name it's something with an X and yeah but these are the two objectives they look at and it turns", "start_timestamp": "00:11:31", "end_timestamp": "00:12:13", "start_second": 691, "end_second": 733, "url": "https://www.youtube.com/watch?v=YBlNQK0Ao6g&t=691s", "title": "Image GPT: Generative Pretraining from Pixels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YBlNQK0Ao6g/maxresdefault.jpg"} {"video_id": "YBlNQK0Ao6g", "text": "out they sort of trade off a bit they work equally well or a bit better and a bit worse depending on the task so once they have done this so they simply feed images and you will notice that you don't need any labels for this so what you'll do is simply input an image and then simply take away half of it like this and then predict that pixel and then you want to predict that pixel and then you want to predict that pixel right that's all like you do with text and in BERT you simply input an image cross out pixels and then predict them", "start_timestamp": "00:12:13", "end_timestamp": "00:12:49", "start_second": 733, "end_second": 769, "url": "https://www.youtube.com/watch?v=YBlNQK0Ao6g&t=733s", "title": "Image GPT: Generative Pretraining from Pixels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YBlNQK0Ao6g/maxresdefault.jpg"} {"video_id": "YBlNQK0Ao6g", "text": "so you don't need labels for this and that's why you can do it with this big data set and you can do it in an unsupervised fashion so you can just crawl the internet for images and just feed this in and it will sort of learn to produce these images now the question is if you learn to produce these images does that help you for classification and there they have two methods of assessing this the bottom one here is the
fine-tuning method where you simply so this is supposed to be the", "start_timestamp": "00:12:49", "end_timestamp": "00:13:26", "start_second": 769, "end_second": 806, "url": "https://www.youtube.com/watch?v=YBlNQK0Ao6g&t=769s", "title": "Image GPT: Generative Pretraining from Pixels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YBlNQK0Ao6g/maxresdefault.jpg"} {"video_id": "YBlNQK0Ao6g", "text": "representation you learn in the different layers of the network so this is supposed to be this thing right here what you'll do is you'll simply fine-tune that means on top of this representation you add a classification head that has two outputs cat or dog and you train this entire network on your small data set that we discussed before so you train the entire network all of the parameters this is called fine-tuning in contrast to that what you can do is and this is the easy way you can simply add this", "start_timestamp": "00:13:26", "end_timestamp": "00:14:03", "start_second": 806, "end_second": 843, "url": "https://www.youtube.com/watch?v=YBlNQK0Ao6g&t=806s", "title": "Image GPT: Generative Pretraining from Pixels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YBlNQK0Ao6g/maxresdefault.jpg"} {"video_id": "YBlNQK0Ao6g", "text": "classification head with two outputs and then only train this classification head and that won't perform as well but it gives you sort of a better idea of how good is the representation that this network right here learned and on top of that so if you spin this idea further you can actually go and do this at any intermediate layer right here so you can forward propagate until layer two right here and then here you add your classification head into the two classes and you only train the classification head that being said you", "start_timestamp": "00:14:03", "end_timestamp": "00:14:42", "start_second": 843, "end_second": 882, "url": "https://www.youtube.com/watch?v=YBlNQK0Ao6g&t=843s",
"title": "Image GPT: Generative Pretraining from Pixels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YBlNQK0Ao6g/maxresdefault.jpg"} {"video_id": "YBlNQK0Ao6g", "text": "can also do this with fine-tuning but in this case this is called a linear probe and it is often used to assess how good a representation in intermediate layers is whereas what it actually does is assessing how linearly classifiable a representation is which isn't the same as how useful or how informative but it is one way to assess these things okay so these are the two things they assess alright as for data sets they use CIFAR-10 and CIFAR-100 as data sets and STL-10 and there you have to keep in mind the pre", "start_timestamp": "00:14:42", "end_timestamp": "00:15:25", "start_second": 882, "end_second": 925, "url": "https://www.youtube.com/watch?v=YBlNQK0Ao6g&t=882s", "title": "Image GPT: Generative Pretraining from Pixels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YBlNQK0Ao6g/maxresdefault.jpg"} {"video_id": "YBlNQK0Ao6g", "text": "training is done on ImageNet for those so that you pre-train on ImageNet without the labels and then you transfer learn or fine-tune or linear probe on these small data sets whereas later we're going to look at ImageNet and there the pre-training as I understand it is done on ImageNet itself but also a wider collection of a hundred million or so images from the web from the internet okay so as you can see right here this is what happens if you do this linear probing and you can see it works pretty well so you get like", "start_timestamp": "00:15:25", "end_timestamp": "00:16:11", "start_second": 925, "end_second": 971, "url": "https://www.youtube.com/watch?v=YBlNQK0Ao6g&t=925s", "title": "Image GPT: Generative Pretraining from Pixels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YBlNQK0Ao6g/maxresdefault.jpg"} {"video_id": "YBlNQK0Ao6g", "text": "a ninety-five ninety-six percent accuracy with
linear probes this is very powerful so it's not easy to get 96 percent on CIFAR-10 I mean current state of the art is like ninety nine percent but still 96 percent is pretty good and this is the entire network there is this big giant network that you input your image into and then there is this one linear layer that does the classification and all of this right here has not been trained with classification in mind it simply has been trained to reproduce images it", "start_timestamp": "00:16:11", "end_timestamp": "00:16:52", "start_second": 971, "end_second": 1012, "url": "https://www.youtube.com/watch?v=YBlNQK0Ao6g&t=971s", "title": "Image GPT: Generative Pretraining from Pixels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YBlNQK0Ao6g/maxresdefault.jpg"} {"video_id": "YBlNQK0Ao6g", "text": "hasn't even been trained on CIFAR-10 as far as I understand it has been trained on ImageNet so this is to stress how cool or how significant this result is basically that just a linear probe on top of that will give you such a good accuracy and the second thing that is obvious right here is this bottom axis is the layer so this is the layer where they attach the linear probe and usually if you pre-train a network with a classification task in mind so you pre-train it with the labels or maybe even without the labels in a self-supervised", "start_timestamp": "00:16:52", "end_timestamp": "00:17:33", "start_second": 1012, "end_second": 1053, "url": "https://www.youtube.com/watch?v=YBlNQK0Ao6g&t=1012s", "title": "Image GPT: Generative Pretraining from Pixels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YBlNQK0Ao6g/maxresdefault.jpg"} {"video_id": "YBlNQK0Ao6g", "text": "way or something like this usually the last layer has the best representation for classification but here the special thing is that the intermediate layers in the middle have the best representation you can see that representation quality in terms of linear probing falls off
as you go into higher layers and this is consistent across the datasets as you can see and the idea here or the way they interpret it is that if you have an image right here da da da da and you've blocked part of it", "start_timestamp": "00:17:33", "end_timestamp": "00:18:19", "start_second": 1053, "end_second": 1099, "url": "https://www.youtube.com/watch?v=YBlNQK0Ao6g&t=1053s", "title": "Image GPT: Generative Pretraining from Pixels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YBlNQK0Ao6g/maxresdefault.jpg"} {"video_id": "YBlNQK0Ao6g", "text": "so you've blocked this and this wrong way around so you've generated everything and now your task is to predict the next pixel right so you train to predict this next pixel right here and the idea is that as you put the image through the network the first layers they're going to be similar to a CNN they're going to be doing some low-level feature transformation thing right but also the last layers they're going to really care about what's the exact", "start_timestamp": "00:18:19", "end_timestamp": "00:19:07", "start_second": 1099, "end_second": 1147, "url": "https://www.youtube.com/watch?v=YBlNQK0Ao6g&t=1099s", "title": "Image GPT: Generative Pretraining from Pixels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YBlNQK0Ao6g/maxresdefault.jpg"} {"video_id": "YBlNQK0Ao6g", "text": "pixel that goes here right since it's their job to do that they're going to care what color does it need to have you know what exact luminosity and so on how does it fit in with the previous pixels and so on whereas so that's also good but it's not just low-level information and consistency with other pixels or something like this at some point if you want to generate consistent images and we saw that this model can generate consistent images at some point there needs to be some kind of a notion of the
global information in the picture", "start_timestamp": "00:19:07", "end_timestamp": "00:19:47", "start_second": 1147, "end_second": 1187, "url": "https://www.youtube.com/watch?v=YBlNQK0Ao6g&t=1147s", "title": "Image GPT: Generative Pretraining from Pixels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YBlNQK0Ao6g/maxresdefault.jpg"} {"video_id": "YBlNQK0Ao6g", "text": "right because the images are consistent throughout so there needs to be some notion of what is in that image as a whole and that's the exact information that we need for classification and the only way that could actually be is here in the middle since you know that's the place so the hypothesis is that these models somehow learn a higher-level representation of global information somewhere in the middle before they then specify that information again down to predict the actual pixel and that's why", "start_timestamp": "00:19:47", "end_timestamp": "00:20:24", "start_second": 1187, "end_second": 1224, "url": "https://www.youtube.com/watch?v=YBlNQK0Ao6g&t=1187s", "title": "Image GPT: Generative Pretraining from Pixels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YBlNQK0Ao6g/maxresdefault.jpg"} {"video_id": "YBlNQK0Ao6g", "text": "the best representations for classification are in the middle so this is actually the interesting finding or one of the interesting findings of this paper I mean it's cool that they can reach a good accuracy but to recognize that maybe in these generative models they have some intermediate stage where they represent the global information and that will actually make the best representation okay the second cool thing right here is that you can see they have different sizes of models so the iGPT-L I believe", "start_timestamp": "00:20:24", "end_timestamp": "00:21:04", "start_second": 1224, "end_second": 1264, "url": "https://www.youtube.com/watch?v=YBlNQK0Ao6g&t=1224s", "title": "Image GPT: Generative Pretraining from Pixels
(Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YBlNQK0Ao6g/maxresdefault.jpg"} {"video_id": "YBlNQK0Ao6g", "text": "is something like sixty layers then this is like 48 layers and this is 32 layers so these are all on the scale of GPT-2 either a little bigger or a little smaller it's not like a GPT-3 scale where you need a ginormous supercomputer though they do a lot of computation but this still sort of fits within hardware of a standard size and not like exascale what's interesting right here is that you can see the larger models they reach a lower validation loss so here is the validation loss larger model if you", "start_timestamp": "00:21:04", "end_timestamp": "00:21:47", "start_second": 1264, "end_second": 1307, "url": "https://www.youtube.com/watch?v=YBlNQK0Ao6g&t=1264s", "title": "Image GPT: Generative Pretraining from Pixels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YBlNQK0Ao6g/maxresdefault.jpg"} {"video_id": "YBlNQK0Ao6g", "text": "train them on so these checkpoints here are always after the same amount of steps the larger models do reach a lower validation loss right here as you can see so this is the large this is the medium this is the small and also you can see that on this axis is the linear probe accuracy so this is whenever you go and you find the best intermediate layer for linear probing you probe it and you record the accuracy so you can see a general trend as your validation loss goes down the linear probe accuracy goes up so there", "start_timestamp": "00:21:47", "end_timestamp": "00:22:23", "start_second": 1307, "end_second": 1343, "url": "https://www.youtube.com/watch?v=YBlNQK0Ao6g&t=1307s", "title": "Image GPT: Generative Pretraining from Pixels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YBlNQK0Ao6g/maxresdefault.jpg"} {"video_id": "YBlNQK0Ao6g", "text": "is a connection like it is in text models in text models there's a connection between the perplexity of your
language model and the quality of the representation you get for downstream tasks in this model it seems to be the exact same thing there is a connection between reaching a lower validation loss and reaching a higher performance on classification so that's one interesting thing the general trend up to the upper right corner the other arguably even more interesting thing is that if you look at", "start_timestamp": "00:22:23", "end_timestamp": "00:23:00", "start_second": 1343, "end_second": 1380, "url": "https://www.youtube.com/watch?v=YBlNQK0Ao6g&t=1343s", "title": "Image GPT: Generative Pretraining from Pixels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YBlNQK0Ao6g/maxresdefault.jpg"} {"video_id": "YBlNQK0Ao6g", "text": "the same validation loss so at this point all of these models have the same validation loss yet still the bigger model is better right you can see right here the bigger model outperforms the smaller model even though they have the same validation loss on the image modeling task and this is also something that OpenAI in their text papers have stressed that the larger models they seem to be somehow more capable of forming good representations even if they have the same loss so again this could just be", "start_timestamp": "00:23:00", "end_timestamp": "00:23:41", "start_second": 1380, "end_second": 1421, "url": "https://www.youtube.com/watch?v=YBlNQK0Ao6g&t=1380s", "title": "Image GPT: Generative Pretraining from Pixels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YBlNQK0Ao6g/maxresdefault.jpg"} {"video_id": "YBlNQK0Ao6g", "text": "sort of a training data remembering thing and when I said that in GPT-3 I didn't actually mean explicit remembering of training data I meant a kind of a fuzzy remembering of training data I formulated that in the comments but I feel a lot of people have misunderstood me there here I think it's much
harder to estimate what's going on also since image pixels humans don't have a super good model of image pixels in their head as we have about text as you can see if you then fine-tune so for now", "start_timestamp": "00:23:41", "end_timestamp": "00:24:20", "start_second": 1421, "end_second": 1460, "url": "https://www.youtube.com/watch?v=YBlNQK0Ao6g&t=1421s", "title": "Image GPT: Generative Pretraining from Pixels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YBlNQK0Ao6g/maxresdefault.jpg"} {"video_id": "YBlNQK0Ao6g", "text": "we've just done linear probing if you fine-tune these architectures then you reach like a 99% accuracy on CIFAR-10 which is on par with the best models that we have so GPipe is supervised pre-trained on ImageNet but also I guess uses a bunch of data augmentation while this Image GPT uses minimal data augmentation I think they simply random crop a little bit and that's about it so they also experiment around with this BERT objective so until now this was all the autoregressive objective and I feel the OpenAI people", "start_timestamp": "00:24:20", "end_timestamp": "00:25:11", "start_second": 1460, "end_second": 1511, "url": "https://www.youtube.com/watch?v=YBlNQK0Ao6g&t=1460s", "title": "Image GPT: Generative Pretraining from Pixels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YBlNQK0Ao6g/maxresdefault.jpg"} {"video_id": "YBlNQK0Ao6g", "text": "are a bit more of a fan of the autoregressive objective just given what they've done so far in their papers and you can see here a comparison of the two objectives on CIFAR-10 and on ImageNet again CIFAR-10 is pre-trained with ImageNet and ImageNet itself is pre-trained with like a larger collection of images from the web all the pre-training is done without labels now the blue is what you can reach with a linear probe and the orange is then on top of that what you can reach by fine-tuning okay so no linear probe fine-tuning oh I have to say",
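The linear probe evaluation this transcript keeps returning to (freeze the pre-trained features, train only a linear head on top) can be sketched in a few lines; this is a minimal illustration, not the paper's code, and the random toy features standing in for frozen representations are invented for the example (numpy only):

```python
import numpy as np

# Toy stand-in for frozen representations from a pre-trained network:
# two classes whose feature vectors differ in their mean. In the paper's
# setting these vectors would come from an intermediate transformer layer.
rng = np.random.default_rng(0)
n, d = 200, 16
feats_a = rng.normal(loc=+1.0, size=(n, d))
feats_b = rng.normal(loc=-1.0, size=(n, d))
X = np.vstack([feats_a, feats_b])          # frozen features, never updated
y = np.array([0] * n + [1] * n)            # labels, e.g. "cat" vs "dog"

# Linear probe: fit ONLY a linear map on top of the frozen features,
# here via least squares on one-hot targets (a simple closed-form probe).
Y = np.eye(2)[y]
X1 = np.hstack([X, np.ones((2 * n, 1))])   # add a bias column
W, *_ = np.linalg.lstsq(X1, Y, rcond=None)

pred = (X1 @ W).argmax(axis=1)
accuracy = (pred == y).mean()
```

The point of probing rather than fine-tuning is exactly what the transcript says: because the features stay frozen, the accuracy measures how linearly classifiable the representation already is, at whichever layer you attach the probe.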
"start_timestamp": "00:25:11", "end_timestamp": "00:25:53", "start_second": 1511, "end_second": 1553, "url": "https://www.youtube.com/watch?v=YBlNQK0Ao6g&t=1511s", "title": "Image GPT: Generative Pretraining from Pixels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YBlNQK0Ao6g/maxresdefault.jpg"} {"video_id": "YBlNQK0Ao6g", "text": "that the fine-tuning is always done at the end so even though the linear probe can be attached anywhere in between and it's often useful to do that as we saw because the in-between layers are the best they say they tried fine-tuning also from in between but it always worked out best whenever you fine-tune you take actually the last layer so that kind of gives you an idea that what seems to be important is this coming up with the higher-level representation and", "start_timestamp": "00:25:53", "end_timestamp": "00:26:33", "start_second": 1553, "end_second": 1593, "url": "https://www.youtube.com/watch?v=YBlNQK0Ao6g&t=1553s", "title": "Image GPT: Generative Pretraining from Pixels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YBlNQK0Ao6g/maxresdefault.jpg"} {"video_id": "YBlNQK0Ao6g", "text": "then once you fine-tune you're probably able to push that representation through to the end because of your training signal but if you hadn't done the pre-training you wouldn't even have that higher-level representation and then the signal I guess is not strong enough to backpropagate through the whole model it would be very interesting if they do this linear probe analysis again after they fine-tune the model and to see if then still it is the intermediate layers that have the best representation or if now the best", "start_timestamp": "00:26:33", "end_timestamp": "00:27:12", "start_second": 1593, "end_second": 1632, "url": "https://www.youtube.com/watch?v=YBlNQK0Ao6g&t=1593s", "title": "Image GPT: Generative
Pretraining from Pixels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YBlNQK0Ao6g/maxresdefault.jpg"} {"video_id": "YBlNQK0Ao6g", "text": "representation in a linear probe sense shifted towards the end I'm gonna guess it's shifted towards the end but I sort of want to even see if the accuracy of the linear probe in the middle does it keep the same right so does the curve go like this this is the linear probe when you simply pre-trained right this is linear probe accuracy the question would be does it change to be like this or does it change to be like this this is supposed to be the same at the end so basically does it stay as good as it is but simply get better at", "start_timestamp": "00:27:12", "end_timestamp": "00:27:54", "start_second": 1632, "end_second": 1674, "url": "https://www.youtube.com/watch?v=YBlNQK0Ao6g&t=1632s", "title": "Image GPT: Generative Pretraining from Pixels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YBlNQK0Ao6g/maxresdefault.jpg"} {"video_id": "YBlNQK0Ao6g", "text": "the end or does the representation like in this curve does the good representation now shift towards the end and leave the lower layer with even more capacity to do some low-level stuff yeah maybe they've done this I haven't seen it so and as you can see these BERT and autoregressive objectives sort of trade off so the BERT tends to do poorly in the linear probe setting but then it catches up during fine-tuning on CIFAR-10 almost being at the level of the autoregressive and on ImageNet actually outperforming it this", "start_timestamp": "00:27:54", "end_timestamp": "00:28:35", "start_second": 1674, "end_second": 1715, "url": "https://www.youtube.com/watch?v=YBlNQK0Ao6g&t=1674s", "title": "Image GPT: Generative Pretraining from Pixels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YBlNQK0Ao6g/maxresdefault.jpg"} {"video_id": "YBlNQK0Ao6g", "text": "darker thing here it simply means that you average across different maskings of
BERT because I guess even in classification it's not entirely clear how to get a signal out of BERT because they don't do this CLS vector with BERT what they do for classification and linear probing and it's written up here they simply do an average pooling I think of all the representations of the sequence and the last thing that I've also forgotten there's a lot of stuff when they fine", "start_timestamp": "00:28:35", "end_timestamp": "00:29:18", "start_second": 1715, "end_second": 1758, "url": "https://www.youtube.com/watch?v=YBlNQK0Ao6g&t=1715s", "title": "Image GPT: Generative Pretraining from Pixels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YBlNQK0Ao6g/maxresdefault.jpg"} {"video_id": "YBlNQK0Ao6g", "text": "tune while fine-tuning the classification loss yields reasonable downstream performance we find empirically that the joint objective the generative objective and the classification objective works even better okay so even when you fine-tune with this model you have to keep the generative modeling part the generative loss around and then it performs even more better more well whatever that word is so that's also something to think about I think this paper right here kind of lays down a lot of cool", "start_timestamp": "00:29:18", "end_timestamp": "00:30:05", "start_second": 1758, "end_second": 1805, "url": "https://www.youtube.com/watch?v=YBlNQK0Ao6g&t=1758s", "title": "Image GPT: Generative Pretraining from Pixels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YBlNQK0Ao6g/maxresdefault.jpg"} {"video_id": "YBlNQK0Ao6g", "text": "things that you can think about and it gives rise to a lot of hypotheses of how does this stuff work why does this stuff work I don't even think that the numbers are the most important thing it's mostly the effects and what they mean okay so this was my take on it it's more kind of my rant of what I find
special about this paper than about the actual paper you can look at the paper their numbers are pretty good on ImageNet they do not reach the same like super duper performance as they do on CIFAR-10 and I", "start_timestamp": "00:30:05", "end_timestamp": "00:30:47", "start_second": 1805, "end_second": 1847, "url": "https://www.youtube.com/watch?v=YBlNQK0Ao6g&t=1805s", "title": "Image GPT: Generative Pretraining from Pixels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YBlNQK0Ao6g/maxresdefault.jpg"} {"video_id": "YBlNQK0Ao6g", "text": "guess that's probably because they have to downscale the ImageNet images way more than they have to downscale the CIFAR-10 images because those are of course only 32 by 32 so because they have to downscale so much they lose probably a lot of information and I would be interested to see if there is a way to involve convolution in all of this so to do the downscaling in a learned manner with convolutions or something I'm sure this has all been done already I'm just too lazy to look it up yeah so I invite you", "start_timestamp": "00:30:47", "end_timestamp": "00:31:25", "start_second": 1847, "end_second": 1885, "url": "https://www.youtube.com/watch?v=YBlNQK0Ao6g&t=1847s", "title": "Image GPT: Generative Pretraining from Pixels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YBlNQK0Ao6g/maxresdefault.jpg"} {"video_id": "T7o3xvJLuHk", "text": "convolution is a measure of overlap between two functions as one slides over the other mathematically it's a sum of products the standard convolution operation is slow to perform however we can speed this up with an alternative method that is the topic of this video depthwise separable convolution let's first very quickly go over the basics of convolution on an input volume consider an input volume F of shape DF cross DF cross M where DF is the width and height of the input volume and M is the number of input channels if a color image was", "start_timestamp": "00:00:00",
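The "sum of products as one slides over the other" definition opening this transcript can be made concrete with a minimal single-channel sketch; the toy input and kernel values are invented for illustration (stride 1, no padding, so a DF x DF input and DK x DK kernel give DG = DF - DK + 1):

```python
# Minimal "convolution as a sum of products" sketch for one channel.
def conv2d_single_channel(inp, kernel):
    df, dk = len(inp), len(kernel)
    dg = df - dk + 1                      # output width/height
    out = [[0] * dg for _ in range(dg)]
    for i in range(dg):
        for j in range(dg):
            # sum of elementwise products as the kernel slides over the input
            out[i][j] = sum(
                inp[i + a][j + b] * kernel[a][b]
                for a in range(dk)
                for b in range(dk)
            )
    return out

inp = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
kernel = [[1, 0], [0, 1]]                 # picks out a diagonal pair
out = conv2d_single_channel(inp, kernel)  # 2x2, since DG = 3 - 2 + 1
```

Real input volumes have M channels and the kernel has matching depth, as the transcript describes next; this sketch only shows the sliding sum of products on one channel.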
"end_timestamp": "00:00:40", "start_second": 0, "end_second": 40, "url": "https://www.youtube.com/watch?v=T7o3xvJLuHk&t=0s", "title": "Depthwise Separable Convolution - A FASTER CONVOLUTION!", "thumbnail": "https://i.ytimg.com/vi/T7o3xvJLuHk/maxresdefault.jpg"} {"video_id": "T7o3xvJLuHk", "text": "an input then M would be equal to 3 for the R G and B channels we apply convolution with a kernel K of shape DK cross DK cross M this will give us an output of shape DG cross DG cross 1 if we apply n such kernels on the input then we get an output volume G of shape DG cross DG cross n the convolution operation takes the sum of products of the input and the kernel to return a scalar this operation is continued by sliding the kernel over the input I've explained this concept in detail in my video on convolutional neural networks", "start_timestamp": "00:00:40", "end_timestamp": "00:01:22", "start_second": 40, "end_second": 82, "url": "https://www.youtube.com/watch?v=T7o3xvJLuHk&t=40s", "title": "Depthwise Separable Convolution - A FASTER CONVOLUTION!", "thumbnail": "https://i.ytimg.com/vi/T7o3xvJLuHk/maxresdefault.jpg"} {"video_id": "T7o3xvJLuHk", "text": "check that out for a clear understanding I'm more concerned now with the cost of this convolution operation so let's take a look at that we can measure the computation required for convolution by taking a look at the number of multiplications required so why is that it's because multiplication is an expensive operation relative to addition so let's determine the number of multiplications for one convolution operation the number of multiplications is the number of elements in that kernel so that would be DK times DK times M", "start_timestamp": "00:01:22", "end_timestamp": "00:02:02", "start_second": 82, "end_second": 122, "url": "https://www.youtube.com/watch?v=T7o3xvJLuHk&t=82s", "title": "Depthwise Separable Convolution - A FASTER CONVOLUTION!", "thumbnail": "https://i.ytimg.com/vi/T7o3xvJLuHk/maxresdefault.jpg"}
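The multiplication counts this transcript derives (standard convolution versus the two-stage depthwise separable alternative) can be written out directly; a small sketch in the transcript's notation, where DG, M are illustrative numbers and N = 1024, DK = 3 are the transcript's own example values:

```python
# Multiplication counts in the transcript's notation: DG output size,
# DK kernel size, M input channels, n output channels (kernels).
def standard_conv_muls(dg, dk, m, n):
    # each output position costs DK*DK*M products, over DG*DG positions,
    # for n kernels
    return n * dg * dg * dk * dk * m

def depthwise_separable_muls(dg, dk, m, n):
    depthwise = m * dg * dg * dk * dk    # filtering stage, one kernel/channel
    pointwise = n * dg * dg * m          # 1x1 combining stage, n kernels
    return depthwise + pointwise

dg, dk, m, n = 32, 3, 64, 1024           # dg and m are illustrative
ratio = depthwise_separable_muls(dg, dk, m, n) / standard_conv_muls(dg, dk, m, n)
# the DG*DG*M factor cancels, leaving ratio = 1/n + 1/DK**2,
# i.e. 1/1024 + 1/9, the transcript's 0.112 example
```

Because the ratio collapses to 1/n + 1/DK², it is independent of the spatial size and input depth, which is why the transcript can quote a single number for the speedup.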
{"video_id": "T7o3xvJLuHk", "text": "multiplications but we slide this kernel over the input we perform DG convolutions along the width and DG convolutions along the height and hence DG cross DG convolutions overall so the number of multiplications in the convolution of one kernel over the entire input F is DG squared times DK squared times M now this is for just one kernel but if we have n such kernels the absolute total number of multiplications becomes n times DG squared times DK squared times M multiplications let's now take a look at", "start_timestamp": "00:02:02", "end_timestamp": "00:02:45", "start_second": 122, "end_second": 165, "url": "https://www.youtube.com/watch?v=T7o3xvJLuHk&t=122s", "title": "Depthwise Separable Convolution - A FASTER CONVOLUTION!", "thumbnail": "https://i.ytimg.com/vi/T7o3xvJLuHk/maxresdefault.jpg"} {"video_id": "T7o3xvJLuHk", "text": "depthwise separable convolutions in standard convolution the application of filters across all input channels and the combination of these values are done in a single step depthwise separable convolution on the other hand breaks this down into two parts the first is depthwise convolution that is it performs the filtering stage and then pointwise convolution which performs the combining stage let's get into some details here depthwise convolution applies convolution to a single input channel at a time this is different from the", "start_timestamp": "00:02:45", "end_timestamp": "00:03:21", "start_second": 165, "end_second": 201, "url": "https://www.youtube.com/watch?v=T7o3xvJLuHk&t=165s", "title": "Depthwise Separable Convolution - A FASTER CONVOLUTION!", "thumbnail": "https://i.ytimg.com/vi/T7o3xvJLuHk/maxresdefault.jpg"} {"video_id": "T7o3xvJLuHk", "text": "standard convolution that applies convolution to all channels let us take the same input volume F to understand this process F has a shape DF cross DF cross M where DF is the width and height of the input volume and M is the number of
input channels like I mentioned before for depthwise convolution we use filters or kernels K of shape DK cross DK cross one here DK is the width and height of the square kernel and it has a depth of 1 because this convolution is only applied to a channel unlike standard convolution", "start_timestamp": "00:03:21", "end_timestamp": "00:03:59", "start_second": 201, "end_second": 239, "url": "https://www.youtube.com/watch?v=T7o3xvJLuHk&t=201s", "title": "Depthwise Separable Convolution - A FASTER CONVOLUTION!", "thumbnail": "https://i.ytimg.com/vi/T7o3xvJLuHk/maxresdefault.jpg"} {"video_id": "T7o3xvJLuHk", "text": "which is applied throughout the entire depth and since we apply one kernel to a single input channel we require M such DK cross DK cross one kernels over the entire input volume F for each of these M convolutions we end up with an output DG cross DG cross one in shape now stacking these outputs together we have an output volume G which is of shape DG cross DG cross M this is the end of the first phase that is the end of depthwise convolution now this is succeeded by pointwise convolution pointwise convolution involves performing the
convolution we can split this into two", "start_timestamp": "00:04:46", "end_timestamp": "00:05:33", "start_second": 286, "end_second": 333, "url": "https://www.youtube.com/watch?v=T7o3xvJLuHk&t=286s", "title": "Depthwise Separable Convolution - A FASTER CONVOLUTION!", "thumbnail": "https://i.ytimg.com/vi/T7o3xvJLuHk/maxresdefault.jpg"} {"video_id": "T7o3xvJLuHk", "text": "parts as we have two phases first we compute the number of multiplications in depth wise convolution so here the kernels have a shape DK cross D K cross 1 so the number of multiplications on one convolution operation is all DK times DK DK square when applied over the entire input channel this convolution is performed DG x DG number of times so the number of multiplications for the kernel over the input channel becomes DG square times DK square now such multiplications are applied over all em input channels for each", "start_timestamp": "00:05:33", "end_timestamp": "00:06:15", "start_second": 333, "end_second": 375, "url": "https://www.youtube.com/watch?v=T7o3xvJLuHk&t=333s", "title": "Depthwise Separable Convolution - A FASTER CONVOLUTION!", "thumbnail": "https://i.ytimg.com/vi/T7o3xvJLuHk/maxresdefault.jpg"} {"video_id": "T7o3xvJLuHk", "text": "channel we have a different kernel and hence the total number of multiplications in the first phase that is depth wise convolution is M times D G square times D K square next we compute the number of multiplications in the second phase that is point wise convolution here the kernels have a shape one cross one cross M where m is the depth of the input volume and hence the number of multiplications for one instance of convolution is M this is applied to the entire output of the first phase which has a width and height", "start_timestamp": "00:06:15", "end_timestamp": "00:06:54", "start_second": 375, "end_second": 414, "url": "https://www.youtube.com/watch?v=T7o3xvJLuHk&t=375s", "title": "Depthwise Separable Convolution - A FASTER CONVOLUTION!", "thumbnail": 
"https://i.ytimg.com/vi/T7o3xvJLuHk/maxresdefault.jpg"} {"video_id": "T7o3xvJLuHk", "text": "of D G so the total number of multiplications for this kernel is d G times D G times M so for some n kernels will have n times D G times D G times M such multiplications and thus the total number of multiplications is the sum of multiplications in the depth wise convolution stage plus the number of multiplications in the point-wise convolution stage we can take M times D G squared common now we compare the standard convolution with depth wise convolution we get the ratio as the sum of reciprocal of the depth of output", "start_timestamp": "00:06:54", "end_timestamp": "00:07:35", "start_second": 414, "end_second": 455, "url": "https://www.youtube.com/watch?v=T7o3xvJLuHk&t=414s", "title": "Depthwise Separable Convolution - A FASTER CONVOLUTION!", "thumbnail": "https://i.ytimg.com/vi/T7o3xvJLuHk/maxresdefault.jpg"} {"video_id": "T7o3xvJLuHk", "text": "volume that is n and the reciprocal of the squared dimensions of the kernel DK to put this into perspective of how effective depth wise convolution is let us take an example so consider the output feature volume n of 1024 and a kernel of size 3 that's DK is equal to 3 plugging these values into the relation we get zero point 1 1 2 in other words standard convolution has 9 times more the number of multiplications as that of depth Y separable convolution this is a lot of computing power we can also quickly compare the number of parameters", "start_timestamp": "00:07:35", "end_timestamp": "00:08:16", "start_second": 455, "end_second": 496, "url": "https://www.youtube.com/watch?v=T7o3xvJLuHk&t=455s", "title": "Depthwise Separable Convolution - A FASTER CONVOLUTION!", "thumbnail": "https://i.ytimg.com/vi/T7o3xvJLuHk/maxresdefault.jpg"} {"video_id": "T7o3xvJLuHk", "text": "in both convolutions in standard convolution each kernel has k times D K times M learn about parameters since there are n such kernels there are n times M times D K 
squared parameters in depthwise separable convolutions we will split this once again into two parts in the depthwise convolution phase we use M kernels of shape DK cross DK in pointwise convolution we use N kernels of shape 1 cross 1 cross M so the total is M times DK squared plus M times N or we can just take M common taking the ratio we get the same ratio as we did for", "start_timestamp": "00:08:16", "end_timestamp": "00:09:02", "start_second": 496, "end_second": 542, "url": "https://www.youtube.com/watch?v=T7o3xvJLuHk&t=496s", "title": "Depthwise Separable Convolution - A FASTER CONVOLUTION!", "thumbnail": "https://i.ytimg.com/vi/T7o3xvJLuHk/maxresdefault.jpg"} {"video_id": "T7o3xvJLuHk", "text": "computational power required so we understood exactly what depthwise separable convolution is and also its computational cost with respect to the traditional standard convolution but where exactly has this been used well there are some very interesting papers here the first is on MultiModel neural networks these are networks designed to solve multiple problems using a single network a MultiModel network has four parts the first is modality nets to convert different input types to a universal internal representation then we have an encoder", "start_timestamp": "00:09:02", "end_timestamp": "00:09:41", "start_second": 542, "end_second": 581, "url": "https://www.youtube.com/watch?v=T7o3xvJLuHk&t=542s", "title": "Depthwise Separable Convolution - A FASTER CONVOLUTION!", "thumbnail": "https://i.ytimg.com/vi/T7o3xvJLuHk/maxresdefault.jpg"} {"video_id": "T7o3xvJLuHk", "text": "to process inputs we have a mixer to encode inputs with previous outputs and we have a decoder to generate outputs a fundamental component of each of these parts is depthwise separable convolution it works effectively in such large networks next up we have Xception a convolutional neural network architecture based entirely on depthwise separable convolution layers it has shown state-of-the-art performance on
large datasets like Google's JFT image dataset it's a repository of 350 million images with 17,000 class labels", "start_timestamp": "00:09:41", "end_timestamp": "00:10:22", "start_second": 581, "end_second": 622, "url": "https://www.youtube.com/watch?v=T7o3xvJLuHk&t=581s", "title": "Depthwise Separable Convolution - A FASTER CONVOLUTION!", "thumbnail": "https://i.ytimg.com/vi/T7o3xvJLuHk/maxresdefault.jpg"} {"video_id": "T7o3xvJLuHk", "text": "to put this into perspective the popular ImageNet took 3 days to train however to train even a subset of this JFT dataset it took a month and it didn't even converge in fact it would have approximately taken about three months to converge had they let it run to its full length so that's useful this paper is pushing convolutional neural networks to use depthwise separable convolution as the de facto op third we have MobileNets a neural network architecture that strives to minimize latency of smaller scale networks so", "start_timestamp": "00:10:22", "end_timestamp": "00:10:58", "start_second": 622, "end_second": 658, "url": "https://www.youtube.com/watch?v=T7o3xvJLuHk&t=622s", "title": "Depthwise Separable Convolution - A FASTER CONVOLUTION!", "thumbnail": "https://i.ytimg.com/vi/T7o3xvJLuHk/maxresdefault.jpg"} {"video_id": "T7o3xvJLuHk", "text": "that computer vision applications run well on mobile devices MobileNets uses depthwise separable convolutions in its 28 layer architecture this paper compares the performance of MobileNets with standard convolution layers versus depthwise separable convolution layers it turns out the accuracy on ImageNet only drops by 1% while using significantly fewer parameters from 29.3 million parameters it's down to just 4.2 million we can see the mult-adds the number of multiplications and additions which is a", "start_timestamp": "00:10:58", "end_timestamp": "00:11:35", "start_second": 658, "end_second": 695, "url":
"https://www.youtube.com/watch?v=T7o3xvJLuHk&t=658s", "title": "Depthwise Separable Convolution - A FASTER CONVOLUTION!", "thumbnail": "https://i.ytimg.com/vi/T7o3xvJLuHk/maxresdefault.jpg"} {"video_id": "T7o3xvJLuHk", "text": "direct measure of computation has also significantly decreased for depth by separable convolution mobile Nets so here are some things to remember in this video depth Y separable convolution decreases the computation and number of parameters when compared to standard convolution second is that depth Y separable convolution is a combination of depth wise convolution followed by a point wise convolution depth wise convolution is the filtering step and point wise convolution can be thought of as the combination step", "start_timestamp": "00:11:35", "end_timestamp": "00:12:09", "start_second": 695, "end_second": 729, "url": "https://www.youtube.com/watch?v=T7o3xvJLuHk&t=695s", "title": "Depthwise Separable Convolution - A FASTER CONVOLUTION!", "thumbnail": "https://i.ytimg.com/vi/T7o3xvJLuHk/maxresdefault.jpg"} {"video_id": "T7o3xvJLuHk", "text": "finally they have been successfully implemented in neural network architectures like multi model networks exception and mobile nets and that's all I have for you now thank you all for stopping by today if you liked the video hit that like button if you want to stick around hit that subscribe button if you really want to stick around hit that Bell icon next to the subscribe button so as to be notified of my uploads immediately links to important papers are down below so check them out have a good day and I'll see you in the", "start_timestamp": "00:12:09", "end_timestamp": "00:12:40", "start_second": 729, "end_second": 760, "url": "https://www.youtube.com/watch?v=T7o3xvJLuHk&t=729s", "title": "Depthwise Separable Convolution - A FASTER CONVOLUTION!", "thumbnail": "https://i.ytimg.com/vi/T7o3xvJLuHk/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "i-it's it's a wonderful to be here I have been 
remiss in that I have not been to Prague for a decade so it's wonderful to be back in Prague and it's wonderful to be in this fancy new institute and so I think because there are various types of people here there are some vision people some graphics people and some others that do learning I'm going to give you kind of an overview of some of the stuff that we have been doing but not go into too much detail because that might be too boring for others and", "start_timestamp": "00:00:00", "end_timestamp": "00:00:43", "start_second": 0, "end_second": 43, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=0s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "maybe too much for some but I'm here the rest of the day so I'm happy to chat about these things more so of course you know how it is with professors all the work is done by the graduate students and the postdocs who are amazing and then the professor just puts together the slides and presents the work and in this case actually even the slides most of them have been done by the graduate students so I'm really just an audio recording of all the", "start_timestamp": "00:00:43", "end_timestamp": "00:01:20", "start_second": 43, "end_second": 80, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=43s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "wonderful work that they have been doing so probably if you didn't live under a rock for the last few years you have heard of deep networks and how they have revolutionised computer vision and kind of the standard
classic way of doing this is basically a classic supervised learning problem you are giving a network which you can think of as a big black box pairs of input images and output labels XY pairs okay and this big black box essentially you can think of it as memorizing these co-occurrences or its", "start_timestamp": "00:01:20", "end_timestamp": "00:02:07", "start_second": 80, "end_second": 127, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=80s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "memory is modeling the associations between the Xs and the Ys okay and of course you need to have lots and lots of these training pairs so you have lots of people clicking on a bunch of images lots of you know millions of images and what their labels are and once you have trained on millions of these pairs of images and labels then given a new image this magic black box can tell you what label it is and this is what supervised direct supervised learning and this particularly deep learning has", "start_timestamp": "00:02:07", "end_timestamp": "00:02:45", "start_second": 127, "end_second": 165, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=127s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "been all about ok but there are some problems that I'll mention with this beautiful story one obvious problem is that this labeling bit is very expensive millions of images don't come cheap you have to have people actually label them and for every new problem we need to label more images so that's a problem of cost but there is also another problem that is a little bit more subtle and here
is an illustration of this so here is an image that is basically you can think of it as a texture synthesized version of the", "start_timestamp": "00:02:45", "end_timestamp": "00:03:24", "start_second": 165, "end_second": 204, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=165s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "original image basically some of the pixels have been moved around in certain ways but so as to preserve the statistics of the image it's actually work by Leon Gatys ok so we change the input but the neural network is perfectly happy to still call it a collie in fact I can give you other random images like this right and it's still basically happy to call it by that same class collie ok and what this suggests is that this magical black box the convolutional neural network is not actually doing that much", "start_timestamp": "00:03:24", "end_timestamp": "00:04:10", "start_second": 204, "end_second": 250, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=204s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "it's not doing what we think a lot of computer vision should be all about it's not doing you know figure-ground detection it's not finding you know the foreground region it's not finding the dog it's not segmenting the dog out from the background it's not doing any of the foreground background or occlusion reasoning none of that it doesn't need to do any of that because it probably just looks at the snout and a couple of eyes and then says oh yeah that's a collie okay and maybe some color histograms so it doesn't need to work", "start_timestamp":
"00:04:10", "end_timestamp": "00:04:46", "start_second": 250, "end_second": 286, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=250s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "super hard to solve this problem okay and this is you know a cause for worry because in this particular case this is imagenet classification task so you have a thousand classes and so maybe this is not maybe you don't need to work that hard because you might not need to have to to to really worry about oops you might not need to worry about finding the boundaries of objects so even reasoning about objects with only a thousand classes but the issue is that we don't really have more than a thousand classes labeled and so in the", "start_timestamp": "00:04:46", "end_timestamp": "00:05:34", "start_second": 286, "end_second": 334, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=286s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "end what this magical neural network is doing it's not really object detection it's really more like texture classification it's classifying dogcatcher collie texture so this if you know 1,000 weight classification tasks is the only thing you want to do maybe this is not so bad but here is an example of something that you know Joseph and and and a lot of us have been worried about for a long time action recognition okay action recognition the same thing but in time so you you have you have a video and you want to", "start_timestamp": "00:05:34", "end_timestamp": "00:06:18", "start_second": 334, "end_second": 378, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=334s", "title": "Alexei Efros: Self-supervision, Meta-supervision, 
Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "recognize what action is being performed and the one very weird result in action classification has been that giving more frames of video to the classifier did not seem to improve performance that just a single frame oh thank you perfect the single flame is is good enough all right look at that perfect okay thank you okay so for example so you basically for a single frame you do basically just as well and this was a big strange result that people don't know why it was so but if we look at for example here is an", "start_timestamp": "00:06:18", "end_timestamp": "00:07:16", "start_second": 378, "end_second": 436, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=378s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "opening fridge action so you go to the fridge you extend your hand you pull on the fridge door you open the fridge and then you close it right and you want to recognize other actions that are opening fridge actions okay if you run a classifier for this task you label a whole bunch of opening the fridge actions as positives and then others as negatives you train your network and then if you look at what the performance is great by the way the performance is very very good but if you look at which frames did the classifier actually pay", "start_timestamp": "00:07:16", "end_timestamp": "00:07:53", "start_second": 436, "end_second": 473, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=436s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "attention to it's just this one so it doesn't 
care it doesn't track the hand it doesn't care about the fridge door really all it cares about is again the texture of an open refrigerator okay and once it sees an open refrigerator texture it knows oh this must be a fridge opening action what else could it be right so again it's taking the easy way out it's being lazy because it doesn't have to work hard okay and maybe if you asked me a year or two ago you know how do we deal with this problem I would say that", "start_timestamp": "00:07:53", "end_timestamp": "00:08:32", "start_second": 473, "end_second": 512, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=473s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "this is all an issue of a data set bias that the only pictures of open refrigerators are in this opening fridge action so let's add some negative images of just open fridges like from you know maybe Amazon product search and then everything will be fine now I'm starting to think that while data set bias is a problem it's not the whole problem because in a sense this data set bias will never go away there is no way the data is finite so we will never be able to fix all the holes there will", "start_timestamp": "00:08:32", "end_timestamp": "00:09:14", "start_second": 512, "end_second": 554, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=512s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "always be a way to cheat because if there is a finite amount of data there is always a way to find a path that is somehow you know cheating through the data and so it's kind of like playing you know this
children's game of Whac-A-Mole you push something down and something else pops up okay and also if you ask the machine learning people about it this is not even their problem because the machine learning people say look you train on the training set and", "start_timestamp": "00:09:14", "end_timestamp": "00:09:53", "start_second": 554, "end_second": 593, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=554s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "then you evaluate on the validation set of your method or of your data set okay so you take your data set you split it into two the training and the test set and as long as it does well on the test set you're fine right and the test set comes from the same distribution as the training set so it's the same statistics what we have here is that we want to test our algorithms on something that's not really in the test set of the data set that's something else so we train on say detecting cars from ImageNet data", "start_timestamp": "00:09:53", "end_timestamp": "00:10:29", "start_second": 593, "end_second": 629, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=593s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "set but then we want to go out on the street and detect cars there and the cars on the street don't really have the same distribution as the cars that you were trained on but we still want to do it so in a sense our problem is actually somewhat different than the problem in machine learning in that we actually do want to test on things we never really trained on we want to really be general and so the way
forward I see is that somehow we need to better use the data we have there is no hope to ever get all the", "start_timestamp": "00:10:29", "end_timestamp": "00:11:05", "start_second": 629, "end_second": 665, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=629s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "data that will make the problem perfectly concrete but we need to somehow use the data that we have better I'll give you a couple of examples of the way I think about this you can think of it as the way a well-run country is run compared to a badly run country it's not that in a well-run country you cannot cheat the laws of course you can but the system is set up in such a way that it's actually more expensive to cheat than to follow the law okay and so even though you can cheat you don't do", "start_timestamp": "00:11:05", "end_timestamp": "00:11:52", "start_second": 665, "end_second": 712, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=665s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "it because it's not in your self-interest in a poorly run country it's cheaper to cheat so everybody cheats and there is nothing you can do about it okay so we need to somehow set up our problem in such a way that it's more expensive for the network to cheat so that the easiest thing the network can do is to do the right thing that's the goal okay and if you think about the way that we do this direct supervised learning input image output label and just train on these pairs that just sets up", "start_timestamp": "00:11:52", "end_timestamp": "00:12:28",
"start_second": 712, "end_second": 748, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=712s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "your your life for cheating because so in in in in in my class you know we have a whole semester worth of material and then the debt of the semester there is a final exam so of course most people don't do anything during the whole semester the night before the exam they they look at some exams from previous years and they try to memorize this you know the question and answer question answer question answer question alright and they basically memorize the whole all of this set of question and answer pairs and then they go to the examine", "start_timestamp": "00:12:28", "end_timestamp": "00:13:06", "start_second": 748, "end_second": 786, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=748s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "actually they do pretty well right this is not a bad strategy to pass the final exam it's a bad strategy if you actually want to learn but it's not the best strategy to pass the exam because you know I'm lazy I'm going to make the exam this year to be not that different from the exam last year in any case there is very small set of problems you can ask that is easy to grade etc etcetera right and so this kind of memorization of question answer question answer it's actually the correct thing to do if your goal is to", "start_timestamp": "00:13:06", "end_timestamp": "00:13:38", "start_second": 786, "end_second": 818, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=786s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers 
Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "pass the exam but of course our goal is not to pass the exam our goal is to to actually learn the material so how do you learn the material how do you actually learn you know arithmetic for example when you're a little kid to really learn it what you do is you don't get yourself question-answer pairs you look at the question you try really hard to solve it and then once you solve it you got some answer you go to the back of the book to compare it with them with the right answer and then that's how you kind of try to update yourself okay and", "start_timestamp": "00:13:38", "end_timestamp": "00:14:08", "start_second": 818, "end_second": 848, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=818s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "so basically that's the kind of idea that that we want to try to push our computers to do to try to study harder to try to learn things that are more generalizable that are not just good enough to pass the test but to actually understand the world okay so that's kind of the the preamble and the the way we have been working on this in in in my lab is we have been doing it in three different paths and I'm just gonna kind of quickly show some of their some of the results of it so the first is self supervision the idea of not having a", "start_timestamp": "00:14:08", "end_timestamp": "00:14:54", "start_second": 848, "end_second": 894, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=848s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "expert tell you the correct answer but let the 
computer figure out the correct answer second is what we call meta supervision I'm not sure how standard this term is I think we might have just come up with it and the idea here is that you don't supervise the correct answer you supervise how the answer is supposed to behave okay and finally if there is time I also want to mention a little bit about you know what if there is no correct answer what if you're just learning by just playing around you know if you don't have a goal", "start_timestamp": "00:14:54", "end_timestamp": "00:15:29", "start_second": 894, "end_second": 929, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=894s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "there is nothing to cheat there is no need to cheat because you're just playing around there is no goal and so the idea here is to see if we just remove the goal remove you know whatever we're trying to optimize and see if we can just play and be curious can that get us some representation that's more generalizable okay so I'm gonna show some examples of all of these in the next oops alright okay so first is self supervision here is an evocative drawing by Escher of what we mean here and this", "start_timestamp": "00:15:29", "end_timestamp": "00:16:06", "start_second": 929, "end_second": 966, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=929s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "is something that actually has been kind of classic in deep machine learning under the heading of representation learning so we basically want to somehow have a compact representation of an input image and we want
to compute this representation maybe without any labels and the kind of classic way to do this is what's called an autoencoder which says let's have a representation that is small okay so there is a bottleneck here but if we unpack it and decompress it we can reconstruct the original input okay and", "start_timestamp": "00:16:06", "end_timestamp": "00:16:44", "start_second": 966, "end_second": 1004, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=966s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "then you train this kind of autoencoder setup for many many images but you don't need any labels here right it's just single images and this is a very influential idea unfortunately it doesn't actually work in practice the representation that you learn here if you're doing it for any kind of real data like a big image for example not a tiny you know thirty-two by thirty-two image but a big image that doesn't actually work okay and the reason it doesn't work is that this is", "start_timestamp": "00:16:44", "end_timestamp": "00:17:20", "start_second": 1004, "end_second": 1040, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=1004s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "you can think of it really as data compression right you're compressing your data and data compression is related to machine learning but it's not quite the same because data compression doesn't care about how you perform on new images it only cares about how you compress the training images that you got and so what we propose to do is to think about this in terms of not data compression but data
prediction to make the computer try to work harder and say instead of just compressing the data let's see if we can train it to", "start_timestamp": "00:17:20", "end_timestamp": "00:17:55", "start_second": 1040, "end_second": 1075, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=1040s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "predict the data so one very simple way to do this is you only give it half of the input signal and you train it to predict the other half so now it's not just compression it's not just that you keep taking the pixels and compressing them you need to think a little bit more you need to think about context and what should go well with the input that you got ok and one very simple way of doing this is to split the data in terms of color and luminance so this was our paper a few years back where we said okay let's take", "start_timestamp": "00:17:55", "end_timestamp": "00:18:33", "start_second": 1075, "end_second": 1113, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=1075s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "a color image separate it into a luminance channel and a chrominance channel and then train a network to predict the color from the grayscale and then you know you can get a nice beautiful image but hopefully also you'd learn a representation that is actually meaningful and somehow captures something about the natural world okay so of course you need to show some pictures first so it actually does learn a reasonably good representation of color but the cool thing that suggests that maybe it is also learning something", "start_timestamp": "00:18:33",
"end_timestamp": "00:19:13", "start_second": 1113, "end_second": 1153, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=1113s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "else is some of the failures so here is a couple of instructive failures can somebody see what the failure is the year is a little bit off but somebody said the tongue do you see the tongue there is no tongue and yet it's coloring it pink why would this be well we were confused too but then we looked at the training data and in the training data these poodles all have their tongues out so if this was just stupid compression this error would not", "start_timestamp": "00:19:13", "end_timestamp": "00:19:59", "start_second": 1153, "end_second": 1199, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=1153s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "happen but here it seems like the network is actually recognizing that this is a dog it's recognizing the breed of the dog it's remembering the similar dogs it has seen before and then it's making a mistake but it's a reasonable mistake if all the dogs had their tongues out in the training set maybe that's also true at test time ok and indeed we did various tests but I will just show you one way to see what's been learned in this representation which is to", "start_timestamp": "00:19:59", "end_timestamp": "00:20:38", "start_second": 1199, "end_second": 1238, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=1199s", "title": "Alexei Efros: Self-supervision, Meta-supervision,
Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "do what we call deep net electrophysiology so we kind of probe the different features of that compressed representation and see when they fire think of it almost like neurons firing so where do they fire and what we found is a neuron that fires only on faces another neuron that fires only on dog faces another one that fires on flowers ok so basically it was able to kind of disentangle from the mass of pixels of the input it was able to find these kind of specialized neurons for", "start_timestamp": "00:20:38", "end_timestamp": "00:21:15", "start_second": 1238, "end_second": 1275, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=1238s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "different parts of the visual world even though the label we had was just the color a very weak label a label that didn't have any semantics in it and yet we are basically getting something out that is semantic okay and so this is kind of a hopeful direction the representation is not as good as the kind of semantically trained representations yet but I still feel that it is an optimistic direction because hopefully it might be more general in the long run but this is", "start_timestamp": "00:21:15", "end_timestamp": "00:21:55", "start_second": 1275, "end_second": 1315, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=1275s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "still to be
found out originally actually this concept of self-supervised learning one of the early papers by psychologist Virginia de Sa was on thinking about it in terms of the different modalities of the sensory signal so instead of saying okay color versus grayscale although that too is kind of biologically plausible you have the rods and the cones and you can say that the rods and the cones kind of co-train each other it's much more reasonable to think about it in", "start_timestamp": "00:21:55", "end_timestamp": "00:22:34", "start_second": 1315, "end_second": 1354, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=1315s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "terms of different modalities for example sight and sound okay so the idea here is that you know you learn about cows by seeing the cow hearing the moo associating those two together and using that as a kind of learning signal okay and so just recently we decided we're going to try to use self-supervised learning in this domain and this is one motivation for why this kind of thing is needed at any one moment we are being bombarded by sensory information our brains do a remarkable job of making", "start_timestamp": "00:22:34", "end_timestamp": "00:23:20", "start_second": 1354, "end_second": 1400, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=1354s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "sense of it all [Music] it seems easy enough to separate the sounds we hear from the sights we see but there is one illusion that reveals this isn't always the case have a look at this what do you
hear ba ba ba yes ba ba ba but look what happens when we change the picture and yet the sound hasn't changed in every clip you are only ever hearing ba with a b ah it's an illusion known as the McGurk effect take another look ba concentrate first on the right of the screen ah now to the left of the screen ba the", "start_timestamp": "00:23:20", "end_timestamp": "00:24:36", "start_second": 1400, "end_second": 1476, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=1400s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "illusion occurs because what you are seeing clashes with what you are hearing in the illusion what we see overrides what we hear so the mouth movements we see as we look at a face can actually influence what we believe we're hearing if we close our eyes we actually hear the sound as it is if we open our eyes we actually see how the mouth movements can influence what we're hearing so did people here get it I wonder how it works for non-English speakers so yeah I think that some of it is specific to English speakers that the", "start_timestamp": "00:24:36", "end_timestamp": "00:25:17", "start_second": 1476, "end_second": 1517, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=1476s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "fa sound is you make your mouth like this right and so even though the sound is exactly the same the lips move very differently and so your brain how many people got the effect how many people heard the fa okay very good so it seems to be working very well right so this motivates the idea that if you want a representation for example a video representation you want
to combine the visual and audio and you probably want to combine it pretty early this is a very kind of powerful effect even if you know about the effect", "start_timestamp": "00:25:17", "end_timestamp": "00:25:58", "start_second": 1517, "end_second": 1558, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=1517s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "if you're very well aware you still experience this thing all the time you cannot turn it off it is a very powerful low-level effect and so this suggests that the coupling of audio and video probably happens pretty early on and so the idea that we had was to create a video representation that takes in audio and visual features at the same time okay so in kind of classic video representations you basically have some way to go from a series of frames to a representation and then also the same thing for audio and what we propose", "start_timestamp": "00:25:58", "end_timestamp": "00:26:41", "start_second": 1558, "end_second": 1601, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=1558s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "is to combine them together and then have a joint audio and video representation for a number of layers at the top so that this information can kind of cook together and get us a joint audio-video representation okay but how do we train this representation again we want to train it without supervision with self-supervised training so one thing that was kind of the obvious first idea was why don't we train say a binary classifier where the positives are videos with the correct",
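The two-stream fusion just described, separate video and audio encoders whose features are merged into joint layers, can be sketched schematically. This is not the talk's architecture: the one-layer "encoders" stand in for deep CNNs, and all dimensions (512-d video, 128-d audio, 64-d joint) are made up for the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, w):
    """Stand-in one-layer ReLU encoder; the real model is a deep network."""
    return np.maximum(x @ w, 0.0)

# Hypothetical weights for the two streams and the shared top layers.
w_video = rng.normal(size=(512, 64))
w_audio = rng.normal(size=(128, 64))
w_joint = rng.normal(size=(128, 64))

def joint_representation(video_feat, audio_feat):
    v = encoder(video_feat, w_video)          # visual stream
    a = encoder(audio_feat, w_audio)          # audio stream
    fused = np.concatenate([v, a], axis=-1)   # early fusion of the two
    return encoder(fused, w_joint)            # joint layers where they "cook together"

z = joint_representation(rng.normal(size=512), rng.normal(size=128))
print(z.shape)  # (64,)
```

A binary classifier head on `z` would then score whether the audio and video belong together.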
"start_timestamp": "00:26:41", "end_timestamp": "00:27:21", "start_second": 1601, "end_second": 1641, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=1601s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "sound and negatives are videos with a wrong sound a sound from some other video for example okay and then we train this classifier and hopefully it will learn to figure out that between the correct video and the correct sound there is a correspondence okay this unfortunately doesn't work well at all and the reason it doesn't work well at all is because again it's a problem of cheating because if you take a random video and a random audio the video could be me giving a talk and the audio could be you know", "start_timestamp": "00:27:21", "end_timestamp": "00:28:01", "start_second": 1641, "end_second": 1681, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=1641s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "some sort of a sound from a restaurant for example right you just look at the picture people sitting and listening and you know that this is not a restaurant so just looking at the overall picture just a single frame will be enough to tell you you know this is a presentation that is a restaurant that is a rock concert so you don't even need to listen to anything you can just have a kind of scene label which says okay this is a restaurant or a rock concert so we need again to try to make the computer work", "start_timestamp": "00:28:01", "end_timestamp": "00:28:39", "start_second": 1681, "end_second": 1719, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=1681s", "title":
"Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "harder and so here is our second also simple idea and the idea is that we're going to have as positives again videos with the correct sound but as negatives we're going to have that same video that same audio but we're going to displace it in time a little bit okay now this becomes a much harder problem to solve because the audio is correct the video is correct the only thing that's not correct now is that there is a little bit of a time lag so this representation really needs to very carefully pay attention to the", "start_timestamp": "00:28:39", "end_timestamp": "00:29:20", "start_second": 1719, "end_second": 1760, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=1719s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "registration between those two okay so here is the idea the correct samples the aligned samples are just you know audio and video together okay and then the incorrect ones are all the same except displaced by a couple of seconds and we have to be careful because if you displace by maybe a second or less than a second humans are actually not even that sensitive to it so it needs to be a little bit of displacement okay so now we trained this representation for a", "start_timestamp": "00:29:20", "end_timestamp": "00:30:09", "start_second": 1760, "end_second": 1809, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=1760s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text":
"long long time you know weeks of GPU time and then we have a representation that hopefully has the whole thing working and then we can look at what we can do with that representation so one thing we can do is visualize the source of the sound because what we can do is say okay given this task of you know is it aligned or not aligned we can actually just use the kind of classic class activation visualization maps to see which pixels it is using to tell if things are aligned", "start_timestamp": "00:30:09", "end_timestamp": "00:30:49", "start_second": 1809, "end_second": 1849, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=1809s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "or not because those are the pixels that are the sound-producing pixels okay so here is an example of some of the places that it decided were the sound-producing places and here are some visualizations of this over time [Music] okay so this is again completely automatic no labels of any kind okay another thing we can do is just plug this into your standard kind of action recognition datasets a lot of those datasets have audio in them so we", "start_timestamp": "00:30:49", "end_timestamp": "00:31:50", "start_second": 1849, "end_second": 1910, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=1849s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "thought okay can we use this audio channel to improve our method and we are actually definitely getting an improvement from audio and we are improving on other self-supervised
methods we're still not as good as something that is kind of trained with lots and lots of relevant semantic data but again the hope is that as we get to harder and harder datasets the semantic labels will be harder and harder to get and so the self-supervised methods hopefully will get better okay and finally kind of a cool fun thing we", "start_timestamp": "00:31:50", "end_timestamp": "00:32:31", "start_second": 1910, "end_second": 1951, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=1910s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "could do is to see if we can do on-screen/off-screen separation of the audio sources so an example would be if you have a speaker speaking into the camera and then there's somebody else speaking who is not being seen our feature is only going to focus on the speaker that is being seen and so we can subtract away the speaker who is not being seen okay so let's see unfortunately we thought oh this is such a cool idea nobody would ever think about it at the same time about four more groups were basically doing the same", "start_timestamp": "00:32:31", "end_timestamp": "00:33:13", "start_second": 1951, "end_second": 1993, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=1951s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "thing luckily we heard about each other in time so we actually cross-cited so it's all fine for most of these folks that was actually the goal of their papers they basically work on that particular problem for us it's just basically one application of our feature that just kind of falls out of our method so we'd say that we're
kind of, it's more of an application of our stuff but hopefully we are also able to do other things as well and the idea here is basically we take our", "start_timestamp": "00:33:13", "end_timestamp": "00:33:48", "start_second": 1993, "end_second": 2028, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=1993s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "representation and then we feed it into a kind of standard encoder/decoder architecture that starts with a spectrogram and basically learns to separate it into the part of the spectrogram that has evidence in the image or in the video and the part that does not have evidence in the video and then you can play either one or the other right once you separate you can play either one and so here is an example then asking about it because they're not interesting facts to you that's not true I have a plenty of", "start_timestamp": "00:33:48", "end_timestamp": "00:34:28", "start_second": 2028, "end_second": 2068, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=2028s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "questions integration you a sample about your by talking about your flight now I know there's a bunch of people talking one is on screen one is off screen let's see what we can do so this is just the on screen been asking about it because they're not interesting facts to you and then people all about no I'm not I don't want to okay and this is the off screen that's not true I have a plenty of questions integration you have to disability Oscar by talking about your flight so there is a little bit of noise there but mostly it does the
right", "start_timestamp": "00:34:28", "end_timestamp": "00:35:05", "start_second": 2068, "end_second": 2105, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=2068s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "thing thank you so much thank okay and this is midnight is no second have day let's talk about digital omens there are some fears so much were able to show to the rest of the world the unshakable japan-us alliance okay we have both speakers here okay you're saying that they're not and we can hide one of the speakers and then it goes away okay could even do something all right laughter okay all right okay most of it is gone but not all okay so those are various ways of trying to get data to", "start_timestamp": "00:35:05", "end_timestamp": "00:36:46", "start_second": 2105, "end_second": 2206, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=2105s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "supervise itself and hopefully learn a representation that is you know useful for other tasks the second topic I want to mention briefly is what we call meta-supervision and the idea here is instead of telling what the correct answer should be we tell how that correct answer should behave so what do I mean by that direct supervision is you have input X and you train a function f of X and you want that function f of X to produce Y's okay that's direct supervision you know from", "start_timestamp": "00:36:46", "end_timestamp": "00:37:34", "start_second": 2206, "end_second": 2254, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=2206s",
"title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "X's to Y's what are other ways we can set up this problem one way is we train a function f of X that produces something in the domain Y in the set capital Y so we don't tell it what particular y we want we just want it to be in the set of Y's okay and one example of this is generative adversarial networks I'll give a brief overview based on a paper we had last year the colorization example that I showed had a lot of ways of hacking it to make it look good and sometimes what you want is you", "start_timestamp": "00:37:34", "end_timestamp": "00:38:26", "start_second": 2254, "end_second": 2306, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=2254s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "really want this white wall in the back to be white but because we kind of told it oh you need to be more colorful the wall becomes not white okay so it's kind of overshooting and the annoying thing is that there is no way for the algorithm to look at this and say that's just not looking realistic you know do something better to optimize for things looking realistic so we don't know how to do that well actually we do so you know for any kind of problem we do have like colorization or", "start_timestamp": "00:38:26", "end_timestamp": "00:39:04", "start_second": 2306, "end_second": 2344, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=2306s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id":
"_V-WpE8cmpc", "text": "super-resolution or whatever it would be nice if we had this function that would tell us you know make something realistic this loss function this kind of universal one that says make images look real we do have that function right now that's called a graduate student okay that's where the graduate student basically keeps hacking the algorithm until you know enough of the pictures look good and then we send it off to publish but it would be better if the computer were doing it itself okay so one way", "start_timestamp": "00:39:04", "end_timestamp": "00:39:37", "start_second": 2344, "end_second": 2377, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=2344s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "that we can do this is we can basically have the computer send the resulting images to Amazon Mechanical Turk ask a whole bunch of people if this is good looking or not good looking and use that signal to update the algorithm very very expensive okay but what we can do is use this idea that recently came out that kind of does something similar okay because remember we have a lot of real images so what we could do is have another network that can act as an Amazon Turker deciding if", "start_timestamp": "00:39:37", "end_timestamp": "00:40:19", "start_second": 2377, "end_second": 2419, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=2377s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "something looks good or not okay and that network is basically going to tell us if the image that we generated looks real that is can you distinguish
that image from a set of real images and if the answer is no that means that we are doing well okay and this is the idea behind the generative adversarial models of Goodfellow and colleagues that has kind of really energized this whole field of image synthesis let's think about what it is actually doing quickly so we have our function that", "start_timestamp": "00:40:19", "end_timestamp": "00:41:01", "start_second": 2419, "end_second": 2461, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=2419s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "translates say from grayscale to color okay and now what we want is to add another network on top of that and that network's job is to decide if the image here looks real or looks fake okay so what we want is the network G to fool the network D so we want D to think that this is a real image whereas in fact it was generated by G okay and of course D doesn't want to get fooled so D tries really hard to figure out if it can tell and the idea is to basically have these two", "start_timestamp": "00:41:01", "end_timestamp": "00:41:47", "start_second": 2461, "end_second": 2507, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=2461s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "
what G wants to do d wants to have a high probability for generated images if the image was generated by G we wanted to have a high probability of saying that it's it's a fake image okay whereas", "start_timestamp": "00:41:47", "end_timestamp": "00:42:22", "start_second": 2507, "end_second": 2542, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=2507s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "if the image is actually a real image then we want to have it say no this is a low probability that it was faked okay so we want to have a G such that it maximizes this quantity okay at the same time of course what we want G to do is to do the opposite it wants to minimize that same quantity right so d G is going to get back signal from D that says okay I figured you out and then G is gonna say okay let me see what I can do better now to improve my generator generated image so that G will have a hard time so maybe I'll", "start_timestamp": "00:42:22", "end_timestamp": "00:43:10", "start_second": 2542, "end_second": 2590, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=2542s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "something that looks a little bit better okay and try to fool it but of course I don't want to fool just this particular G I want to fool a the best possible D so this is where we get this whole minimax formulation where you want to have the best D and then minimize the the G to - to do the best of that okay so one way to think about it is that now this D you can think of it as a loss function it's kind of like l1 or l2 it basically tells what you need to do how do you optimize G such that it gets closer and closer to the 
goal the only", "start_timestamp": "00:43:10", "end_timestamp": "00:43:57", "start_second": 2590, "end_second": 2637, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=2590s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "difference is that instead of being nel - now this G is learnt the G learns what does that mean to get closer to the goal for that particular problem and what it means is that it basically means to be indistinguishable from the real samples from this from this data domain okay so we're almost done but not quite because here is an example imagine that my G went completely crazy and started producing cats for any input image it got it produced the cat okay now is this a real immature a fake image it's a real image it's a cat it's a it's my Student", "start_timestamp": "00:43:57", "end_timestamp": "00:44:39", "start_second": 2637, "end_second": 2679, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=2637s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "Union's cat named Aquarius very nice yet so it's a real image but it's not really what we met so we need to give it a little bit more constraints so basically what we want to do is we want to give D not just the generate image but also the input X so it can look at the pair of both of them together to say is it is the G of X the result of starting with X so that is is this a real pair or a fake pair and now we're all good now this is the conditional gann case and now we're we're able to to get it to work and", "start_timestamp": "00:44:39", "end_timestamp": "00:45:24", "start_second": 2679, "end_second": 2724, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=2679s", 
"title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "this is the final thing that is being optimized now I'm of course hiding a whole bunch of things under the rug here this is not a pretty optimization as you might imagine it is very complicated to optimize this thing and there is a lot of work on trying to make it simpler so far it's still more art than science how you optimize this thing but my graduate students are really amazing at doing this so we were able to get this working and so now we", "start_timestamp": "00:45:24", "end_timestamp": "00:45:59", "start_second": 2724, "end_second": 2759, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=2724s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "can you know plug in a grayscale-and-color pair and then we can colorize images with exactly the same code we plug in Google Streetview and satellites and then we can basically hallucinate satellites from maps or we can do it the other way around exactly the same code but because the D is getting optimized for every different pair it basically learns what is important for each domain we can generate from labels we can generate facades we can go from day to night we can go from thermal imaging to", "start_timestamp": "00:45:59", "end_timestamp": "00:46:41", "start_second": 2759, "end_second": 2801, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=2759s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id":
"_V-WpE8cmpc", "text": "normal RGB imaging we can take image edges and produce images that could have come from those edges okay this kind of looks cool but actually it's not that complicated edge maps actually contain a lot of the information the cool thing is that we can then train on this and test on just you know human sketches and even there it's actually doing something reasonable which is quite kind of neat and then we put this online the code online and a lot of kind of artists decided to do cool things with this and", "start_timestamp": "00:46:41", "end_timestamp": "00:47:23", "start_second": 2801, "end_second": 2843, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=2801s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "so this is kind of a neat thing where you don't even need to do results for your papers anymore you just kind of post the code online and then just download results that other people have done and somebody even did a little edges to cats thing you can try it yourself you draw something you hit the pix button and then it will get you your cat okay there you go the best yeah the best use of computer technology at least for me I don't", "start_timestamp": "00:47:23", "end_timestamp": "00:48:03", "start_second": 2843, "end_second": 2883, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=2843s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "know this is yeah this is the pinnacle of my research career I think I get lots of cats on the Internet so this is an example of a GAN so again we talked about direct 
supervision and a GAN basically lets us supervise not on the particular label but on a set Y there are other types of meta-supervision we can think about so one of my favorite ones is cycle consistency okay the idea is that we don't know the answer Y we don't have a label for that but what we know is that if we have our F of X which", "start_timestamp": "00:48:03", "end_timestamp": "00:48:56", "start_second": 2883, "end_second": 2936, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=2883s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "produces some Y and then we apply a G to that we should get back to X okay and this is a constraint that people have used a lot especially in tracking in computer vision you track forwards you get somewhere you don't know where you are then you track backwards in time along the video and the idea is that you should end up where you started and if you don't then something is wrong okay but we can use this as a constraint again for optimization so for example let's say that we want to do this kind of a pix", "start_timestamp": "00:48:56", "end_timestamp": "00:49:30", "start_second": 2936, "end_second": 2970, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=2936s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "to pix image to image translation but we don't have labeled pairs so let's say we want to translate from horses into zebras right there is no possible label data for this so how do we do this well we can take inspiration from actually Mark Twain and the idea of back-translation in language and the idea of back translation is that if you want something 
translated into a foreign language you don't know you hire one translator to translate to that language and you hire another one to translate it back into a language you do know and then you", "start_timestamp": "00:49:30", "end_timestamp": "00:50:09", "start_second": 2970, "end_second": 3009, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=2970s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "double check that it kind of still makes sense right and Mark Twain wrote this book The Jumping Frog in English then in French then clawed back into a civilized language once more by patient unremunerated toil so here he was showing that in this particular case the translation was not a good one so he translated back and showed that it was not looking good okay and so what we're gonna do here is basically the same idea we're going to now have a translator G that goes from domain X to domain Y okay", "start_timestamp": "00:50:09", "end_timestamp": "00:50:46", "start_second": 3009, "end_second": 3046, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=3009s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "and then a translator F that goes back okay and that's all now because we don't want it to cheat and just stay where it is we also have this adversarial loss this GAN loss it says that when you get to domain Y you better be indistinguishable from a real thing in Y and when you get to domain X you better be indistinguishable from something real in X okay so what we're doing here is we're starting with an image X we translate into the zebra domain again we don't have a 
label for", "start_timestamp": "00:50:46", "end_timestamp": "00:51:25", "start_second": 3046, "end_second": 3085, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=3046s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "that but we know that it should look like a real zebra and then we translate it back and if we don't get exactly where we started well that's our loss that's exactly the thing that we are going to want to minimize okay and if you kind of step back and squint at this thing what does it look like it's our old friend the autoencoder right you have the input you reconstruct that input the only difference is that it's an autoencoder that instead of a", "start_timestamp": "00:51:25", "end_timestamp": "00:52:00", "start_second": 3085, "end_second": 3120, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=3085s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "bottleneck it just has a different domain in the middle so it basically has to go through something else it's forced to go through some other representation and then come back and then you also do it the other way around okay and so then we can turn horses into zebras and vice versa we can even do it in videos just one frame at a time the failures are kind of fun I showed this picture in Moscow last year and I thought that's it they'll not let me out but they did we could also do kind of nice things on", "start_timestamp": "00:52:00", "end_timestamp": "00:52:38", "start_second": 3120, "end_second": 3158, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=3120s", "title": "Alexei Efros: 
Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "going from images to paintings okay by just kind of matching the domain of photographs to the domain of in this case Cézanne paintings and I'm particularly happy about those clouds and you've probably seen a lot of the stylization papers and results they usually look at a single image that you want to stylize for a particular image here we can take a whole domain we can take all of Cézanne's paintings all the thousands of them and learn the representation that kind of models the whole Cézanne okay and the clouds look pretty nice", "start_timestamp": "00:52:38", "end_timestamp": "00:53:18", "start_second": 3158, "end_second": 3198, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=3158s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "we can also go the other way around we can go from a painting into something that is hopefully hard to distinguish from a real image now if this was a perfect talk this would also be Cézanne but Cézanne didn't work Monet is simpler so I will show you Monet but we're still working on Cézanne hopefully we can get Cézanne too okay we can also apply this to translating between video games and the real world so this is Grand Theft Auto and this is making it look like KITTI so now you can see it's like old German looking and you", "start_timestamp": "00:53:18", "end_timestamp": "00:53:54", "start_second": 3198, "end_second": 3234, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=3198s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": 
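The cycle idea above, translate X to Y and back and penalize failing to land where you started, reduces to a simple reconstruction loss. A sketch with scalars standing in for images, using an L1 penalty as one plausible choice (the exact distance used is an assumption here):

```python
def cycle_consistency_loss(g, f, xs, ys):
    """Mean |F(G(x)) - x| over X plus mean |G(F(y)) - y| over Y.

    g maps domain X -> Y (say horses -> zebras) and f maps back; with toy
    scalar 'images' this is the autoencoder-like reconstruction penalty the
    talk describes, where the 'bottleneck' is the other domain.
    """
    forward = sum(abs(f(g(x)) - x) for x in xs) / len(xs)
    backward = sum(abs(g(f(y)) - y) for y in ys) / len(ys)
    return forward + backward
```

A pair of mutually inverse maps gets zero loss; any drift in either direction shows up immediately, which is the only constraint available when the two domains have no aligned pairs.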
"_V-WpE8cmpc", "text": "can go the other way around which is even cooler kind of walking around and making your reality look like a video game so here is an example of reality as a video game let me show you just because this is such a cool result this is again people have been just playing around with these things all right never mind this is like artists just taking our code and running with it and okay I don't have the okay nevermind I'll give you the", "start_timestamp": "00:53:54", "end_timestamp": "00:55:09", "start_second": 3234, "end_second": 3309, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=3234s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "pointer but yeah finally I want to show a very cute example of you know in my group we have been playing around with making fake imagery from early on and so now a lot of people are worried about you know all this fake news and you know Putin screwing things up so we thought we could try to play on the other side of the fence as well and see if we can detect if an image is not realistic okay and the top one is actually from my old paper with James Hayes where we learned to kind of fill in holes and", "start_timestamp": "00:55:09", "end_timestamp": "00:55:59", "start_second": 3309, "end_second": 3359, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=3309s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "create these image composites and here we can finally you know detect and recover that this image was fake and we are 
also going to do it with the same idea of self-supervision and meta-supervision this is with a couple of wonderful former Berkeley undergrads and Andrew Owens so given this image it might look reasonable to you but in fact of course it is fake and how do we detect this well if we had enough fake examples we could just again do this you know supervised", "start_timestamp": "00:55:59", "end_timestamp": "00:56:40", "start_second": 3359, "end_second": 3400, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=3359s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "direct supervision thing but we just don't have enough of these positive examples and so what we're going to do is we're trying to think about this as anomaly detection so we're going to see if we can learn if an image is consistent with itself okay and the idea here is the following we can look at a couple of patches of this image and we see is there some kind of fingerprint in these patches that tells us that they might have come from different imaging systems that they", "start_timestamp": "00:56:40", "end_timestamp": "00:57:16", "start_second": 3400, "end_second": 3436, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=3400s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "are not from the same camera or not from the same image okay now if we had access to the actual images that this was created with then we could actually look at the metadata that comes with the image and then we could realize that actually you know the cameras are different the focal lengths are different et cetera et cetera but 
of course in real life we don't have access to any of that and so what we're going to do is we're going to train an algorithm to see for a pair of images if we can learn whether a", "start_timestamp": "00:57:16", "end_timestamp": "00:57:58", "start_second": 3436, "end_second": 3478, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=3436s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "pair of patches comes from the same image or not now that by itself doesn't work as well because then again you don't have enough data for that so instead what we're gonna do is we're gonna train on whether a pair of patches has the same EXIF metadata tag there are many different EXIF metadata tags like camera brand focal length JPEG compression etc etc and for each one we can predict not the value of that tag but is it the same or is it different okay and so the idea is that we have a", "start_timestamp": "00:57:58", "end_timestamp": "00:58:43", "start_second": 3478, "end_second": 3523, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=3478s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "whole bunch of we train with a whole bunch of real images we don't need any fake images for this so for every pair of real images we take a couple of random patches and then we look at the EXIF tags that are similar in these images and we train those things to say okay yes those are similar and for the different ones we say no those are different so basically we train something like 80 different classifiers that say for every single EXIF tag is this going to be the same or is it going", 
"start_timestamp": "00:58:43", "end_timestamp": "00:59:17", "start_second": 3523, "end_second": 3557, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=3523s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "to be different okay and so now we have a kind of a way to establish if a pair of patches is consistent or not along one of the dimensions the dimension being you know do they come from the same camera do they have the same resolution do they have the same JPEG etc etcetera okay so here are the different tags and how well we can predict if they come from the same image or not so you can see that the lens make is one of the top-performing ones so it is basically like who produced", "start_timestamp": "00:59:17", "end_timestamp": "01:00:01", "start_second": 3557, "end_second": 3601, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=3557s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "the lens then you have a custom renderer which is basically some Apple iPhone thing it basically says iPhone then you have a bunch of various things that really code for different processing that's done by different cameras so they're all kinds of things that different cameras do differently whereas things like image date and time or GPS coordinate are basically at chance level as you would expect okay and then what we do is we combine them all together oh and we also have some other consistencies that try to do it", "start_timestamp": "01:00:01", "end_timestamp": "01:00:36", "start_second": 3601, "end_second": 3636, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=3601s", 
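The self-supervised labels described here come for free from pairs of real photos: for each EXIF tag, the target is simply whether the two photos agree on it. A sketch of the label construction (the tag names are illustrative stand-ins, not the exact ~80 tags used):

```python
def exif_pair_labels(meta_a, meta_b, tags):
    """For each EXIF tag, label 1 if both photos share the same value, else 0.

    Each tag's labels train one binary same/different classifier on patch
    pairs, with no fake images needed -- the scheme described in the talk.
    A tag missing from either photo counts as 'different' in this sketch.
    """
    return {tag: int(tag in meta_a and tag in meta_b and meta_a[tag] == meta_b[tag])
            for tag in tags}
```

Running this over many random patch pairs from many real photos yields the training set for all the per-tag classifiers at once.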
"title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "like blurring and re-JPEGing etc etc and then at test time here is an image here is a manipulated image and what we do is we just go and find a whole bunch of different pairs of patches and for every single EXIF tag we can predict a map of whether those two patches are consistent or not consistent with each other okay and so we have a kind of consistency map for every single tag like camera or focal length etc and then we combine them together into an overall consistency heat map and then", "start_timestamp": "01:00:36", "end_timestamp": "01:01:17", "start_second": 3636, "end_second": 3677, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=3636s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "once we have a heat map we can run normalized cuts and actually cut it into the inconsistent region or not and so here we can predict that oh look at this this is the inconsistent part and here is what it found an inconsistency here actually we didn't even notice it but it detected that the shadow on the floor was also painted in not just the guy on top it works nicely you know for normal images it doesn't fire usually which is good and you know we're beating most of", "start_timestamp": "01:01:17", "end_timestamp": "01:01:59", "start_second": 3677, "end_second": 3719, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=3677s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} 
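The aggregation step, combining per-tag consistency maps into one overall heat map whose low-consistency region is then segmented out (e.g. with normalized cuts), can be sketched as a simple average over grids. Plain nested lists stand in for the real per-pixel maps, and averaging is an assumed combination rule here, not necessarily the one used in the paper:

```python
def overall_consistency_map(tag_maps):
    """Average per-tag consistency maps (grids of probabilities that two
    patches agree on that EXIF tag) into one overall heat map; unusually
    low cells flag the likely spliced region."""
    n = len(tag_maps)
    rows, cols = len(tag_maps[0]), len(tag_maps[0][0])
    return [[sum(m[r][c] for m in tag_maps) / n for c in range(cols)]
            for r in range(rows)]
```

Cells where many tags disagree pull the average down, so a spliced patch that came from a different imaging pipeline stands out against the rest of the image.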
{"video_id": "_V-WpE8cmpc", "text": "pretty much all the other methods that have been supervised we are beating them with this kind of self-supervised method and for some images we don't have the ground truth so we don't know so who knows maybe this is how conspiracy theories are born so I think I'm over time there so I will skip the last part but I'm happy to talk about curiosity one-on-one so I think oops there you go thank you very much [Applause] any questions actually a question on the last part detecting fake images so", "start_timestamp": "01:01:59", "end_timestamp": "01:03:07", "start_second": 3719, "end_second": 3787, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=3719s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "manipulated images so I wonder what would happen if you actually plug it into this GAN cycle where you're trying to actually make one image look like it was taken with a different camera and then having this discriminator trying to discriminate it which of the sides would win eventually if any so yeah so I think what they were saying is what happens if you have the critic the guy who is trying to find fakes connect with the generator that tries to", "start_timestamp": "01:03:07", "end_timestamp": "01:03:56", "start_second": 3787, "end_second": 3836, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=3787s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "make them better right and then have them battle with each other because there's an ongoing thing where people are 
trying to find fake images and then on the other hand improving them that's right that's right no I think this is actually what we are thinking about doing next this kind of having the fake detector not be a static one but learn by having a generator generate better and better fakes and so hopefully then the", "start_timestamp": "01:03:56", "end_timestamp": "01:04:32", "start_second": 3836, "end_second": 3872, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=3836s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "detector becomes better and better but the technology to improve the detectors of fakes is exactly the same technology that you will use to improve the producing of the images right that is true but at least for now with these GAN formulations this is kind of a weird thing the GAN really converges when the detector cannot know the difference between the real and the fake right so that would be when the thing actually converges in reality though the detector always wins so we cannot", "start_timestamp": "01:04:32", "end_timestamp": "01:05:14", "start_second": 3872, "end_second": 3914, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=3872s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "fool the detector so those GANs never really converge so for now it kind of may be reasonable because it's always easier to criticize than to create right it's always easier to be a critic than a painter right so for now that might be okay but in general I think 
it's an arms race and yeah there is not gonna be a perfect solution there is always going to be something that the generator can do that defeats the defenses that's why I think it needs to", "start_timestamp": "01:05:14", "end_timestamp": "01:05:55", "start_second": 3914, "end_second": 3955, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=3914s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "be active that's why the fake detector needs to keep thinking of some other ways that the generator could potentially be fooling it and be prepared for that if I may one more question so many of these fake images are created by copying you know patches of the same image somewhere else in the image is that something that this detector will be able to detect no actually we looked at this and most of the fakes are not copied from the same image at least the ones that we have looked at they're", "start_timestamp": "01:05:55", "end_timestamp": "01:06:34", "start_second": 3955, "end_second": 3994, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=3955s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "usually you know you go you find something on the internet you put you know a picture of the Pope or Putin or Trump or whatever yes so in this thing we're basically focusing on what's called image splicing where you have two sources and then you create an image out of those two sources yeah so if you move things within the same image it might still possibly detect things we have seen a couple of examples where that happens of kind of the 
copy/move thing because sometimes it screws up the JPEG compression for example but", "start_timestamp": "01:06:34", "end_timestamp": "01:07:19", "start_second": 3994, "end_second": 4039, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=3994s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "in general it's not trained for that that's not what it's looking for it's looking for really different imaging pipelines so it's not supposed to be working for that thing yeah mm-hm I will repeat yes so the question is if we have tried it on videos with people imitating other people different voices we haven't but I think yeah we can the papers are online the code is online so anyone can actually run it", "start_timestamp": "01:07:19", "end_timestamp": "01:08:33", "start_second": 4039, "end_second": 4113, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=4039s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "and see that would be an interesting thing to run and see we have played with ventriloquists so when you have a puppet and you talk as the puppet and then you talk as yourself and it kind of works for that so hopefully it should be doing something like this yes it's not meant to be kind of a detective detecting fakes it's really meant to be fooled by the same things humans are fooled by so if the impersonator is a good one hopefully our method will work too", "start_timestamp": "01:08:33", "end_timestamp": "01:09:18", 
"start_second": 4113, "end_second": 4158, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=4113s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "yeah yeah a question about these domain to domain mappings you have like the one-directional ones and the cycle ones for example Google Maps one way and then back for the paintings were in the CycleGAN in the CycleGAN you care about the consistency that you come back to the same thing and you said the paintings were actually working only in one direction well the paintings work in both directions paintings to photos they just don't look so good so I didn't show the results that didn't work very", "start_timestamp": "01:09:18", "end_timestamp": "01:10:04", "start_second": 4158, "end_second": 4204, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=4158s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "well so remember in the Google Maps you go to the Google page and you can copy the Google map and then you switch to the satellite and you can copy the satellite of that same thing so you have aligned inputs and outputs right so this is a much simpler problem so there you don't need any consistency because you have the X's and the Y's given to you and so then you can just do it directly with the paintings and the photographs there is no alignment you", "start_timestamp": "01:10:04", "end_timestamp": "01:10:55", "start_second": 4204, "end_second": 4255, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=4204s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", 
"thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "have the Cézanne paintings and you have photographs and you don't have a one-to-one correspondence and so for that the cycle is important that's the only thing we have basically there is no other constraint so then it's very important and if we come back to the colorization for example okay then the colored image is not strange oh it is it is because there is a hold on the GAN yeah so let me see sorry so in this setup you are given the input and the output okay and you are", "start_timestamp": "01:10:55", "end_timestamp": "01:12:00", "start_second": 4255, "end_second": 4320, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=4255s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "basically where is the picture you're training on the grayscale and the color okay so you're given the input and the output and the GAN is just making sure that the output that the generator produces is similar in the perceptual space to what we humans expect okay because if you don't have the GAN then you get weird results like this one right this is what happens if you just kind of do regression with something like an L2 loss yes yeah I did not mention that I'm sorry yes it does have the", "start_timestamp": "01:12:00", "end_timestamp": "01:12:57", "start_second": 4320, "end_second": 4377, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=4320s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "kind of this standard L2 loss plus the GAN loss yes you're right I'm sorry I 
dropped that that's right yes so for this you have the L2 loss plus the GAN loss in the second one with the cycle we don't have the L2 loss because there is no data for that yes I'm sorry yes good point yes mm-hmm no it's true I think it's not clear what heat map you want to call it because again the producer of the sound for example when you're", "start_timestamp": "01:12:57", "end_timestamp": "01:14:21", "start_second": 4377, "end_second": 4461, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=4377s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "playing the organ the producer of the sound is the big pipe that's the thing that produces the sound but of course what is controlling it is the player pressing the keys and so I think it's not very clear what it is that we want you know do we want the actual physical thing that produces the sound or do we want the actuator and so here I think we just wanted to see what it would visualize so we are happy with anything that connects with sound production in some way or the", "start_timestamp": "01:14:21", "end_timestamp": "01:15:05", "start_second": 4461, "end_second": 4505, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=4461s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "other but yeah you're completely right it's not I think it's really going to be looking at correlations not causations and I think actually I would be happy if the dance party thing if it shows the people dancing I think that would be 
actually pretty cool but yeah you're completely right it's just kind of a type of visualization that shows okay these are the pixels that connect [Music] so the question is I think this is a very good question that there was a little bit of a sleight of", "start_timestamp": "01:15:05", "end_timestamp": "01:16:23", "start_second": 4505, "end_second": 4583, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=4505s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "hand I said that you know the tasks were too easy and the computers were easily cheating on the kind of the classification tasks and then I ended up making tasks harder but also changing the tasks to something like you know pretty pictures or connecting audio and visual etc etc what about going back to you know detecting cars detecting dogs and classification so I think I personally am NOT a big fan of the classification task to begin with I think it's a task that is designed to be cheated on because", "start_timestamp": "01:16:23", "end_timestamp": "01:17:14", "start_second": 4583, "end_second": 4634, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=4583s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "it's a task that basically assumes that you have a closed world your world is a thousand classes and you're basically deciding one of the thousand things right so your chance performance is actually not that bad your chance performance on something like ImageNet is one in a thousand right it's actually the chance is pretty high in the real world we're in the
open world where the potential number of things that you need to recognize is almost infinite okay", "start_timestamp": "01:17:14", "end_timestamp": "01:17:52", "start_second": 4634, "end_second": 4672, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=4634s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "and so I think that actually a lot of the problems with these networks cheating come because we're testing them on tasks that are very constrained where cheating is actually the right thing to do so we're testing them on something that is a kind of a very specialist task right what I think these methods will excel at is the generalist tasks something where you train on something and then you apply it to something", "start_timestamp": "01:17:52", "end_timestamp": "01:18:38", "start_second": 4672, "end_second": 4718, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=4672s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "completely different and hopefully it will work better and we have already seen this like for example the colorization feature that we have trained does better than ImageNet if the task is for example to predict depth from a single image right so if the task is very different from the task it was trained on the self-supervised features work better if the task is similar to what it was trained on then the semantic tasks work better so I think that the big goal is that we want to produce a generalist computer", "start_timestamp": "01:18:38", "end_timestamp": "01:19:12", "start_second": 4718, "end_second": 
4752, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=4718s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "_V-WpE8cmpc", "text": "something that is able to deal with novel situations in a reasonable matter not something that we already have if your goal is to just learn a specialist that will tell you no different types of of you know of Viennese pastries from each other you know you have a thousand different pastries and you want to tell name all of them then I think the current direct supervision methods are exactly the thing you need to do give in this kind of closed world but if you want to have a general algorithm that can do the pastries it can do the", "start_timestamp": "01:19:12", "end_timestamp": "01:19:50", "start_second": 4752, "end_second": 4790, "url": "https://www.youtube.com/watch?v=_V-WpE8cmpc&t=4752s", "title": "Alexei Efros: Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder", "thumbnail": "https://i.ytimg.com/vi/_V-WpE8cmpc/maxresdefault.jpg"} {"video_id": "eIho2S0ZahI", "text": "The human voice: It's the instrument we all play. It's the most powerful sound in the world, probably. It's the only one that can start a war or say \"I love you.\" And yet many people have the experience that when they speak, people don't listen to them. And why is that? How can we speak powerfully to make change in the world? What I'd like to suggest, there are a number of habits that we need to move away from. I've assembled for your pleasure here seven deadly sins of speaking. 
I'm not pretending this is an exhaustive list,", "start_timestamp": "00:00:00", "end_timestamp": "00:00:43", "start_second": 0, "end_second": 43, "url": "https://www.youtube.com/watch?v=eIho2S0ZahI&t=0s", "title": "How to speak so that people want to listen | Julian Treasure", "thumbnail": "https://i.ytimg.com/vi/eIho2S0ZahI/maxresdefault.jpg"} {"video_id": "eIho2S0ZahI", "text": "but these seven, I think, are pretty large habits that we can all fall into. First, gossip. Speaking ill of somebody who's not present. Not a nice habit, and we know perfectly well the person gossiping, five minutes later, will be gossiping about us. Second, judging. We know people who are like this in conversation, and it's very hard to listen to somebody if you know that you're being judged and found wanting at the same time. Third, negativity. You can fall into this. My mother, in the last years of her life, became very negative,", "start_timestamp": "00:00:43", "end_timestamp": "00:01:18", "start_second": 43, "end_second": 78, "url": "https://www.youtube.com/watch?v=eIho2S0ZahI&t=43s", "title": "How to speak so that people want to listen | Julian Treasure", "thumbnail": "https://i.ytimg.com/vi/eIho2S0ZahI/maxresdefault.jpg"} {"video_id": "eIho2S0ZahI", "text": "and it's hard to listen. I remember one day, I said to her, \"It's October 1 today,\" and she said, \"I know, isn't it dreadful?\" (Laughter) It's hard to listen when somebody's that negative. (Laughter) And another form of negativity, complaining. Well, this is the national art of the U.K. It's our national sport. We complain about the weather, sport, about politics, about everything, but actually, complaining is viral misery. It's not spreading sunshine and lightness in the world. Excuses. 
We've all met this guy.", "start_timestamp": "00:01:18", "end_timestamp": "00:01:51", "start_second": 78, "end_second": 111, "url": "https://www.youtube.com/watch?v=eIho2S0ZahI&t=78s", "title": "How to speak so that people want to listen | Julian Treasure", "thumbnail": "https://i.ytimg.com/vi/eIho2S0ZahI/maxresdefault.jpg"} {"video_id": "eIho2S0ZahI", "text": "Maybe we've all been this guy. Some people have a blamethrower. They just pass it on to everybody else and don't take responsibility for their actions, and again, hard to listen to somebody who is being like that. Penultimate, the sixth of the seven, embroidery, exaggeration. It demeans our language, actually, sometimes. For example, if I see something that really is awesome, what do I call it? (Laughter) And then, of course, this exaggeration becomes lying, and we don't want to listen to people we know are lying to us.", "start_timestamp": "00:01:51", "end_timestamp": "00:02:24", "start_second": 111, "end_second": 144, "url": "https://www.youtube.com/watch?v=eIho2S0ZahI&t=111s", "title": "How to speak so that people want to listen | Julian Treasure", "thumbnail": "https://i.ytimg.com/vi/eIho2S0ZahI/maxresdefault.jpg"} {"video_id": "eIho2S0ZahI", "text": "And finally, dogmatism. The confusion of facts with opinions. When those two things get conflated, you're listening into the wind. You know, somebody is bombarding you with their opinions as if they were true. It's difficult to listen to that. So here they are, seven deadly sins of speaking. These are things I think we need to avoid. But is there a positive way to think about this? Yes, there is. 
I'd like to suggest that there are four really powerful cornerstones, foundations, that we can stand on if we want our speech", "start_timestamp": "00:02:24", "end_timestamp": "00:02:58", "start_second": 144, "end_second": 178, "url": "https://www.youtube.com/watch?v=eIho2S0ZahI&t=144s", "title": "How to speak so that people want to listen | Julian Treasure", "thumbnail": "https://i.ytimg.com/vi/eIho2S0ZahI/maxresdefault.jpg"} {"video_id": "eIho2S0ZahI", "text": "to be powerful and to make change in the world. Fortunately, these things spell a word. The word is \"hail,\" and it has a great definition as well. I'm not talking about the stuff that falls from the sky and hits you on the head. I'm talking about this definition, to greet or acclaim enthusiastically, which is how I think our words will be received if we stand on these four things. So what do they stand for? See if you can guess. The H, honesty, of course, being true in what you say, being straight and clear.", "start_timestamp": "00:02:58", "end_timestamp": "00:03:28", "start_second": 178, "end_second": 208, "url": "https://www.youtube.com/watch?v=eIho2S0ZahI&t=178s", "title": "How to speak so that people want to listen | Julian Treasure", "thumbnail": "https://i.ytimg.com/vi/eIho2S0ZahI/maxresdefault.jpg"} {"video_id": "eIho2S0ZahI", "text": "The A is authenticity, just being yourself. A friend of mine described it as standing in your own truth, which I think is a lovely way to put it. The I is integrity, being your word, actually doing what you say, and being somebody people can trust. And the L is love. I don't mean romantic love, but I do mean wishing people well, for two reasons. First of all, I think absolute honesty may not be what we want. I mean, my goodness, you look ugly this morning. Perhaps that's not necessary. 
Tempered with love, of course, honesty is a great thing.", "start_timestamp": "00:03:28", "end_timestamp": "00:04:05", "start_second": 208, "end_second": 245, "url": "https://www.youtube.com/watch?v=eIho2S0ZahI&t=208s", "title": "How to speak so that people want to listen | Julian Treasure", "thumbnail": "https://i.ytimg.com/vi/eIho2S0ZahI/maxresdefault.jpg"} {"video_id": "eIho2S0ZahI", "text": "But also, if you're really wishing somebody well, it's very hard to judge them at the same time. I'm not even sure you can do those two things simultaneously. So hail. Also, now that's what you say, and it's like the old song, it is what you say, it's also the way that you say it. You have an amazing toolbox. This instrument is incredible, and yet this is a toolbox that very few people have ever opened. I'd like to have a little rummage in there with you now and just pull a few tools out that you might like to take away and play with,", "start_timestamp": "00:04:05", "end_timestamp": "00:04:36", "start_second": 245, "end_second": 276, "url": "https://www.youtube.com/watch?v=eIho2S0ZahI&t=245s", "title": "How to speak so that people want to listen | Julian Treasure", "thumbnail": "https://i.ytimg.com/vi/eIho2S0ZahI/maxresdefault.jpg"} {"video_id": "eIho2S0ZahI", "text": "which will increase the power of your speaking. Register, for example. Now, falsetto register may not be very useful most of the time, but there's a register in between. I'm not going to get very technical about this for any of you who are voice coaches. You can locate your voice, however. So if I talk up here in my nose, you can hear the difference. If I go down here in my throat, which is where most of us speak from most of the time. But if you want weight, you need to go down here to the chest. 
You hear the difference?", "start_timestamp": "00:04:36", "end_timestamp": "00:05:04", "start_second": 276, "end_second": 304, "url": "https://www.youtube.com/watch?v=eIho2S0ZahI&t=276s", "title": "How to speak so that people want to listen | Julian Treasure", "thumbnail": "https://i.ytimg.com/vi/eIho2S0ZahI/maxresdefault.jpg"} {"video_id": "eIho2S0ZahI", "text": "We vote for politicians with lower voices, it's true, because we associate depth with power and with authority. That's register. Then we have timbre. It's the way your voice feels. Again, the research shows that we prefer voices which are rich, smooth, warm, like hot chocolate. Well if that's not you, that's not the end of the world, because you can train. Go and get a voice coach. And there are amazing things you can do with breathing, with posture, and with exercises to improve the timbre of your voice. Then prosody. I love prosody.", "start_timestamp": "00:05:04", "end_timestamp": "00:05:41", "start_second": 304, "end_second": 341, "url": "https://www.youtube.com/watch?v=eIho2S0ZahI&t=304s", "title": "How to speak so that people want to listen | Julian Treasure", "thumbnail": "https://i.ytimg.com/vi/eIho2S0ZahI/maxresdefault.jpg"} {"video_id": "eIho2S0ZahI", "text": "This is the sing-song, the meta-language that we use in order to impart meaning. It's root one for meaning in conversation. People who speak all on one note are really quite hard to listen to if they don't have any prosody at all. That's where the word \"monotonic\" comes from, or monotonous, monotone. Also, we have repetitive prosody now coming in, where every sentence ends as if it were a question when it's actually not a question, it's a statement? 
(Laughter) And if you repeat that one, it's actually restricting your ability to communicate through prosody,", "start_timestamp": "00:05:41", "end_timestamp": "00:06:16", "start_second": 341, "end_second": 376, "url": "https://www.youtube.com/watch?v=eIho2S0ZahI&t=341s", "title": "How to speak so that people want to listen | Julian Treasure", "thumbnail": "https://i.ytimg.com/vi/eIho2S0ZahI/maxresdefault.jpg"} {"video_id": "eIho2S0ZahI", "text": "which I think is a shame, so let's try and break that habit. Pace. I can get very excited by saying something really quickly, or I can slow right down to emphasize, and at the end of that, of course, is our old friend silence. There's nothing wrong with a bit of silence in a talk, is there? We don't have to fill it with ums and ahs. It can be very powerful. Of course, pitch often goes along with pace to indicate arousal, but you can do it just with pitch. Where did you leave my keys? (Higher pitch) Where did you leave my keys?", "start_timestamp": "00:06:16", "end_timestamp": "00:06:52", "start_second": 376, "end_second": 412, "url": "https://www.youtube.com/watch?v=eIho2S0ZahI&t=376s", "title": "How to speak so that people want to listen | Julian Treasure", "thumbnail": "https://i.ytimg.com/vi/eIho2S0ZahI/maxresdefault.jpg"} {"video_id": "eIho2S0ZahI", "text": "So, slightly different meaning in those two deliveries. And finally, volume. (Loud) I can get really excited by using volume. Sorry about that, if I startled anybody. Or, I can have you really pay attention by getting very quiet. Some people broadcast the whole time. Try not to do that. That's called sodcasting, (Laughter) Imposing your sound on people around you carelessly and inconsiderately. Not nice. 
Of course, where this all comes into play most of all is when you've got something really important to do.", "start_timestamp": "00:06:52", "end_timestamp": "00:07:26", "start_second": 412, "end_second": 446, "url": "https://www.youtube.com/watch?v=eIho2S0ZahI&t=412s", "title": "How to speak so that people want to listen | Julian Treasure", "thumbnail": "https://i.ytimg.com/vi/eIho2S0ZahI/maxresdefault.jpg"} {"video_id": "eIho2S0ZahI", "text": "It might be standing on a stage like this and giving a talk to people. It might be proposing marriage, asking for a raise, a wedding speech. Whatever it is, if it's really important, you owe it to yourself to look at this toolbox and the engine that it's going to work on, and no engine works well without being warmed up. Warm up your voice. Actually, let me show you how to do that. Would you all like to stand up for a moment? I'm going to show you the six vocal warm-up exercises that I do before every talk I ever do.", "start_timestamp": "00:07:26", "end_timestamp": "00:07:58", "start_second": 446, "end_second": 478, "url": "https://www.youtube.com/watch?v=eIho2S0ZahI&t=446s", "title": "How to speak so that people want to listen | Julian Treasure", "thumbnail": "https://i.ytimg.com/vi/eIho2S0ZahI/maxresdefault.jpg"} {"video_id": "eIho2S0ZahI", "text": "Any time you're going to talk to anybody important, do these. First, arms up, deep breath in, and sigh out, ahhhhh, like that. One more time. Ahhhh, very good. Now we're going to warm up our lips, and we're going to go Ba, Ba, Ba, Ba, Ba, Ba, Ba, Ba. Very good. And now, brrrrrrrrrr, just like when you were a kid. Brrrr. Now your lips should be coming alive. We're going to do the tongue next with exaggerated la, la, la, la, la, la, la, la, la. Beautiful. You're getting really good at this. And then, roll an R. 
Rrrrrrr.", "start_timestamp": "00:07:58", "end_timestamp": "00:08:37", "start_second": 478, "end_second": 517, "url": "https://www.youtube.com/watch?v=eIho2S0ZahI&t=478s", "title": "How to speak so that people want to listen | Julian Treasure", "thumbnail": "https://i.ytimg.com/vi/eIho2S0ZahI/maxresdefault.jpg"} {"video_id": "eIho2S0ZahI", "text": "That's like champagne for the tongue. Finally, and if I can only do one, the pros call this the siren. It's really good. It starts with \"we\" and goes to \"aw.\" The \"we\" is high, the \"aw\" is low. So you go, weeeaawww, weeeaawww. Fantastic. Give yourselves a round of applause. Take a seat, thank you. (Applause) Next time you speak, do those in advance. Now let me just put this in context to close. This is a serious point here. This is where we are now, right? We speak not very well to people who simply aren't listening", "start_timestamp": "00:08:37", "end_timestamp": "00:09:12", "start_second": 517, "end_second": 552, "url": "https://www.youtube.com/watch?v=eIho2S0ZahI&t=517s", "title": "How to speak so that people want to listen | Julian Treasure", "thumbnail": "https://i.ytimg.com/vi/eIho2S0ZahI/maxresdefault.jpg"} {"video_id": "eIho2S0ZahI", "text": "in an environment that's all about noise and bad acoustics. I have talked about that on this stage in different phases. What would the world be like if we were speaking powerfully to people who were listening consciously in environments which were actually fit for purpose? Or to make that a bit larger, what would the world be like if we were creating sound consciously and consuming sound consciously and designing all our environments consciously for sound? 
That would be a world that does sound beautiful, and one where understanding would be the norm,", "start_timestamp": "00:09:12", "end_timestamp": "00:09:46", "start_second": 552, "end_second": 586, "url": "https://www.youtube.com/watch?v=eIho2S0ZahI&t=552s", "title": "How to speak so that people want to listen | Julian Treasure", "thumbnail": "https://i.ytimg.com/vi/eIho2S0ZahI/maxresdefault.jpg"} {"video_id": "_3eaVy8c-xk", "text": "machine learning it's a buzzword but I would also claim it's a lot more than just a buzzword how many have experience with machine learning it's a small portion it's great to see when I started learning machine learning I found it difficult to understand how the different algorithms worked and what the main difference between them was I found it really difficult to understand where I should start to learn machine learning but in this lightning talk you hopefully will learn the fundamental differences between different categories", "start_timestamp": "00:00:00", "end_timestamp": "00:00:33", "start_second": 0, "end_second": 33, "url": "https://www.youtube.com/watch?v=_3eaVy8c-xk&t=0s", "title": "Machine learning algorithms, choosing the correct algorithm for your problem - Joakim Lehn", "thumbnail": "https://i.ytimg.com/vi/_3eaVy8c-xk/hqdefault.jpg"} {"video_id": "_3eaVy8c-xk", "text": "of algorithms this will be helpful both for beginners and for those who have a little experience with machine learning all right so real quick about myself my name is Joakim Lehn I'm a consultant here in Oslo for the Nordic consulting firm Knowit and I've been fascinated with machine learning for the last couple of years doing some projects both personally and professionally and I must say there are many different algorithms out there just take a look at this this is a small portion of really great algorithms so", "start_timestamp": "00:00:33", "end_timestamp": "00:01:04", "start_second": 33, "end_second": 64, "url": 
"https://www.youtube.com/watch?v=_3eaVy8c-xk&t=33s", "title": "Machine learning algorithms, choosing the correct algorithm for your problem - Joakim Lehn", "thumbnail": "https://i.ytimg.com/vi/_3eaVy8c-xk/hqdefault.jpg"} {"video_id": "_3eaVy8c-xk", "text": "where they begin obviously some algorithms are more suited for certain problems that others and the result will vary greatly of how well your algorithm is suited for a problem but luckily you can divide these algorithms into four different categories of machine learning so the four different categories is supervised learning and supervised learning semi-supervised learning and reinforcement learning and when you face a machine learning problem it is important to understand which category it fits into so today we're going to", "start_timestamp": "00:01:04", "end_timestamp": "00:01:37", "start_second": 64, "end_second": 97, "url": "https://www.youtube.com/watch?v=_3eaVy8c-xk&t=64s", "title": "Machine learning algorithms, choosing the correct algorithm for your problem - Joakim Lehn", "thumbnail": "https://i.ytimg.com/vi/_3eaVy8c-xk/hqdefault.jpg"} {"video_id": "_3eaVy8c-xk", "text": "explore these four different categories before we get through really good stuff I need to explain two key words there in machine learning you have something called features this is basically a property of your training data and a label is the output you get from your model after training it so you could say features input levels output it's partially true because you can also have labels on your input data I'm gonna explain that with an example let's say you want the machine learning algorithm to estimate the height of a person based", "start_timestamp": "00:01:37", "end_timestamp": "00:02:06", "start_second": 97, "end_second": 126, "url": "https://www.youtube.com/watch?v=_3eaVy8c-xk&t=97s", "title": "Machine learning algorithms, choosing the correct algorithm for your problem - Joakim Lehn", "thumbnail": 
"https://i.ytimg.com/vi/_3eaVy8c-xk/hqdefault.jpg"} {"video_id": "_3eaVy8c-xk", "text": "on age and gender then age and gender are features and the height you want to find is the label and if you have a training set with a lot of people with their height corresponding to age and gender then you have a labeled training set so the first category is called supervised learning and in supervised learning you have training data that consists of a set of training examples you have a labeled training set and the basic idea is to find the most optimal model parameters to predict unknown labels on other objects let's look at a few examples", "start_timestamp": "00:02:06", "end_timestamp": "00:02:37", "start_second": 126, "end_second": 157, "url": "https://www.youtube.com/watch?v=_3eaVy8c-xk&t=126s", "title": "Machine learning algorithms, choosing the correct algorithm for your problem - Joakim Lehn", "thumbnail": "https://i.ytimg.com/vi/_3eaVy8c-xk/hqdefault.jpg"} {"video_id": "_3eaVy8c-xk", "text": "let's say I want to estimate the value of a used car based on age mark the mileage you name it then a machine learning algorithm can do this pretty well if you give it a training set with a lot of Sol cars with the corresponding value another example could be a mail it spammers it not spam a machine learning algorithm can do this if it has a large training sets a great algorithm within the supervised domain is called decision trees the reason I picked this one is because it's more intuitive than most others so in", "start_timestamp": "00:02:37", "end_timestamp": "00:03:12", "start_second": 157, "end_second": 192, "url": "https://www.youtube.com/watch?v=_3eaVy8c-xk&t=157s", "title": "Machine learning algorithms, choosing the correct algorithm for your problem - Joakim Lehn", "thumbnail": "https://i.ytimg.com/vi/_3eaVy8c-xk/hqdefault.jpg"} {"video_id": "_3eaVy8c-xk", "text": "decision trees you have nodes and in every node you choose the best split between a lot of features and 
you make this procedure recursively until you reach a stopping criterion again I'm gonna illustrate this with an example let's say you want to find out whether you should accept a new job offer the first thought might be well how much is the salary is it above some threshold if it's not you're definitely not gonna take the job but if it is then do you have to commute for a long while do they have free coffee do they have a foosball table I", "start_timestamp": "00:03:12", "end_timestamp": "00:03:41", "start_second": 192, "end_second": 221, "url": "https://www.youtube.com/watch?v=_3eaVy8c-xk&t=192s", "title": "Machine learning algorithms, choosing the correct algorithm for your problem - Joakim Lehn", "thumbnail": "https://i.ytimg.com/vi/_3eaVy8c-xk/hqdefault.jpg"} {"video_id": "_3eaVy8c-xk", "text": "don't know you might ask yourself questions like this but at some point you can end up in a stopping leaf and you're gonna either accept or decline the job offer so that was supervised learning the next category is called unsupervised learning and in unsupervised learning you only have input data and no corresponding output variables you have no labels in your training set and the goal of unsupervised learning is to model the underlying structure or distribution in the data in order to learn more about the data so the algorithms are left on", "start_timestamp": "00:03:41", "end_timestamp": "00:04:11", "start_second": 221, "end_second": 251, "url": "https://www.youtube.com/watch?v=_3eaVy8c-xk&t=221s", "title": "Machine learning algorithms, choosing the correct algorithm for your problem - Joakim Lehn", "thumbnail": "https://i.ytimg.com/vi/_3eaVy8c-xk/hqdefault.jpg"} {"video_id": "_3eaVy8c-xk", "text": "their own to discover and present the interesting structures in the data let's look at a couple examples let's say you want to group customers by their purchasing behavior if people who buy item A tend to buy item B then obviously you should recommend these items to people interested in one of 
them another example could be Netflix's video recommendation system it recommends TV series movies and whatnot and they do this by using a series of unsupervised learning algorithms a great algorithm here is called k-means not for", "start_timestamp": "00:04:11", "end_timestamp": "00:04:43", "start_second": 251, "end_second": 283, "url": "https://www.youtube.com/watch?v=_3eaVy8c-xk&t=251s", "title": "Machine learning algorithms, choosing the correct algorithm for your problem - Joakim Lehn", "thumbnail": "https://i.ytimg.com/vi/_3eaVy8c-xk/hqdefault.jpg"} {"video_id": "_3eaVy8c-xk", "text": "Netflix maybe but for other purposes and here we try to divide all the data into K clusters you select K random points as the cluster centers and the cluster of every other object is defined by the closest cluster center you tune this algorithm by selecting the K number of clusters you can use this algorithm for many things let's say you own a hotel chain and you want to open a new hotel in a city where do you place your hotel hopefully you start off by looking at potential sites gather a lot of data", "start_timestamp": "00:04:43", "end_timestamp": "00:05:12", "start_second": 283, "end_second": 312, "url": "https://www.youtube.com/watch?v=_3eaVy8c-xk&t=283s", "title": "Machine learning algorithms, choosing the correct algorithm for your problem - Joakim Lehn", "thumbnail": "https://i.ytimg.com/vi/_3eaVy8c-xk/hqdefault.jpg"} {"video_id": "_3eaVy8c-xk", "text": "like is it close to downtown are there restaurants nearby is it easy to get to the hotel and so on and maybe hopefully from all this data an algorithm can find clusters in this data to show interesting spots for your hotel so that was unsupervised learning the third category of problems falls between unsupervised and supervised problems and it's called semi-supervised learning here we have partially labeled data and many real problems fall into this category because it can be really expensive or at least 
time-consuming to", "start_timestamp": "00:05:12", "end_timestamp": "00:05:45", "start_second": 312, "end_second": 345, "url": "https://www.youtube.com/watch?v=_3eaVy8c-xk&t=312s", "title": "Machine learning algorithms, choosing the correct algorithm for your problem - Joakim Lehn", "thumbnail": "https://i.ytimg.com/vi/_3eaVy8c-xk/hqdefault.jpg"} {"video_id": "_3eaVy8c-xk", "text": "try to label all the data let's say you have a million pictures and you're gonna label them it takes too much time unlabeled data however is cheap and usually easy to collect and store so here a mixture of techniques from the supervised and unsupervised domains can be used an example here could be as I already mentioned if you have a photo archive you might label some of the images like there's a cat in this picture people skiing a topless person on the beach I don't know and besides these labeled pictures you have a lot of unlabeled pictures and you can", "start_timestamp": "00:05:45", "end_timestamp": "00:06:17", "start_second": 345, "end_second": 377, "url": "https://www.youtube.com/watch?v=_3eaVy8c-xk&t=345s", "title": "Machine learning algorithms, choosing the correct algorithm for your problem - Joakim Lehn", "thumbnail": "https://i.ytimg.com/vi/_3eaVy8c-xk/hqdefault.jpg"} {"video_id": "_3eaVy8c-xk", "text": "try to label those with an algorithm so the last category of problems falls into reinforcement learning and it's not like any of the previous categories because you don't have any labeled data and you don't have any unlabeled data usually you don't have any training data the idea is to create a software agent and it's got some state and it's gonna perform some action in an environment the environment is gonna either punish it or reward it somehow and it can end up in a new state and you do this recursively and you can imagine this by", "start_timestamp": "00:06:17", "end_timestamp": "00:06:50", "start_second": 377, "end_second": 410, "url": 
"https://www.youtube.com/watch?v=_3eaVy8c-xk&t=377s", "title": "Machine learning algorithms, choosing the correct algorithm for your problem - Joakim Lehn", "thumbnail": "https://i.ytimg.com/vi/_3eaVy8c-xk/hqdefault.jpg"} {"video_id": "_3eaVy8c-xk", "text": "saying you're a robot and you wake up and in a strange place you can perform activities and you're gonna get rewards from the environment so after you get more rewards you get more clever and your actions get more complex and you're training to behave the most effective way on each step so this is kind of a human way to learn and human way to think and we made some incredible progress within the reinforcement domain last years as first speaker mentioned alphago was a great great example here they managed to be the best player in", "start_timestamp": "00:06:50", "end_timestamp": "00:07:19", "start_second": 410, "end_second": 439, "url": "https://www.youtube.com/watch?v=_3eaVy8c-xk&t=410s", "title": "Machine learning algorithms, choosing the correct algorithm for your problem - Joakim Lehn", "thumbnail": "https://i.ytimg.com/vi/_3eaVy8c-xk/hqdefault.jpg"} {"video_id": "_3eaVy8c-xk", "text": "the Google ingo and the reason why I bring this up is because it made some moves that humanity has never seen before and they now teach some of the moves it did during the game at go schools in China they have go schools in China which is surprising and I find it really interesting that humans can now learn from machine and not just the other way around another really cool example I think is from pretty recent time from open AI they managed to create an AI that could beat some of the best players in the world in dota 2 and dota", "start_timestamp": "00:07:19", "end_timestamp": "00:07:49", "start_second": 439, "end_second": 469, "url": "https://www.youtube.com/watch?v=_3eaVy8c-xk&t=439s", "title": "Machine learning algorithms, choosing the correct algorithm for your problem - Joakim Lehn", "thumbnail": 
"https://i.ytimg.com/vi/_3eaVy8c-xk/hqdefault.jpg"} {"video_id": "_3eaVy8c-xk", "text": "2 is a real-time game so the world was quite shocked to see this happen already we thought it would be years and years until it could happen because it's vastly more complex than traditional board games like chess and this is my personal dream project I'm really hoping I can beat myself by creating an AI that can beat me in Mario Kart and not to brag but I'm quite good at Mario Kart so I'm not sure if my programming skills are good enough we'll see so hopefully in the last 10 minutes you learned what's here", "start_timestamp": "00:07:49", "end_timestamp": "00:08:17", "start_second": 469, "end_second": 497, "url": "https://www.youtube.com/watch?v=_3eaVy8c-xk&t=469s", "title": "Machine learning algorithms, choosing the correct algorithm for your problem - Joakim Lehn", "thumbnail": "https://i.ytimg.com/vi/_3eaVy8c-xk/hqdefault.jpg"} {"video_id": "_3eaVy8c-xk", "text": "in supervised learning all data is labeled and the algorithm learns to predict the output from the input data in unsupervised learning all data is unlabeled and the algorithm learns the inherent structure from the input data semi-supervised learning has some data labeled some unlabeled mostly unlabeled and a mixture of supervised and unsupervised techniques can be used reinforcement learning is an area of machine learning concerned with how a software agent ought to take action in an environment so as to maximize some", "start_timestamp": "00:08:17", "end_timestamp": "00:08:48", "start_second": 497, "end_second": 528, "url": "https://www.youtube.com/watch?v=_3eaVy8c-xk&t=497s", "title": "Machine learning algorithms, choosing the correct algorithm for your problem - Joakim Lehn", "thumbnail": "https://i.ytimg.com/vi/_3eaVy8c-xk/hqdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "thank you it's a pleasure to be here I will give a talk for probably only about 30 minutes and we're really going to talk primarily about
the basics of stroke and then I want to leave plenty of time for people to ask questions about topics of their interest stroke is a very broad topic lots of different research going on so hard to summarize it all in a short talk but I'd be happy to address anything that's on your mind so you heard a little bit about me I actually came to Stanford first in 1984 right after graduating Medical", "start_timestamp": "00:00:00", "end_timestamp": "00:00:34", "start_second": 0, "end_second": 34, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=0s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "School came up here for my internal medicine neurology training and then a stroke fellowship so I've spent my entire career thirty years here at Stanford have never looked for a job elsewhere because I love being here it's a great place to interact with other physicians and have a team approach to taking care of patients as well as doing research and as was mentioned we formed the Stanford Stroke Center in 1992 I began on the faculty after finishing my training in 89 at the same time that Gary Steinberg who's a neurosurgeon was", "start_timestamp": "00:00:34", "end_timestamp": "00:01:10", "start_second": 34, "end_second": 70, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=34s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "just finishing up and starting as a new faculty member in neurosurgery with an interest in stroke and Michael Marks was a young guy who was interested in putting catheters up into the brain to try to treat stroke so he was a radiologist I was a neurologist and Gary was a neurosurgeon and the typical approach to stroke back then was that those three groups did their own thing that they didn't work together that they worked independently with what they could offer and the
idea that we had is that it would be a novel approach to", "start_timestamp": "00:01:10", "end_timestamp": "00:01:39", "start_second": 70, "end_second": 99, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=70s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "work together as a comprehensive type of team where we could try to use the approach of the team rather than the individual to treat patients and Stanford Hospital was very welcoming to that idea to give it a try gave us some funding to try to start the program together and all three of us have stayed around for all this time because we really enjoy doing stroke research taking care of patients and trying to train individuals who want to make a difference in stroke so we have medical students residents fellows", "start_timestamp": "00:01:39", "end_timestamp": "00:02:11", "start_second": 99, "end_second": 131, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=99s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "who are training to be stroke specialists and now they've moved on to different places to start stroke programs and now it's in vogue to do comprehensive stroke centers to have multi collaborative approaches and they're doing accreditation now for comprehensive stroke centers so we were very happy to be the first in the country to be chosen as an accredited comprehensive Stroke Center so that's the background on the center and now what I want to do is just give a bit of an overview about stroke how to prevent it how to treat it", "start_timestamp": "00:02:11", "end_timestamp": "00:02:44", "start_second": 131, "end_second": 164, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=131s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI",
"text": "and then we'll end with a case example of somebody who recognized a stroke and alerted us to it right away so that we were able to treat her husband and you can kind of look at the different impression of a stroke in somebody who's having one versus somebody who's watching one and hopefully by the end you guys will be ready so that if you see somebody who's having a stroke you're gonna know what to do alright so stroke is a big problem it used to be the third leading cause of death it is now the fourth leading cause of death", "start_timestamp": "00:02:44", "end_timestamp": "00:03:19", "start_second": 164, "end_second": 199, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=164s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "but death is really the least of the problems with stroke because most people with strokes don't die but most people with strokes are disabled so what people typically fear most about stroke is not that they're gonna die from it but it's going to be disabling and that they're going to wind up in a nursing home or not be able to do the things that they like to do before because of the injury to the brain stroke continues to be very common it's become a little bit less common because of the treatments that we've had for risk factors particularly", "start_timestamp": "00:03:19", "end_timestamp": "00:03:51", "start_second": 199, "end_second": 231, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=199s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "high blood pressure as we'll talk about so that the chance of a given individual to have a stroke is actually less now than it was a few years ago because of risk factor control but the population of course is getting older baby boomers like myself are starting to head up into the stroke
prone age groups so that means that even though we have a little bit better control of the risk factors the number of strokes that we anticipate over the next two decades is very high and that we're going to go well above the 780,000 strokes that", "start_timestamp": "00:03:51", "end_timestamp": "00:04:22", "start_second": 231, "end_second": 262, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=231s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "occur right now in the US every year so this is a huge problem and I suspect that all of you have had your lives touched by stroke in one way or the other because you're here but even if you go to a general audience you'll find that most people usually about one out of every three individuals has had their life touched by stroke either because it's a family member or themselves or somebody very close to them so incredibly common and this is one of the things that drew me to stroke that it was not treatable at all", "start_timestamp": "00:04:22", "end_timestamp": "00:04:51", "start_second": 262, "end_second": 291, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=262s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "when I started and that this was such a common problem that it was something that I could really get excited about trying to make a difference so let's look at a little bit of the prognosis after stroke as I mentioned stroke is not a major problem in terms of mortality a given individual who's having a stroke is relatively unlikely to die from it even though it's the fourth leading cause of death it's only about a 15 percent chance that you'll die from the stroke the type of strokes that are most likely to cause mortality are the bleeding type", "start_timestamp": "00:04:51", "end_timestamp": "00:05:25", "start_second": 291,
"end_second": 325, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=291s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "of strokes the hemorrhages and we'll talk about that the strokes that also can cause mortality are where you have a very large ischemic stroke and we will discuss that as well but usually what we're looking at is rehabilitation recovery and that the brain can rewire itself which again is something new that we didn't realize when I was in training is that the brain has the potential for rewiring and that with patience and with therapy that most patients will make a lot of improvement particularly in the first several months", "start_timestamp": "00:05:25", "end_timestamp": "00:05:56", "start_second": 325, "end_second": 356, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=325s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "after a stroke but the improvement can go on for many years after the stroke stroke is one of the most costly medical diseases not just because it's expensive when somebody's in the hospital and they may be in the hospital for many days or even weeks with a stroke but it's because stroke can hit at any age and even though it's more common in older patients many people who have a stroke are still working so when you're looking at not only the cost of the stroke itself but you're looking at the cost of the lost productivity from the person", "start_timestamp": "00:05:56", "end_timestamp": "00:06:27", "start_second": 356, "end_second": 387, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=356s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "and people who have major strokes with major disabilities and you're looking at nursing home care so it
really adds up to being one of the most expensive medical conditions that we have okay so time for some audience participation who can give me a definition of what a stroke is right here okay that's a good definition loss of oxygen to the brain is what was said anybody want to add to that or take a little different approach a blood clot in the brain that certainly can cause a stroke any other definitions yeah that's one of the things that", "start_timestamp": "00:06:27", "end_timestamp": "00:07:17", "start_second": 387, "end_second": 437, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=387s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "stroke is going to do cause paralysis so hemorrhage absolutely bleeding in the brain is in addition to a blood clot going up into the brain and blocking off the blood flow a blood vessel rupturing and causing hemorrhage is a stroke so you can start to get the feeling from the responses that stroke is not totally straightforward and it is complicated so the definition that I like is that stroke is the brain injury that occurs when there is an abrupt disruption of the blood flow to the brain so that abrupt disruption of the blood flow to", "start_timestamp": "00:07:17", "end_timestamp": "00:07:50", "start_second": 437, "end_second": 470, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=437s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "the brain can occur for two major reasons and you guys hit on both of them one is that the blood flow is blocked by a clot that a clot comes from somewhere or it's formed in the brain and it prevents the blood from flowing into the brain the other cause of stroke is a blood vessel that ruptures so then you either get bleeding into the brain or around the surface of the brain and you can imagine those are very
different problems right a blood vessel that's blocked up with a clot we're going to approach that very differently than we", "start_timestamp": "00:07:50", "end_timestamp": "00:08:22", "start_second": 470, "end_second": 502, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=470s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "would approach a blood vessel that has ruptured and now has blood spilling into or around the brain so two major problems which one do you think is more common absolutely absolutely this is much more common so we're going to focus on this but we'll talk briefly about some of the ruptured arteries so ruptured arteries there's two major flavors here one is that it's a blood vessel within the substance of the brain that ruptures and this is a CT scan which is an x-ray picture this white blob is blood bleeding into", "start_timestamp": "00:08:22", "end_timestamp": "00:08:57", "start_second": 502, "end_second": 537, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=502s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "the brain patient didn't make it and you can see the blood here in the center of the brain so this is a blood vessel that's piercing into the brain which ruptures and why do you think that might happen yeah so an accident usually will cause some injury to the surface of the brain but a blood vessel deep in the brain usually ruptures for a different reason than an accident so an aneurysm is a good reason for a blood vessel to rupture we're going to get to that in a minute the aneurysms usually form at the blood vessels at the base of the", "start_timestamp": "00:08:57", "end_timestamp": "00:09:33", "start_second": 537, "end_second": 573, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=537s", "title": "Stroke: The Basics", "thumbnail":
"https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "brain but a blood vessel that's piercing deep into the brain typically will rupture because of many years of high blood pressure that many years of high blood pressure pushing against the surface of that blood vessel wall weakens the blood vessel wall and one day it gives way and it starts bleeding so if we could control everybody's blood pressure we would see very few of these hemorrhages because the vast majority of these deep in the brain hemorrhages are caused by high blood pressure there are some other unusual reasons like having", "start_timestamp": "00:09:33", "end_timestamp": "00:10:06", "start_second": 573, "end_second": 606, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=573s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "an abnormal cluster of blood vessels in the brain that we call an AV malformation certain drugs like cocaine or amphetamine can cause blood vessels in the brain to rupture but hypertension is the one where we could really make a huge difference if we controlled it so the aneurysm which you guys mentioned causes what's known as a subarachnoid hemorrhage so the aneurysms sit actually on the surface of the brain they sit in an area called the Circle of Willis that we're going to talk about in just a few minutes and when they bleed", "start_timestamp": "00:10:06", "end_timestamp": "00:10:38", "start_second": 606, "end_second": 638, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=606s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "the blood goes all around the surface of the brain so you can imagine this is something that the brain doesn't like to happen to be covered in blood because blood is very irritating to the surface of the brain so having blood
in the brain like in this example or having blood on the surface will cause bad headaches alright so typically somebody who's having a bleeding type of stroke will have a severe headache often the worst headache of their life now that's going to be very different from the more common types of stroke", "start_timestamp": "00:10:38", "end_timestamp": "00:11:08", "start_second": 638, "end_second": 668, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=638s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "usually have no headache or very mild headache which are the blood clot type blocking off blood flow but these more serious brain hemorrhages are typically going to present with neurologic symptoms and a very very bad headache because of that blood okay let's now shift gears and we're not going to talk so much about the bleeding types because that's only about 15% of strokes about 85% of strokes are when you have a blood vessel either in the neck or in the brain that's blocked off with a blood clot so we need to know a little bit", "start_timestamp": "00:11:08", "end_timestamp": "00:11:40", "start_second": 668, "end_second": 700, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=668s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "about the anatomy to understand stroke there's two sets of blood vessels that bring blood to the brain the anterior circulation the front circulation are the carotid arteries and these are very easily accessible you can feel your own carotid if you put your finger just below the angle of the jaw and don't press too hard there but if you press gently you can feel the carotid artery right here which is pulsing so you've got one on either side going up the front of the neck in the back of the neck you have the vertebral", "start_timestamp": "00:11:40",
"end_timestamp": "00:12:11", "start_second": 700, "end_second": 731, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=700s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "column and you have the vertebral arteries and if you look at the picture you can see that these blood vessels actually go through little holes in the spine and then the two sets meet together to go up to the brain stem so the carotid arteries are going up to the hemispheres of the brain the vertebral arteries are going in the back towards the brainstem now these two sets of blood vessels come together with what's called the Circle of Willis so you see this circle here it links the two together and it's very nice if you're", "start_timestamp": "00:12:11", "end_timestamp": "00:12:43", "start_second": 731, "end_second": 763, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=731s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "born with a good Circle of Willis because what that means is that if you have one blood vessel that's blocked up that you can get help from the other blood vessels that's known as collateral circulation which means one blood vessel can help out another one that's blocked up and some people can block off the whole carotid artery and not have a problem because the other vessels are helping out some people can actually block off both of their carotid arteries the biggest blood supply to the brain slowly blocks off and the other", "start_timestamp": "00:12:43", "end_timestamp": "00:13:14", "start_second": 763, "end_second": 794, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=763s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "vessels in the back can take over so the Circle of Willis is important and
how the blood vessels come together on the surface of the brain to help each other out is also important these are things that we don't have control over this is how you were born you're either born with a nice Circle of Willis or not and you're born with a nice collateral circulation it doesn't necessarily matter how old you are but how you were born so you can thank your parents for whether you had good collaterals or not there's things that", "start_timestamp": "00:13:14", "end_timestamp": "00:13:44", "start_second": 794, "end_second": 824, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=794s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "you can do to make the collaterals work less well like smoking cigarettes will block up the collaterals having high blood pressure high cholesterol will block them up but you're kind of born with a set of collaterals so one of the things that you can imagine is since these blood vessels go to different regions of the brain the symptoms of the stroke are going to be quite different depending on what blood vessel is blocked so we'll talk a little bit about that first we'll show a little bit more detailed picture of what this Circle of", "start_timestamp": "00:13:44", "end_timestamp": "00:14:11", "start_second": 824, "end_second": 851, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=824s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "Willis looks like you can see there's a whole bunch of blood vessels that come together to make this circle we can take pictures of that to try to understand when somebody's having a stroke what other blood vessels are going to be available to help out okay so let's get into the symptoms the symptoms depend on what part of the brain is involved symptoms of a stroke are much more complicated than symptoms of a heart
attack if we're talking about the stroke that is due to blood clots we refer to that as an ischemic stroke which means", "start_timestamp": "00:14:11", "end_timestamp": "00:14:39", "start_second": 851, "end_second": 879, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=851s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "there's no bleeding in the brain and if there's no bleeding in the brain it usually doesn't hurt okay so that's one of the disadvantages for stroke compared to heart attack if it doesn't hurt people are much less motivated to go to the emergency room right stroke also frequently occurs in the middle of the night so you're asleep and you don't notice it because it doesn't hurt if you had a heart attack in the middle of the night it would wake you up from sleep the brain is very complicated so the right side of the brain is going to be", "start_timestamp": "00:14:39", "end_timestamp": "00:15:08", "start_second": 879, "end_second": 908, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=879s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "doing things that are very different from the left side of the brain so the hemispheres of the brain are primarily fed by the carotid arteries we call that the anterior circulation the brainstem and the cerebellum which is a coordination area are primarily fed by the vertebral arteries we call that the posterior circulation so depending on what blood vessel is blocked up the symptoms can be incredibly variable so what would you expect might happen if you blocked off the left carotid artery what kind of symptoms yeah so we heard language and that's", "start_timestamp": "00:15:08", "end_timestamp": "00:15:47", "start_second": 908, "end_second": 947, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=908s", "title": "Stroke: The Basics",
"thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "right most of the language function for both left-handers and right-handers is in the left hemisphere so and then we heard numbness so what side of the body would be numb if you blocked off the blood flow to the left side of the brain right and what in addition to numbness and language trouble what else would be a prominent symptom somebody mentioned it earlier what's that vision yeah vision can be lost because the carotid artery is going to supply the blood supply to the eye on that side so if your left carotid is blocked you may", "start_timestamp": "00:15:47", "end_timestamp": "00:16:21", "start_second": 947, "end_second": 981, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=947s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "have some trouble with your left eye it is also supplying part of the visual areas that look at the right side of the world so you could be missing vision from the right side of the world or the left eye if your left carotid was blocked well one of the most prominent symptoms is going to be weakness all right people mentioned that before so it's the right side of the body that's going to be weak if you have trouble in the left hemisphere and the weakness from stroke normally comes on abruptly it's not the type of weakness that builds up", "start_timestamp": "00:16:21", "end_timestamp": "00:16:52", "start_second": 981, "end_second": 1012, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=981s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "slowly over days weeks or months it's a type of weakness where you're absolutely fine one minute and then you can't move the arm the next moment so it's abrupt onset because when that blood clot blocks off the
blood flow suddenly the function stops doesn't mean the tissue dies right away but suddenly the function stops so the symptoms come on very quickly so somebody with a left carotid stroke may not be able to speak or they may not be able to understand words so they may make no words nonsense words or have trouble", "start_timestamp": "00:16:52", "end_timestamp": "00:17:24", "start_second": 1012, "end_second": 1044, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=1012s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "getting the words out they may also have trouble understanding what you're saying so when you're talking to them it sounds like you're talking a foreign language so that if you ask them to do something like close their eyes or lift up their arm they won't because they can't understand what you're saying so different language areas in different parts of the left hemisphere can affect the patient differently so it could be very mild where they're having trouble finding the right word or saying a few nonsense words all the way", "start_timestamp": "00:17:24", "end_timestamp": "00:17:53", "start_second": 1044, "end_second": 1073, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=1044s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "to what we call a Broca's aphasia where you can't get any words out you become completely mute okay now on the right side of the brain there's usually no language function some left-handers will have some language function on the right side but what they will have would be the control of the left side of the body and they have something called neglect particularly if it's kind of the mid portion what we call the parietal lobe which means they don't realize they're having a stroke okay so people who are having a stroke on the left hemisphere",
"start_timestamp": "00:17:53", "end_timestamp": "00:18:25", "start_second": 1073, "end_second": 1105, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=1073s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "usually can't tell you about it very well because they can't talk so they can't call 911 and say I'm having a stroke people who are having a stroke on the right side they don't think they're having a stroke so even though they can talk and call nine-one-one they see no reason to because they don't realize that their left side is weak so they often will be confused but neglect means that they don't realize what's going on that they can't tell that their left arm is not working or the left leg is not working they just seem", "start_timestamp": "00:18:25", "end_timestamp": "00:18:55", "start_second": 1105, "end_second": 1135, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=1105s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "
doesn't have as dramatic a presentation with the weakness and the language trouble it may just be double vision the sir seeing double or it may be some vertigo things are spinning around and people are off-balance they're trying to walk and they wind up leaning to one side or the coordination goes off when they're trying to use their arm or leg so different symptoms and sometimes it can be very dramatic where you have the right and left hemisphere are all funneling down through the brainstem so if you have a", "start_timestamp": "00:19:27", "end_timestamp": "00:20:01", "start_second": 1167, "end_second": 1201, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=1167s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "bad stroke in this posterior area you could lose control of all four extremities which we call a quadruped heiresses you can't move your arms you can't move your legs and if it's really really bad it may be the only thing you can move is to move your eyes up and down which is called a locked-in syndrome so again it could be anything from very subtle a little bit of double vision a little bit of unsteadiness to all the way to having no movement in the arms or legs so it's not so easy to figure out that somebody is having a", "start_timestamp": "00:20:01", "end_timestamp": "00:20:29", "start_second": 1201, "end_second": 1229, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=1201s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "stroke from other conditions if they're having some of the subtle symptoms and sometimes you know even the neurologists are puzzled in the emergency room so oftentimes we're going to have to do some imaging to sort it out but you can see what the main symptoms are the main symptoms are going to be trouble with the language weakness on one side 
numbness on one side headache if it's a bleeding kind of stroke so if you have those symptoms coming on abruptly that should be a key to call 9-1-1 okay so what's going to cause a blood clot to", "start_timestamp": "00:20:29", "end_timestamp": "00:21:00", "start_second": 1229, "end_second": 1260, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=1229s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "block off a blood vessel and cause a stroke well most of the causes are summarized here the number one cause is atherosclerosis also known as hardening of the arteries so it's the same process that causes a heart attack it builds up cholesterol plaque in the blood vessels of the heart and that's going to cause a lack of blood flow and a lack of oxygen to the heart if it does it on the way to the brain it's going to cause a stroke so this is the aortic arch that comes off the heart this can develop atherosclerosis here's the carotid", "start_timestamp": "00:21:34", "end_timestamp": "00:22:08", "start_second": 1294, "end_second": 1328, "url":
"https://www.youtube.com/watch?v=qoGBO3q5ikI&t=1294s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "this they fibrillate so the blood swirls in the top chambers of the heart and swirling around it can form a clot the clot can then break loose and favorite place where to go is up into the brain so patients with atrial fibrillation are at risk of stroke and we have treatments to prevent them from having a stroke if you have a sick heart valve that can be a spot where a clot can form a sticky heart valve can form a clot and if you have a big heart attack then the main pumping chambers the ventricles aren't working very well the", "start_timestamp": "00:22:08", "end_timestamp": "00:22:39", "start_second": 1328, "end_second": 1359, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=1328s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "blood can also swirl around there and form a clot so when somebody comes in with a stroke we're thinking about let's take a look at the blood vessels and look for atherosclerosis let's take a look at the heart and see if we've got something going on with the heart that could predispose to a clot formation and if that's not the cause then you can start thinking about the blood right what if the blood gets too sticky for some reason sticky blood could form a blood clot and the blood clot then could cause a stroke so a variety of causes", "start_timestamp": "00:22:39", "end_timestamp": "00:23:09", "start_second": 1359, "end_second": 1389, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=1359s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "but most of them relate to the blood vessels the heart or the blood itself okay so number one cause of stroke is atherosclerosis 
we're gonna show a movie to show you how atherosclerosis progresses over time and unfortunately everybody gets some there is no way to go through a long life and not develop some atherosclerosis in fact even when you look at soldiers who died in a military accident if you look at their blood vessels you'll start to see the beginnings of atherosclerosis so we all have to deal with it a little bit one", "start_timestamp": "00:23:09", "end_timestamp": "00:23:41", "start_second": 1389, "end_second": 1421, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=1389s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "way or the other so here is a speeding up of what one might see over many many years okay so this is the blood vessel and this is where the blood would travel and this is the atherosclerosis plaque you don't need an infection to start that having infections may promote some atherosclerosis but infection is not the biggest cause of atherosclerosis we'll talk about the risk factors in a minute but what do you think is in that plaque there is some inflammatory component but that's not the main component cholesterol exactly", "start_timestamp": "00:23:41", "end_timestamp": "00:24:18", "start_second": 1421, "end_second": 1458, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=1421s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "so what we're seeing here is a buildup of cholesterol on the wall if you look carefully you would see some inflammatory reaction so the cholesterol itself is really not such a huge problem in this stage it's well contained this is called the endothelium the inside lining of the blood vessel wall and as long as this endothelium is smooth and shiny and not sticky it doesn't matter so much that there's cholesterol building up here okay
because it can't really do much as long as it's encased in this smooth endothelial lining the blood here does", "start_timestamp": "00:24:18", "end_timestamp": "00:24:53", "start_second": 1458, "end_second": 1493, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=1458s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "absolutely fine until you block off about 70% of the area here right so here is less than 50% blocked there is no reduction in blood flow here blood flow is getting in just fine if this gets to the point where it's starting to block 70 80 90 percent it's going to start to reduce blood flow so it's like a plumbing problem not enough blood flow but that's not usually how atherosclerosis gets you usually it's what we're going to see here is that before it blocks off all the vessel the plaque is going to rupture okay so you", "start_timestamp": "00:24:53", "end_timestamp": "00:25:29", "start_second": 1493, "end_second": 1529, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=1493s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "see this spot here this is the lining the lining is now ruptured this is called plaque rupture this cholesterol material is extremely sticky so when this cholesterol material comes in contact with flowing blood what's gonna happen here we go alright it comes in contact with the blood boom we form a clot so that clot can do what it's done here and completely block off the blood flow now we have a completely blocked vessel or if it doesn't completely block it off pieces of that clot may break loose and head up into", "start_timestamp": "00:25:29", "end_timestamp": "00:26:03", "start_second": 1529, "end_second": 1563, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=1529s", "title": "Stroke: The Basics", "thumbnail":
"https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "the brain so this is what we're fighting against this progression of atherosclerosis so causes a stroke atherosclerosis and the large vessels meaning the ones we talked about in the neck about 25% atherosclerosis and the small vessels vessels in the brain about 20% of stroke 20% are clots that form in the heart from atrial fibrillation or other heart problems 15% we talked about bleeding bleeding in the brain or bleeding in the subarachnoid space and look at this even now when we do a very detailed evaluation we take", "start_timestamp": "00:26:03", "end_timestamp": "00:26:40", "start_second": 1563, "end_second": 1600, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=1563s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "pictures of everything 30% we don't the cause cryptogenic stroke is what we call that or idiopathic some people say idiopathic means the doctors an idiot they couldn't find it right and this is frustrating for the doctor it's very frustrating for the patient because the patient wants to know why did I have a stroke it's actually a good thing overall because the prognosis for having another stroke is quite favorable if you've been very thorough and you've found no cause so even though patients are not happy overall it's a good sign", "start_timestamp": "00:26:40", "end_timestamp": "00:27:16", "start_second": 1600, "end_second": 1636, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=1600s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "if you find that the vessels are all blocked up you can imagine the prognosis is not as good but sometimes you can deal with that get the vessels cleaned up and usually the heart conditions can be dealt with pretty well so that if 
somebody's stroke came from the heart typically we can find ways to prevent it from happening again okay so lots of stroke is caused by atherosclerosis atherosclerosis affects the vessels in the heart in a similar way to the vessels in the neck and the brain so no surprise how to prevent them right", "start_timestamp": "00:27:16", "end_timestamp": "00:27:45", "start_second": 1636, "end_second": 1665, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=1636s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "you've all heard this before cigarette smoking promotes atherosclerosis it makes the blood vessels stickier it makes the blood vessels narrow so this is one of the biggest most treatable risk factors because treating it can reverse a lot of the damage to the blood vessels alcohol abuse is a problem for stroke for a number of reasons it can cause heart rhythm problems it can cause increased risk of bleeding in the brain and it can cause an increased risk of blood clots forming in the brain so", "start_timestamp": "00:27:45", "end_timestamp": "00:28:18", "start_second": 1665, "end_second": 1698, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=1665s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "heavy alcohol use is a bad thing drinking a glass of wine a day is slightly protective right so it's not a recommendation to start drinking if you don't drink but drinking a glass of wine it's not a risk factor for stroke actually it may reduce stroke risk just a touch compared to people who don't drink at all but drinking too much is a big risk factor physical inactivity lack of exercise is a risk factor and then the whole host of the usual medical conditions for stroke high blood pressure is the most important it's the
"start_timestamp": "00:28:18", "end_timestamp": "00:28:50", "start_second": 1698, "end_second": 1730, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=1698s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "most powerful prevalent risk factor for a stroke so again both for the atherosclerosis type of strokes as well as the bleeding the brain type of strokes this is what we could do to prevent the most and that would be control everybody's blood pressure atrial fibrillation we talked about cholesterol as obviously what's building up the blood vessels and patients who have diabetes have more atherosclerosis faster than people who don't have diabetes so getting good control of the diabetes is important if you've had a stroke", "start_timestamp": "00:28:50", "end_timestamp": "00:29:21", "start_second": 1730, "end_second": 1761, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=1730s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "you're increased risk for having another one so again we need to try to sort out why did that stroke occur and what can we do to prevent it so a transient ischemic attack is atia who knows what a T ia is how would you define that I like that definition so you've got a blood clot that's gone into the brain and before any injury to the brain has happened your body dissolve that blood clot you have little enzymes in your blood that try to dissolve blood clots and if it dissolves it before it does any damage it's a TI a so the symptoms will be the", "start_timestamp": "00:29:21", "end_timestamp": "00:30:05", "start_second": 1761, "end_second": 1805, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=1761s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "same symptoms of 
stroke but they'll be brief you know 10 15 20 minutes would be typical duration of symptoms and we know it's a TIA by taking a picture and seeing exactly what you said there's no injury if the symptoms last for 20 minutes and we take a picture and we see that there's a little bit of injury then it's a stroke just like a heart attack versus angina a heart attack is when you damage some heart muscle angina is when you've got chest pain because of not enough blood flow to the heart but you didn't damage any heart muscle", "start_timestamp": "00:30:05", "end_timestamp": "00:30:34", "start_second": 1805, "end_second": 1834, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=1805s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "and what you can see is that if you've had a TIA the risk of having a stroke in the next two days is 5% okay so what does that tell you about a TIA better not ignore it right it's not the type of thing you schedule an appointment in a week and tell your doctor that you had a 20 minute episode where you couldn't lift your arm or you had a 15 minute episode where you couldn't talk and the right side of the body was numb because the highest-risk time is in the first couple of days so we want to see these patients get into the emergency room we", "start_timestamp": "00:30:34", "end_timestamp": "00:31:09", "start_second": 1834, "end_second": 1869, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=1834s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "want to have them evaluated very rapidly if they have the symptoms of a stroke but they disappeared so what are we going to do well if you control the blood pressure you're going to reduce your stroke risk by up to 40% that's huge absolutely huge to reduce stroke risk by 40% so if you're somebody at high risk of stroke
10% per year you could reduce it down to 6% per year just by getting the blood pressure under control smoking it's amazing that people who've been smoking even if they've smoked for many years if they stop they can cut", "start_timestamp": "00:31:09", "end_timestamp": "00:31:43", "start_second": 1869, "end_second": 1903, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=1869s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "their stroke risk in half within a year of stopping smoking so incredibly motivational - you know it's hard to stop smoking but a huge payoff if you stop and the stroke risk from smoking almost goes away after five years if your cholesterol is high your stroke risk can be cut down by about 20 percent by using these statin medications okay atorvastatin or Lipitor is a popular one or simvastatin or pravastatin these are medicines that reduce cholesterol and probably have other beneficial effects that we'll talk about", "start_timestamp": "00:31:43", "end_timestamp": "00:32:16", "start_second": 1903, "end_second": 1936, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=1903s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "in a minute and look at blood pressure so here's a graph that shows you the relationship between cardiovascular events and blood pressure over a 15-year time period so this includes both heart attacks and strokes and you can look at the rate of heart attacks and strokes in somebody whose blood pressure is in the 140 over 90 type of range 130 over 85 and 120 over 80 and you can see that the lower the better so lower blood pressures mean lower risks of cardiovascular events particularly strokes so what we like to", "start_timestamp": "00:32:16", "end_timestamp": "00:32:50", "start_second": 1936, "end_second": 1970, "url":
"https://www.youtube.com/watch?v=qoGBO3q5ikI&t=1936s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "see is blood pressure is getting down towards 120 over 80 if people are spending a lot of time up in the 130s or one 40s that's too high okay so eventually you get to a point where you can't go too low you go too low with your blood pressure you pass out but most people even elderly individuals can tolerate blood pressures down in the low 120s and for the top number the systolic and the diastolic down close to 80 and that's what we try to shoot for and particularly somebody who's had a TI a or a previous stroke", "start_timestamp": "00:32:50", "end_timestamp": "00:33:20", "start_second": 1970, "end_second": 2000, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=1970s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "we're going to be pretty aggressive about trying to get the blood pressure down we talked about the cholesterol medicines these statins these are often very controversial medicines that patients have a lot of rumors that they've heard about that these are going to cause their muscles to dissolve or their liver to cause problems but these medicines seem to have benefits above and beyond just reducing the cholesterol they have what are called pliat trophic effects which means that they help preserve that endothelial cell layer", "start_timestamp": "00:33:20", "end_timestamp": "00:33:49", "start_second": 2000, "end_second": 2029, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=2000s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "that smooth layer that's helping to protect that plaque from rupturing so that there's benefits from these agents in in many ways but we want to see the 
cholesterol down we did a big study many years ago called the SPARCL study where we took patients who'd had a stroke we put them on a high dose of this atorvastatin medicine Lipitor which is a common statin 80 milligrams and many of the patients were nervous oh that's too high it's a big big dose and the other comparison was the placebo right so big dose of atorvastatin versus placebo and", "start_timestamp": "00:33:49", "end_timestamp": "00:34:26", "start_second": 2029, "end_second": 2066, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=2029s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "the number who complained of muscle aches in the statin group was identical to the number who complained of muscle aches in the placebo group so the conclusion is in our age everybody's got muscle aches right and we like to blame it on something so patients like to blame it on these statins but the statins really don't cause muscle aches a lot more than not taking a statin there's some very small risk it's gonna irritate the liver so you need to check blood tests periodically to make sure that it's not doing that but most people tolerate", "start_timestamp": "00:34:26", "end_timestamp": "00:34:56", "start_second": 2066, "end_second": 2096, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=2066s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "these medications very well so virtually everybody who comes to the hospital with a TIA or an ischemic stroke can expect to go out on one of these statin agents unless their cholesterol is really low on its own and just like blood pressure what the studies have shown is that the lower the bad cholesterol the better so the LDL cholesterol we used to think get it down to 120 then 100 now we're targeting more like 70 or 80 for the LDL bad cholesterol
again particularly in people who've got evidence of atherosclerosis", "start_timestamp": "00:34:56", "end_timestamp": "00:35:28", "start_second": 2096, "end_second": 2128, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=2096s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "we'd had a hope that giving estrogen replacement to postmenopausal women who had a stroke would help them have a lower rate of stroke and when we tested it we found it didn't work if anything it made things worse so that the estrogens can make the blood a little bit stickier so that's not something that we do for a postmenopausal woman who has a stroke and even somebody on birth control pills is at a little bit increased risk of forming clots in their legs or having a stroke because the estrogens can make the blood a little bit stickier so how", "start_timestamp": "00:35:28", "end_timestamp": "00:36:02", "start_second": 2128, "end_second": 2162, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=2128s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "do we make the blood a little bit less sticky well here's a detailed picture schematic looking at blood clotting so blood clotting involves something called platelets which become activated when you have injury right to help you form a blood clot but they can also be activated by that cholesterol plaque the atherosclerosis and they can trap blood cells in this fibrin fibrous net to form a blood clot so taking a medicine that makes the platelets less sticky can reduce the chance that a blood clot is going to form on a sticky surface so the", "start_timestamp": "00:36:02", "end_timestamp": "00:36:43", "start_second": 2162, "end_second": 2203, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=2162s", "title": "Stroke: The Basics", "thumbnail":
"https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "most famous aspirin it's a good medicine to make the blood less sticky then we have other prescription medicines like plavix which is clopidogrel or aggregate ragnaroks which is a combination a diaper a tamal which is another prescription medicine and aspirin they can make the blood less sticky if you have a blood clot that's forming in the heart usually aspirin is not enough to prevent that so somebody with atrial fibrillation we're going to use a stronger type of blood thinning medicine that we call an anticoagulant the most", "start_timestamp": "00:36:43", "end_timestamp": "00:37:15", "start_second": 2203, "end_second": 2235, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=2203s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "famous is coumadin now we have four new prescription medicines that have some advantages over coumadin as more potent blood thinners and if you have to give it intravenously then its medicines called heparin okay so how are we going to choose what we need to do is the patient comes in and we need to sort out why they had the stroke sometimes there's rare causes that we don't have time to talk about tonight but most of the time you're going to find out that the guilty party is atherosclerosis or it's a problem at the heart or as I said", "start_timestamp": "00:37:15", "end_timestamp": "00:37:47", "start_second": 2235, "end_second": 2267, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=2235s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "about 30% of the time we don't figure it out if we don't figure it out we read it as if it's atherosclerosis and we assume that there may be some mild atherosclerosis that we couldn't find yeah that we didn't just didn't have 
sensitive enough tests to pick it up so these patients are typically going to go out on antiplatelet therapy so the least expensive is aspirin clopidogrel which is plavix is also a popular choice if it is a problem with a heart valve atrial fibrillation a heart attack forming a blood clot in the heart then typically", "start_timestamp": "00:37:47", "end_timestamp": "00:38:20", "start_second": 2267, "end_second": 2300, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=2267s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "we're going to use one of these more potent anticoagulants and you can imagine those are a little more dangerous they have higher bleeding risk they need a little bit more monitoring in general so those are the typical choices here's some studies so we've been involved in several of these over the years saying that if you have one of these heart conditions like atrial fibrillation that aspirin works but the stronger blood thinners like coumadin which is also known as warfarin they work better okay the way that warfarin was discovered is", "start_timestamp": "00:38:20", "end_timestamp": "00:38:52", "start_second": 2300, "end_second": 2332, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=2300s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "that this is a medicine that can cause rats to bleed it's rat poison big dose of warfarin and the rat dies and they bleed so some patients don't like this because they don't like us to prescribe rat poison and as I said now we have newer agents that don't have that checkered history that work even better than warfarin and are very safe options now the other thing you can do is try to pull the atherosclerosis out so there's the surgical option which is called carotid endarterectomy this is a picture of somebody's carotid", 
"start_timestamp": "00:38:52", "end_timestamp": "00:39:26", "start_second": 2332, "end_second": 2366, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=2332s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "you know like in that video I showed you this you know is blocked up with one of those cholesterol plaques it can cut it open and carefully dissect it off the other approaches you can go up there with a balloon and you can squish it against the wall and put a little metal stent in there which is called carotid stenting so this would be approach if you had a minor stroke or you had a TI a and you find out that that kurata it's blocked up we're probably going to want to think about cleaning that out and here's a actual picture of what this", "start_timestamp": "00:39:26", "end_timestamp": "00:39:56", "start_second": 2366, "end_second": 2396, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=2366s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "looks like so you know the years of haagen-dazs and peanut butter and potato chips you can see that the remnants here have wound up in the carotid artery and what's this red thing boycott yeah so that's the platelets that have formed this clot and it's really a combination of the cholesterol plaque and the clot that's causing the trouble so how to prevent a stroke we're going to control the treatable risk factors high blood pressure being the most important diabetes smoking and cholesterol exercise you're going to", "start_timestamp": "00:39:56", "end_timestamp": "00:40:28", "start_second": 2396, "end_second": 2428, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=2396s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "take an antiplatelet agent 
or an anticoagulant to make the blood less sticky and if you've got a big buildup of plaque we may be able to get it out of there surgically so we hope to prevent strokes but when they occur we want people to call 9-1-1 and get into the emergency room immediately because if you're in the midst of having a stroke which is a blood clot sitting here the other name is thrombus we need to treat it so how could you treat it if somebody had a stroke that happened 15 minutes ago and they rushed into the", "start_timestamp": "00:40:28", "end_timestamp": "00:40:59", "start_second": 2428, "end_second": 2459, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=2428s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "emergency room TPA what's that a window of like three hours to give TPA TPA is like Drano for the brain right it is a clot buster it dissolves blood clots so it would make a lot of sense that if a stroke is caused by a blood clot like 85% of them are that if you could dissolve that clot and make it disappear quickly that you could make the patient better so TPA is a clot dissolver it's made by Genentech just up the road and that is the mainstay of treatment that was the first treatment ever proven to be effective for stroke the amount of", "start_timestamp": "00:40:59", "end_timestamp": "00:41:39", "start_second": 2459, "end_second": 2499, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=2459s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "injury that's going to happen from the stroke is going to depend on how long the blood vessel is blocked by the clot and how good the collateral flow is right so if you have other blood vessels that are helping out then the stroke will progress much more slowly so again we hope that we have good collateral flow if we ever
have a stroke we also hope that that clot can be dissolved quickly so TPA stands for tissue plasminogen activator and what that is is that in everybody's blood they have something called plasminogen", "start_timestamp": "00:41:39", "end_timestamp": "00:42:10", "start_second": 2499, "end_second": 2530, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=2499s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "which is basically a key that can unlock a clot and if you activate that plasminogen it will start to tear these ropes apart that are the meshwork of a clot the ropes are called fibrin so when you activate the plasminogen to something called plasmin it starts to eat away at these little ropes and it can dissolve the clot so your own system is going to try to do it on its own but if it's a big clot you probably won't be able to dissolve it on your own and you probably won't have a TIA you'll need some help from more", "start_timestamp": "00:42:10", "end_timestamp": "00:42:45", "start_second": 2530, "end_second": 2565, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=2530s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "TPA to dissolve that clot so the FDA just as you mentioned said we have three hours to give it so the approval of the FDA is that we can give it if somebody shows up and we can get it into them within three hours when they come to the ER we have to take a picture of the brain because you can imagine if you have one of those bleeding strokes the worst thing you could do would be to give TPA they're already bleeding in the brain you don't want to give a clot dissolving medicine so it means we got to get a picture of the brain we have to do some", "start_timestamp": "00:42:45", "end_timestamp": "00:43:17", "start_second": 2565, "end_second": 2597, "url":
"https://www.youtube.com/watch?v=qoGBO3q5ikI&t=2565s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "blood tests we have to check what's going on even if things go very quickly in the ER that typically is going to take at least 45 minutes to do all those things you need to do so a three hour treatment window means the patient better have arrived within two hours of when the symptoms started or we can't give it right so if you woke up with the symptoms we don't know when it started right if nobody was there to call 9-1-1 we don't know when it was started so this is the biggest problem we have a very tight treatment window there have", "start_timestamp": "00:43:17", "end_timestamp": "00:43:49", "start_second": 2597, "end_second": 2629, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=2597s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "been studies that show that TPA actually works out to about four and a half or five hours and in Europe the equivalent of the FDA has approved it to four and a half hours so it's Stanford and many other stroke centers even though our FDA says three hours we'll go out to about four and a half five hours with TPA but we're doing this against the advice of the FDA because they say give it very very early but that means that only about 5% of stroke patients are going to get it because they don't come in soon enough we've done a study here at", "start_timestamp": "00:43:49", "end_timestamp": "00:44:23", "start_second": 2629, "end_second": 2663, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=2629s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "Stanford showing very nice benefit of TPA out to six hours if you can take a picture of the brain and show that the 
stroke has not gotten very big in that six hours so this is called penumbral imaging the penumbra means the part of the brain that is likely to die over the next few hours but is not yet dead so to be able to get to this we developed an MRI sequence called diffusion imaging it was one of the Stanford faculty members who discovered this MRI technique which allows you to see the stroke as it's occurring before", "start_timestamp": "00:44:23", "end_timestamp": "00:44:59", "start_second": 2663, "end_second": 2699, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=2663s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "that we would just do CT scans and a CT scan doesn't show up the stroke usually until 4 6 8 10 hours it's too late so we want to see the stroke as it's developing so with this technique you can see this pink area is the tissue that is irreversibly injured it's dead but the green area is the tissue that's likely to die over the next several hours so if a patient comes in like this one at six hours after symptom onset and they have a small amount of tissue that's irreversibly injured but a large amount that is still salvageable", "start_timestamp": "00:44:59", "end_timestamp": "00:45:32", "start_second": 2699, "end_second": 2732, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=2699s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "we call that a large penumbra that means that there's areas of salvageable tissue and for these patients we're going to be very aggressive we're going to try to get that blood clot dissolved even though they're beyond the approved window for TPA so we can give TPA a little bit longer than it's approved or we can physically go after the clot and the first mechanical device to be approved by the FDA to pull clots out of the brain is this Merci
retriever this is from a company in Mountain View called concentric which has subsequently", "start_timestamp": "00:45:32", "end_timestamp": "00:46:05", "start_second": 2732, "end_second": 2765, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=2732s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "been sold to Stryker so both TPA and this first approved mechanical device are local Bay Area products so the person who developed this was actually a French neuroradiologist and you can imagine was thinking maybe a little bit about wine and how you get the cork out of a wine bottle it's kind of the same idea of how you get a blood clot out of a blood vessel so this catheter is a tube that you're going to put all the way up into the brain and you're going to try to get this corkscrew like device into the clot so", "start_timestamp": "00:46:05", "end_timestamp": "00:46:40", "start_second": 2765, "end_second": 2800, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=2765s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "here's a blood clot in the brain you bring this wire up by putting it through a blood vessel in the leg all the way up into the brain and it turns into a corkscrew you try to screw it into the clot get a hold of the clot and then pull it out and this was the first device we now have more sophisticated devices that don't look like cork screws but basically do the same thing they go up into the brain they capture the clot and they bring it out those devices have approvals out to eight hours so it gives us more time but still many patients", "start_timestamp": "00:46:40", "end_timestamp": "00:47:16", "start_second": 2800, "end_second": 2836, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=2800s", "title": "Stroke: The Basics", "thumbnail": 
"https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "don't come into the emergency room until the next day right they think they slept on their arm wrong and then they wait till it doesn't get better the next day they call their primary care doctor and say hey my arm doesn't move they say go to the ER but it's too late okay we're gonna end and we're gonna end with this case study this is a patient I had a few years ago 45 year old man who was recovering from a surgical procedure at Stanford and he was there in his room with his wife when he had the abrupt onset of left-sided paralysis it", "start_timestamp": "00:47:16", "end_timestamp": "00:47:48", "start_second": 2836, "end_second": 2868, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=2836s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "was confused so what do you think the problem might be right side of his brain yeah yeah he didn't recognize that he was having a stroke okay so it was again that area of the brain the parietal lobe where you don't recognize that you're having a stroke he knew something was going on and you'll see in the video that he was aware but his wife was on top of it and alerted the staff immediately that something was up with her husband now what do you think the treatment option might be ROK's up at the top yeah on the the right side you want to give somebody", "start_timestamp": "00:47:48", "end_timestamp": "00:48:28", "start_second": 2868, "end_second": 2908, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=2868s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "like this TPA why not he just had a surgical procedure right so TPA you have to think carefully before you give it because TPA can be a double-edged sword right it can cause bleeding as well as dissolve a blood
clot so somebody who's either already got a big stroke or who has just had a surgical procedure TPA could be a problem so what other option would you have that wouldn't involve TPA yeah one of these clot removal devices and this was a patient who was in the hospital very shortly after that corkscrew-like", "start_timestamp": "00:48:28", "end_timestamp": "00:49:07", "start_second": 2908, "end_second": 2947, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=2908s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "catheter was approved so the patient went straight to the cath lab they pulled it out and he got all better from the stroke so dramatic recovery and let me show you the video so you can look at the different perspective of the patient and his wife my speech felt a little labored I felt tired what I felt the tactile sensation seemed to be okay every place on my body and I could do that it was just that motor skills were a challenge it's not good that you have a stroke but we sure had a stroke of luck to be here so he was completely", "start_timestamp": "00:49:07", "end_timestamp": "00:50:23", "start_second": 2947, "end_second": 3023, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=2947s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "paralyzed and did you hear his description a little problem with my hand a little motor problem so yeah he never would have called 9-1-1 even though he's completely paralyzed on the left side so it is time for some questions or comments yeah go ahead yeah so it's a great great question the question is what's what's this three hour window business why are we being so restricted and we don't like being restricted the reason for the three hour window is because there were big studies done using the three hour",
"start_timestamp": "00:50:23", "end_timestamp": "00:51:09", "start_second": 3023, "end_second": 3069, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=3023s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "window back in 1995 showing benefit of patients treated up to three hours and subsequent studies that have looked at later time windows have not been as positive as I said there was a big European study that showed benefit to four and a half so we don't like to treat stroke based on time because everybody's brain is different particularly the collaterals so we see patients like this one that I showed you here who at six hours had TPA and had this clot dissolved and had a fantastic recovery from the stroke because this", "start_timestamp": "00:51:09", "end_timestamp": "00:51:47", "start_second": 3069, "end_second": 3107, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=3069s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "patient had good collaterals and only little damage was done by six hours we also see patients who at an hour and a half have massive damage so TPA will not help them they came in within the 3-hour window but the damage was already done so we need a different approach rather than TPA so the arbitrary time windows we don't like and what we've been doing is studies to try to show that it makes a lot more sense to look at the brain with sophisticated imaging and then try to figure out who has salvageable tissue", "start_timestamp": "00:51:47", "end_timestamp": "00:52:18", "start_second": 3107, "end_second": 3138, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=3107s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "so if you wake up with a stroke and
come to Stanford and we take a picture and it looks like this we're going to go after that blood clot if you show up two hours into the stroke and the tissue is already massively injured we're not going to remove the blood clot we're going to try to do something to reduce the swelling because a big stroke it's going to cause even more trouble from the swelling so it's a great question but that's the rule of the FDA yeah what difference genetic differences between people it would say some people", "start_timestamp": "00:52:18", "end_timestamp": "00:52:50", "start_second": 3138, "end_second": 3170, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=3138s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "haven't need more cholesterol than other people no no no you know there's really not some people who need more it there was some concern that if you have very little cholesterol that your blood vessels may be a little bit more likely to cause a hemorrhage and there is some evidence that if you look at some Asian populations who tend to run very low cholesterol that they may have slightly higher brain hemorrhage rates but when you look at the studies that use the high dose of the statin versus a medium dose both were heart attack and for", "start_timestamp": "00:52:50", "end_timestamp": "00:53:28", "start_second": 3170, "end_second": 3208, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=3170s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "stroke the lower the cholesterol goes the lower the stroke rate it's like the blood pressure the lower the better so there may be some point you know you can't go down to zero right it's like the blood pressure you can't go to low but in general what we found from the studies is lower cholesterol is lower heart attack and stroke rates the goal for 
somebody varies on what their situation is but if you've had a stroke from atherosclerosis that we want to get down below 80 on what's called the LDL the low-density right the LDL the bad", "start_timestamp": "00:53:28", "end_timestamp": "00:54:01", "start_second": 3208, "end_second": 3241, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=3208s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "cholesterol by HDL you want to get it as high as possible yeah so the question is that passing out after eating so postprandial syncope that's not a stroke but it is a problem which can be difficult to treat and usually it's eating you know multiple small meals to try to deal with that you know it's difficult to comment about individual people's medical problems from from the podium but usually it doesn't get worse it's usually something that can be managed yeah okay up front yeah there was a team some years ago at University", "start_timestamp": "00:54:01", "end_timestamp": "00:54:56", "start_second": 3241, "end_second": 3296, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=3241s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "of Iowa - look the ischemic optic neuropathy and found that it correlated actually with - too little blood pressure in the optic artery causing a loss of blood flow of the blood vessel itself collapsed and and they showed that the coupling of taking an antihypertensive when going to bed with with the natural diurnal Brides and follow the blood pressure that was the worst time to be taking a medication like that and it seems that when I mentioned this to people in the medical profession most people oh that's that that's new to", "start_timestamp": "00:54:56", "end_timestamp": "00:55:38", "start_second": 3296, "end_second": 3338, "url": 
"https://www.youtube.com/watch?v=qoGBO3q5ikI&t=3296s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "me I've never heard of that but I'm wondering whether anybody is looking at these 30% of cases that are idiopathic whether there's any chance that there was a blood vessel in the brain that actually collapsed because of because I heard a blood pressure drop and medication was a strong antihypertensive at the same time actually caused blood pressure blood pressure to drop in and blood flow to the brain to be kind of off to that or that part of the brain we kind of because there wasn't sufficient planning for this yeah so it's a", "start_timestamp": "00:55:38", "end_timestamp": "00:56:11", "start_second": 3338, "end_second": 3371, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=3338s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "complicated question but what's being asked is that we talked about blood pressure being too high as a cause a stroke can blood pressure being too low because a stroke as well and the answer is yes that there certainly are situations particularly for patients who have you know major blockages of vessels where you know the plumbing is blocked up you can imagine if you don't have enough pressure you're going to run into trouble and typically the type of strokes we see from that are called watershed strokes where you have two", "start_timestamp": "00:56:11", "end_timestamp": "00:56:40", "start_second": 3371, "end_second": 3400, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=3371s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "major vessels and in between them is going to be the area of low flow just like if you have two sprinklers right and you turn down the 
pressure you're going to get a dry area in between so there are issues and like most things there there's a you know an advantage and a disadvantage you do not want blood pressure to go too low under certain circumstances and the other part of the question was that you know the timing of blood pressure medicines which is very important what you can do is a 24 hour blood pressure monitor and sometimes", "start_timestamp": "00:56:40", "end_timestamp": "00:57:09", "start_second": 3400, "end_second": 3429, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=3400s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "it'll be very surprising to see the over the course of the day things are fluctuating a lot and just taking a blood pressure in the office can be very misleading often times people's blood pressure is quite high when they go in because we're scary with our white coats on so it can be helpful and planning out how to give the blood pressure medicine certainly is an important issue in the back yeah it's a great question so question is had symptoms that may or may not have been a TIA the doctors didn't agree that", "start_timestamp": "00:57:09", "end_timestamp": "00:58:03", "start_second": 3429, "end_second": 3483, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=3429s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "it's not unusual with a stroke it's pretty easy to say yes or no because the picture will say is there brain injury or not TIA no brain injury so there are lots of things that can mimic it the good news is that most of those treatments for a TIA are things that we should all do whether we've had a TIA or not right blood pressure under control cholesterol under control taking an aspirin is a pretty benign approach so it is not unusual to have controversies even
neurologists will disagree even stroke specialists who see a patient", "start_timestamp": "00:58:03", "end_timestamp": "00:58:31", "start_second": 3483, "end_second": 3511, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=3483s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "they cannot agree on a TIA if you image it at the time you can look at the blood flow and actually see a blood flow reduction but that doesn't happen very often that you're in and get the picture in the midst of the TIA or right afterwards where we may see a footprint of that blood flow reduction so not unusual but the type of things that you're going to do after a TIA are good things to do anyway yes great question does every ER have TPA and you know many years ago the answer was certainly no that you know people", "start_timestamp": "00:58:31", "end_timestamp": "00:59:14", "start_second": 3511, "end_second": 3554, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=3511s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "were slow to adopt TPA because it was a big change from what they were used to doing all this rushing around stroke patients there was no treatment they were like the last priority so in the Bay Area most of the hospitals now are what are called primary stroke centers meaning that they have TPA and they have a plan and then some stroke centers have the comprehensive designation meaning they can go up with catheters and do perfusion imaging so in our area we're pretty lucky in that if you call 9-1-1 at this point it's almost", "start_timestamp": "00:59:14", "end_timestamp": "00:59:46", "start_second": 3554, "end_second": 3586, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=3554s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"}
{"video_id": "qoGBO3q5ikI", "text": "certain that the hospital you're going to go to is TPA ready that wasn't the case a decade ago TPA is typically administered the question is how's TPA administered typically it's just given by vein intravenous infusion sometimes when we go up there with one of those catheters you try to pull out the clot and little pieces break loose and then we'll squirt some TPA to get the smaller clots so it can be administered directly up into the clot in the brain but typically it's through the vein question in the back there yes your head", "start_timestamp": "00:59:46", "end_timestamp": "01:00:25", "start_second": 3586, "end_second": 3625, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=3586s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "the question is for the statin agents does it matter if it's name brand or generic there doesn't seem to be a key difference there so I think the generic is fine far in the back yes the question is where is the catheter inserted it's inserted in the femoral artery and then up into the brain yes so the femoral artery is right next to the groin area so it's the top of the leg where you get that blood vessel yeah so question can you go elsewhere it doesn't really take any time to push it from the leg up to the", "start_timestamp": "01:00:25", "end_timestamp": "01:01:03", "start_second": 3625, "end_second": 3663, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=3625s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "brain versus the arm and it's easier it's a bigger target it's a little safer to go into the leg than the arm but if the leg is all blocked up you can go into the arm years ago when they first started to do this they went through the neck that didn't go so well up right I'm
interested in you mentioned there are those discrete situations that they would be used in or shouldn't many coumadin people be thinking about going on to those and secondly do the hospitals in our area generally yeah so the question about these new", "start_timestamp": "01:01:03", "end_timestamp": "01:01:37", "start_second": 3663, "end_second": 3697, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=3663s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "anticoagulants and they're there for so some insurance was call cover one versus the other the first one on the market was called Pradaxa you've probably heard about that one most hospitals would you're gonna have access to that and many will have the other ones that have come out more recently so these are typically for patients with atrial fibrillation they have not been shown to be effective for people who have heart valve problems like a mechanical heart valve so it kind of depends on the reason that somebody's on coumadin if", "start_timestamp": "01:01:37", "end_timestamp": "01:02:09", "start_second": 3697, "end_second": 3729, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=3697s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "they're on for atrial fibrillation and they're having some trouble with the coumadin there's a you know these offers some advantages for somebody who has a stroke from atrial fibrillation most doctors now would be talking about using one of these newer agents rather than going with coumadin because you don't have to do the frequent blood tests there's not so much in the way of food and drug interactions so they're much more convenient for the patient back there yeah you sorry I don't know anybody's name yeah so the question is can mini-strokes", "start_timestamp": "01:02:09", "end_timestamp": 
"01:02:47", "start_second": 3729, "end_second": 3767, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=3729s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "lead to dementia so a TI a doesn't lead to dementia because as we said by definition a TI a doesn't damage the brain but little tiny strokes they can lead to dementia so dementia being a lack of cognitive ability so you can imagine if you're starting to knock off lots of little spots the connections between different areas of the brain are going to be affected those typically occur again in people with high blood pressure that the small little blood vessels deep in the brain get narrowed and you start to get these little tiny", "start_timestamp": "01:02:47", "end_timestamp": "01:03:14", "start_second": 3767, "end_second": 3794, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=3767s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "strokes that people often call lacunar strokes and they can cause dementia so we don't want those to build up if there's if we can prevent a second row there go ahead yeah so question is about omega-3 and it's been a controversial area we find that the the statins have much more evidence to support them so we would recommend the statin agents the guidelines recommend the statins over the omega-3 question in the question over there have we expired our time I think one more question okay our curricular I have friends who", "start_timestamp": "01:03:14", "end_timestamp": "01:04:13", "start_second": 3794, "end_second": 3853, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=3794s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "yeah so the final question is about these new anticoagulants and are they easier to 
regulate than coumadin and they are definitely easier to regulate they were all tested all four of them in huge huge trials where one half took coumadin and had all the regulation issues and the other half took these new medicines with no regulation okay so you weren't upping and adjusting the dose like you have to do with coumadin and all four of these you know were either as good or better than coumadin in these big trials so they were as good or", "start_timestamp": "01:04:13", "end_timestamp": "01:04:47", "start_second": 3853, "end_second": 3887, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=3853s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "qoGBO3q5ikI", "text": "better without all that regulation so somebody whose blood thinning is jumping all over the place on coumadin these are much smoother now the disadvantage is that we don't have great tests to monitor how much is in the blood with coumadin we can tell exactly how much is in there with the blood tests the newer agents that's a little bit trickier and then there's a little bit of an issue with reversal of these newer agents coumadin is hard to reverse also but we have medicines with a lot more we have a lot more experience", "start_timestamp": "01:04:47", "end_timestamp": "01:05:19", "start_second": 3887, "end_second": 3919, "url": "https://www.youtube.com/watch?v=qoGBO3q5ikI&t=3887s", "title": "Stroke: The Basics", "thumbnail": "https://i.ytimg.com/vi/qoGBO3q5ikI/maxresdefault.jpg"} {"video_id": "LHXXI4-IEns", "text": "hi and welcome to an illustrated guide to recurrent neural networks I'm Michael also known as Learned Vector I'm a machine learning engineer in the natural language processing and voice assistance space if you're just getting started in machine learning and want to get some intuition behind recurrent neural networks this video is for you if you want to get into machine learning recurrent neural
networks are a powerful technique that's important to understand if you use smart phones and frequently surf the internet odds are you use", "start_timestamp": "00:00:00", "end_timestamp": "00:00:27", "start_second": 0, "end_second": 27, "url": "https://www.youtube.com/watch?v=LHXXI4-IEns&t=0s", "title": "Illustrated Guide to Recurrent Neural Networks: Understanding the Intuition", "thumbnail": "https://i.ytimg.com/vi/LHXXI4-IEns/maxresdefault.jpg"} {"video_id": "LHXXI4-IEns", "text": "applications that leverage RNNs recurrent neural networks are used in speech recognition language translation stock prediction they're even used in image recognition to describe the content in pictures so I know there are many guides on recurrent neural networks but I want to share illustrations along with an explanation of how I came to understand it in this video I'm going to avoid all the math and focus on the intuition behind RNNs instead by the end of this video you should have a good understanding of RNNs and hopefully", "start_timestamp": "00:00:27", "end_timestamp": "00:00:57", "start_second": 27, "end_second": 57, "url": "https://www.youtube.com/watch?v=LHXXI4-IEns&t=27s", "title": "Illustrated Guide to Recurrent Neural Networks: Understanding the Intuition", "thumbnail": "https://i.ytimg.com/vi/LHXXI4-IEns/maxresdefault.jpg"} {"video_id": "LHXXI4-IEns", "text": "have that light bulb moment so RNNs are neural networks that are good at modeling sequence data to understand what that means let's do a thought experiment say you take a still snapshot of a ball moving in time let's also say you want to predict the direction that the ball is moving so with only the information that you see on the screen how would you do this well you can go ahead and take a guess but any answer you come up with would be just a random guess without knowledge of where the ball has been you wouldn't have enough", "start_timestamp": "00:00:57", "end_timestamp": "00:01:30", "start_second": 57,
"end_second": 90, "url": "https://www.youtube.com/watch?v=LHXXI4-IEns&t=57s", "title": "Illustrated Guide to Recurrent Neural Networks: Understanding the Intuition", "thumbnail": "https://i.ytimg.com/vi/LHXXI4-IEns/maxresdefault.jpg"} {"video_id": "LHXXI4-IEns", "text": "data to predict where it's going if you record many snapshots of the ball's position in succession you will have enough information to make a better prediction so this is a sequence a particular order in which one thing follows another with this information you can now see that the ball is moving to the right sequence data comes in many forms audio is a natural sequence you can chop up an audio spectrogram into chunks and feed that into RNNs text is another form of sequence you can break text up into a sequence of characters or", "start_timestamp": "00:01:30", "end_timestamp": "00:02:03", "start_second": 90, "end_second": 123, "url": "https://www.youtube.com/watch?v=LHXXI4-IEns&t=90s", "title": "Illustrated Guide to Recurrent Neural Networks: Understanding the Intuition", "thumbnail": "https://i.ytimg.com/vi/LHXXI4-IEns/maxresdefault.jpg"} {"video_id": "LHXXI4-IEns", "text": "sequence of words okay so RNNs are good at processing sequence data for predictions but how do they do that they do that by having a concept I like to call sequential memory to get a good intuition behind what sequential memory means I'd like to invite you to say the alphabet in your head go on give it a try that was pretty easy right if you were taught the specific sequence it should come easily to you now try saying the alphabet backward I bet that was much harder unless you practiced the sequence before you'll likely have a", "start_timestamp": "00:02:03", "end_timestamp": "00:02:42", "start_second": 123, "end_second": 162, "url": "https://www.youtube.com/watch?v=LHXXI4-IEns&t=123s", "title": "Illustrated Guide to Recurrent Neural Networks: Understanding the Intuition", "thumbnail":
"https://i.ytimg.com/vi/LHXXI4-IEns/maxresdefault.jpg"} {"video_id": "LHXXI4-IEns", "text": "hard time here's a fun one say it starting at the letter F at first you'll struggle with the first few letters but then after your brain picks up the pattern the rest will come naturally so there's a very logical reason why this can be difficult you learned the alphabet as a sequence sequential memory is a mechanism that makes it easier for your brain to recognize sequence patterns all right so RNNs have this abstract concept of sequential memory but how the heck does an RNN replicate that concept well let's look at a traditional neural network", "start_timestamp": "00:02:42", "end_timestamp": "00:03:14", "start_second": 162, "end_second": 194, "url": "https://www.youtube.com/watch?v=LHXXI4-IEns&t=162s", "title": "Illustrated Guide to Recurrent Neural Networks: Understanding the Intuition", "thumbnail": "https://i.ytimg.com/vi/LHXXI4-IEns/maxresdefault.jpg"} {"video_id": "LHXXI4-IEns", "text": "also known as a feed-forward neural network it has an input layer hidden layer and output layer how do we get a feed-forward neural network to be able to use previous information to affect later ones what if we had a loop in the neural network that could pass previous information forward and that's essentially what a recurrent neural network does an RNN has a looping mechanism that acts as a highway to allow information to flow from one step to the next this information is the hidden state which is a representation", "start_timestamp": "00:03:14", "end_timestamp": "00:03:43", "start_second": 194, "end_second": 223, "url": "https://www.youtube.com/watch?v=LHXXI4-IEns&t=194s", "title": "Illustrated Guide to Recurrent Neural Networks: Understanding the Intuition", "thumbnail": "https://i.ytimg.com/vi/LHXXI4-IEns/maxresdefault.jpg"} {"video_id": "LHXXI4-IEns", "text": "of previous inputs let's run through an RNN use case to have a better understanding of how this works let's say we want to build a chatbot they're pretty popular nowadays let's say the chatbot can classify intentions from the user's inputted text to tackle this problem first we're going to encode the sequence of text using an RNN then we're going to feed the RNN output into a feed-forward neural network which will classify the intents okay so a user types in what time is it to start we break up the sentence into", "start_timestamp": "00:03:43", "end_timestamp": "00:04:16", "start_second": 223, "end_second": 256, "url": "https://www.youtube.com/watch?v=LHXXI4-IEns&t=223s", "title": "Illustrated Guide to Recurrent Neural Networks: Understanding the Intuition", "thumbnail": "https://i.ytimg.com/vi/LHXXI4-IEns/maxresdefault.jpg"} {"video_id": "LHXXI4-IEns", "text": "individual words RNNs work sequentially so we feed it one word at a time the first step is to feed what into the RNN the RNN encodes what and produces an output for the next step we feed the word time and the hidden state from the previous step remember that the hidden state represents information from all previous steps the RNN now has information about the words what and time we repeat this process until the final step you can see that at the final step the RNN has encoded information from all the words in the previous steps since", "start_timestamp": "00:04:16", "end_timestamp": "00:04:52", "start_second": 256, "end_second": 292, "url": "https://www.youtube.com/watch?v=LHXXI4-IEns&t=256s", "title": "Illustrated Guide to Recurrent Neural Networks: Understanding the Intuition", "thumbnail": "https://i.ytimg.com/vi/LHXXI4-IEns/maxresdefault.jpg"} {"video_id": "LHXXI4-IEns", "text": "the final output was created from the rest of the sequence we should be able to take the final output and pass it to the feed-forward layer to classify an intent for those of you who like looking at code here is some Python showcasing the control flow first you initialize your network layers and the initial hidden state the shape and dimensions of the hidden 
state will be dependent on the shape and dimension of your recurrent neural network then you loop through your inputs pass a word and the hidden state into the RNN and the RNN returns", "start_timestamp": "00:04:52", "end_timestamp": "00:05:23", "start_second": 292, "end_second": 323, "url": "https://www.youtube.com/watch?v=LHXXI4-IEns&t=292s", "title": "Illustrated Guide to Recurrent Neural Networks: Understanding the Intuition", "thumbnail": "https://i.ytimg.com/vi/LHXXI4-IEns/maxresdefault.jpg"} {"video_id": "LHXXI4-IEns", "text": "the output and a modified hidden state this modified hidden state should now contain information from all your previous steps you continue to loop until you're out of words last you pass the output to the feed-forward layer and it returns a prediction and that's it the control flow of doing a forward pass of a recurrent neural network is a for loop okay now back to our visualization you may have noticed the odd distribution of colors in the hidden states this is to illustrate an issue with RNNs known as short-term", "start_timestamp": "00:05:23", "end_timestamp": "00:05:54", "start_second": 323, "end_second": 354, "url": "https://www.youtube.com/watch?v=LHXXI4-IEns&t=323s", "title": "Illustrated Guide to Recurrent Neural Networks: Understanding the Intuition", "thumbnail": "https://i.ytimg.com/vi/LHXXI4-IEns/maxresdefault.jpg"} {"video_id": "LHXXI4-IEns", "text": "memory short-term memory is caused by the infamous vanishing gradient problem which is also prevalent in other neural network architectures so as the RNN processes more steps it has trouble retaining information from previous steps as you can see the information from the words what and time is almost non-existent at the final step short-term memory and the vanishing gradient are due to the nature of the back propagation algorithm used to train and optimize neural networks to understand why this is let's take a look at the effects of", "start_timestamp": "00:05:54", "end_timestamp": 
"00:06:24", "start_second": 354, "end_second": 384, "url": "https://www.youtube.com/watch?v=LHXXI4-IEns&t=354s", "title": "Illustrated Guide to Recurrent Neural Networks: Understanding the Intuition", "thumbnail": "https://i.ytimg.com/vi/LHXXI4-IEns/maxresdefault.jpg"} {"video_id": "LHXXI4-IEns", "text": "back propagation on a deep feed-forward neural network training a neural network has three major steps first it does a forward pass and makes a prediction second it compares the prediction to the ground truth using a loss function the loss function outputs an error value which is an estimate of how badly the network is performing last it uses the error value to do back propagation which calculates the gradients for each node in the network the gradient is a value used to adjust the network's internal weights allowing", "start_timestamp": "00:06:24", "end_timestamp": "00:06:55", "start_second": 384, "end_second": 415, "url": "https://www.youtube.com/watch?v=LHXXI4-IEns&t=384s", "title": "Illustrated Guide to Recurrent Neural Networks: Understanding the Intuition", "thumbnail": "https://i.ytimg.com/vi/LHXXI4-IEns/maxresdefault.jpg"} {"video_id": "LHXXI4-IEns", "text": "the network to learn the bigger the gradient the bigger the adjustments and vice versa here's where the problem lies when doing back propagation each node in a layer calculates its gradient with respect to the gradients in the layer before it so if the adjustments in the layer before it are small then the adjustments in the current layer will be even smaller this causes gradients to exponentially shrink as they back propagate down the earlier layers fail to do any learning as their internal weights are barely being", "start_timestamp": "00:06:55", "end_timestamp": "00:07:26", "start_second": 415, "end_second": 446, "url": "https://www.youtube.com/watch?v=LHXXI4-IEns&t=415s", "title": "Illustrated Guide to Recurrent Neural Networks: Understanding the Intuition", "thumbnail": 
"https://i.ytimg.com/vi/LHXXI4-IEns/maxresdefault.jpg"} {"video_id": "LHXXI4-IEns", "text": "adjusted due to the extremely small gradients and that's the vanishing gradient problem let's see how this applies to recurrent neural networks you can think of each time step in a recurrent neural network as a layer to train a recurrent neural network you use an application of backpropagation called back propagation through time the gradient values will exponentially shrink as they propagate through each time step again the gradient is used to make adjustments in the neural network's weights thus allowing it to learn small gradients mean small", "start_timestamp": "00:07:26", "end_timestamp": "00:07:57", "start_second": 446, "end_second": 477, "url": "https://www.youtube.com/watch?v=LHXXI4-IEns&t=446s", "title": "Illustrated Guide to Recurrent Neural Networks: Understanding the Intuition", "thumbnail": "https://i.ytimg.com/vi/LHXXI4-IEns/maxresdefault.jpg"} {"video_id": "LHXXI4-IEns", "text": "adjustments this causes the early layers not to learn because of the vanishing gradients the RNN doesn't learn the long-range dependencies across time steps this means that there is a possibility that the words what and time are not considered when trying to predict a user's intention the network then has to make its best guess with is it that's pretty ambiguous and would be difficult even for a human so not being able to learn on earlier time steps causes the network to have short-term memory okay so RNNs suffer from short-term", "start_timestamp": "00:07:57", "end_timestamp": "00:08:31", "start_second": 477, "end_second": 511, "url": "https://www.youtube.com/watch?v=LHXXI4-IEns&t=477s", "title": "Illustrated Guide to Recurrent Neural Networks: Understanding the Intuition", "thumbnail": "https://i.ytimg.com/vi/LHXXI4-IEns/maxresdefault.jpg"} {"video_id": "LHXXI4-IEns", "text": "memory so how do we combat that to mitigate short-term memory two specialized recurrent neural networks were created one called long short-term memory or LSTM for short the other is gated recurrent units or GRUs LSTMs and GRUs essentially function just like RNNs but they're capable of learning long-term dependencies using a mechanism called gates these gates are different tensor operations that can learn what information to add or remove to the hidden state because of this ability short-term memory is less of an", "start_timestamp": "00:08:31", "end_timestamp": "00:09:04", "start_second": 511, "end_second": 544, "url": "https://www.youtube.com/watch?v=LHXXI4-IEns&t=511s", "title": "Illustrated Guide to Recurrent Neural Networks: Understanding the Intuition", "thumbnail": "https://i.ytimg.com/vi/LHXXI4-IEns/maxresdefault.jpg"} {"video_id": "LHXXI4-IEns", "text": "issue for them to sum this up RNNs are good for processing sequence data for predictions but suffer from short-term memory the short-term memory issue for vanilla RNNs doesn't mean you should skip them completely and only use the more involved versions like LSTMs or GRUs RNNs have the benefit of training faster and using fewer computational resources that's because there are fewer tensor operations to compute you could use LSTMs or GRUs when you expect to model longer sequences with long-term dependencies if you're interested in", "start_timestamp": "00:09:04", "end_timestamp": "00:09:34", "start_second": 544, "end_second": 574, "url": "https://www.youtube.com/watch?v=LHXXI4-IEns&t=544s", "title": "Illustrated Guide to Recurrent Neural Networks: Understanding the Intuition", "thumbnail": "https://i.ytimg.com/vi/LHXXI4-IEns/maxresdefault.jpg"} {"video_id": "S27pHKBEp30", "text": "all right cool well thanks everybody um so I'm gonna give the second talk tonight which I'm not crazy about and I don't want this pattern to repeat but you know Andrew and I wanted to kick this series off and felt like me talking twice was better than not but we're gonna get more diversity of folks if any of you want to give a talk 
yourselves or know somebody who you think might that'd be awesome but a topic that I feel is important for practitioners to understand is a real sea change in", "start_timestamp": "00:00:00", "end_timestamp": "00:00:38", "start_second": 0, "end_second": 38, "url": "https://www.youtube.com/watch?v=S27pHKBEp30&t=0s", "title": "LSTM is dead. Long Live Transformers!", "thumbnail": "https://i.ytimg.com/vi/S27pHKBEp30/maxresdefault.jpg"} {"video_id": "S27pHKBEp30", "text": "natural language processing that's you know all of like 12 months old but is one of these things I think is incredibly significant in the field and that is the advance of the transformers so the outline for this talk is to start out with some background on natural language processing and sequence modeling and then talk about the LSTM why it's awesome and amazing but still not good enough and then go into transformers and talk about how they work and why they're amazing so for background on natural language processing NLP I'm gonna be talking just", "start_timestamp": "00:00:38", "end_timestamp": "00:01:18", "start_second": 38, "end_second": 78, "url": "https://www.youtube.com/watch?v=S27pHKBEp30&t=38s", "title": "LSTM is dead. 
Long Live Transformers!", "thumbnail": "https://i.ytimg.com/vi/S27pHKBEp30/maxresdefault.jpg"} {"video_id": "S27pHKBEp30", "text": "about a subset of NLP which is the supervised learning part of it so not structured prediction or sequence prediction but where you're taking the document as some input and trying to predict some fairly straightforward output about it like is this document spam right and so what this means is that you need to somehow take your document and represent it as a fixed-size vector because I'm not aware of any linear algebra that works on vectors of variable dimensionality and the challenge with this is that documents are of variable length right", "start_timestamp": "00:01:18", "end_timestamp": "00:01:58", "start_second": 78, "end_second": 118, "url": "https://www.youtube.com/watch?v=S27pHKBEp30&t=78s", "title": "LSTM is dead. Long Live Transformers!", "thumbnail": "https://i.ytimg.com/vi/S27pHKBEp30/maxresdefault.jpg"} {"video_id": "S27pHKBEp30", "text": "so you have to come up with some way of taking that document and meaningfully encoding it into a fixed-size vector right so the classic way of doing this is the bag of words right where you have one dimension per unique word in your vocabulary so English has I don't know about a hundred thousand words in the vocabulary right and so you have a hundred thousand dimensional vector most of them are zero because most words are not present in your document and the ones that are have some value that's maybe a count or", "start_timestamp": "00:01:58", "end_timestamp": "00:02:26", "start_second": 118, "end_second": 146, "url": "https://www.youtube.com/watch?v=S27pHKBEp30&t=118s", "title": "LSTM is dead. 
Long Live Transformers!", "thumbnail": "https://i.ytimg.com/vi/S27pHKBEp30/maxresdefault.jpg"} {"video_id": "S27pHKBEp30", "text": "tf-idf score or something like that and that is your vector and this naturally leads to sparse data where again it's mostly zero so you don't store the zeros because that's computationally inefficient you store lists of position-value tuples or maybe just a list of positions and this makes the computation much cheaper and this works reasonably well a key limitation is that when you're looking at an actual document order matters right these two documents mean completely different things right but a bag of words model will score them", "start_timestamp": "00:02:26", "end_timestamp": "00:03:02", "start_second": 146, "end_second": 182, "url": "https://www.youtube.com/watch?v=S27pHKBEp30&t=146s", "title": "LSTM is dead. Long Live Transformers!", "thumbnail": "https://i.ytimg.com/vi/S27pHKBEp30/maxresdefault.jpg"} {"video_id": "S27pHKBEp30", "text": "identically every single time because they have the exact same vectors for what words are present so the solution to that in this context is n-grams you can have bigrams which are every pair of possible words or trigrams for every combination of three words which would easily distinguish between those two but now you're up to what is that a quadrillion dimensional vector and you can do it but you know you start running into all sorts of problems when you walk down that path so in neural network land the natural way to solve", "start_timestamp": "00:03:02", "end_timestamp": "00:03:37", "start_second": 182, "end_second": 217, "url": "https://www.youtube.com/watch?v=S27pHKBEp30&t=182s", "title": "LSTM is dead. 
Long Live Transformers!", "thumbnail": "https://i.ytimg.com/vi/S27pHKBEp30/maxresdefault.jpg"} {"video_id": "S27pHKBEp30", "text": "this problem is the RNN which is the recurrent neural network not the recursive neural network I've made that mistake but RNNs are an approach to this which asks the question how do you calculate a function on a variable-length set of inputs and they answer it using a for loop in math where they recursively define the output at any stage as a function of the inputs at the previous stages and the previous output and then for the purpose of supervised learning the final output is just the final hidden state here and so", "start_timestamp": "00:03:37", "end_timestamp": "00:04:16", "start_second": 217, "end_second": 256, "url": "https://www.youtube.com/watch?v=S27pHKBEp30&t=217s", "title": "LSTM is dead. Long Live Transformers!", "thumbnail": "https://i.ytimg.com/vi/S27pHKBEp30/maxresdefault.jpg"} {"video_id": "S27pHKBEp30", "text": "visually this looks like this activation which takes an input from the raw document X and also itself at the previous time you can unroll this and visualize it as a very deep neural network where the final answer the number you're looking at at the end is this and it's this deep neural network that processes every one of the inputs along the way all right and the problem with this classic vanilla plain recurrent neural network is vanishing and exploding gradients right so you take this recursive definition of", "start_timestamp": "00:04:16", "end_timestamp": "00:04:50", "start_second": 256, "end_second": 290, "url": "https://www.youtube.com/watch?v=S27pHKBEp30&t=256s", "title": "LSTM is dead. 
Long Live Transformers!", "thumbnail": "https://i.ytimg.com/vi/S27pHKBEp30/maxresdefault.jpg"} {"video_id": "S27pHKBEp30", "text": "the hidden state and you imagine what happens just three points in right and so you're calling this function this transformation over and over and over again on your data and classically in the vanilla case this is just some matrix multiplication some learned matrix W times your input X and so when you go out to say a hundred words in you're taking that W matrix and you're multiplying it a hundred times all right so in simple math in real number math we know that if you take any number less than one and raise it to a", "start_timestamp": "00:04:50", "end_timestamp": "00:05:26", "start_second": 290, "end_second": 326, "url": "https://www.youtube.com/watch?v=S27pHKBEp30&t=290s", "title": "LSTM is dead. Long Live Transformers!", "thumbnail": "https://i.ytimg.com/vi/S27pHKBEp30/maxresdefault.jpg"} {"video_id": "S27pHKBEp30", "text": "very high exponent you get some incredibly small number and if your number is slightly larger than one then it blows up to something big and if your exponent is even higher if you have longer documents this gets even worse and in linear algebra this is about the same except you need to think about the eigenvalues of the matrix so the eigenvalues say how much the matrix is going to grow or shrink vectors when the transformation is applied and if your eigenvalues are less than one in this transformation you're", "start_timestamp": "00:05:26", "end_timestamp": "00:05:57", "start_second": 326, "end_second": 357, "url": "https://www.youtube.com/watch?v=S27pHKBEp30&t=326s", "title": "LSTM is dead. 
Long Live Transformers!", "thumbnail": "https://i.ytimg.com/vi/S27pHKBEp30/maxresdefault.jpg"} {"video_id": "S27pHKBEp30", "text": "going to get these gradients that go to zero as you use this matrix over and over again if they're greater than one then your gradients are going to explode all right and so this made vanilla RNNs extremely difficult to work with and they basically just didn't work on anything but fairly short sequences all right so LSTM to the rescue right so I wrote this document a few years ago called the rise and fall and rise and fall of LSTM so LSTM came around in the dark ages and then it went into the AI winter it came back again", "start_timestamp": "00:05:57", "end_timestamp": "00:06:29", "start_second": 357, "end_second": 389, "url": "https://www.youtube.com/watch?v=S27pHKBEp30&t=357s", "title": "LSTM is dead. Long Live Transformers!", "thumbnail": "https://i.ytimg.com/vi/S27pHKBEp30/maxresdefault.jpg"} {"video_id": "S27pHKBEp30", "text": "for a while but I think it's on its way out again now with transformers so LSTM to be clear is a kind of recurrent neural network it just has a more sophisticated cell inside and it was invented originally in the dark ages on a stone tablet that has been recovered into a PDF that you can access I kid but Sepp Hochreiter and Jürgen Schmidhuber I enjoy their work quite a bit and they did a bunch of amazing work in the 90s that was really well ahead of its time and often gets neglected and", "start_timestamp": "00:06:29", "end_timestamp": "00:07:09", "start_second": 389, "end_second": 429, "url": "https://www.youtube.com/watch?v=S27pHKBEp30&t=389s", "title": "LSTM is dead. 
Long Live Transformers!", "thumbnail": "https://i.ytimg.com/vi/S27pHKBEp30/maxresdefault.jpg"} {"video_id": "S27pHKBEp30", "text": "forgotten as time goes on that's totally not fair because they did amazing research so the LSTM cell looks like this it actually has two hidden states and the input coming along the bottom and the output up the top again and these two hidden states and I'm not going to go into it in detail and you should totally look at Christopher Olah's blog post if you want to dive into it but the key point is that these transformations these matrix multiplies right are not applied recursively on the main hidden", "start_timestamp": "00:07:09", "end_timestamp": "00:07:38", "start_second": 429, "end_second": 458, "url": "https://www.youtube.com/watch?v=S27pHKBEp30&t=429s", "title": "LSTM is dead. Long Live Transformers!", "thumbnail": "https://i.ytimg.com/vi/S27pHKBEp30/maxresdefault.jpg"} {"video_id": "S27pHKBEp30", "text": "vector all you're doing is adding in or with the forget gate yeah you actually don't really need it but you're adding in some new number and so the LSTM is actually a lot like a ResNet it's a lot like a CNN ResNet in that you're adding new values onto the activation as you go through the layers right and so this solves the exploding and vanishing gradients problems however the LSTM is still pretty difficult to train because you still have these very long gradient paths even with those residual connections you're", "start_timestamp": "00:07:38", "end_timestamp": "00:08:15", "start_second": 458, "end_second": 495, "url": "https://www.youtube.com/watch?v=S27pHKBEp30&t=458s", "title": "LSTM is dead. 
Long Live Transformers!", "thumbnail": "https://i.ytimg.com/vi/S27pHKBEp30/maxresdefault.jpg"} {"video_id": "S27pHKBEp30", "text": "still propagating gradients from the end all the way through this transformation cell over at the beginning and for a long document this means very very deep networks that are notoriously difficult to train and more importantly transfer learning never really worked on these LSTM models right one of the great things about ImageNet and CNNs is that you can train a convolutional net on millions of images in ImageNet and take that neural network and fine-tune it for some new problem that you have and the starting state of", "start_timestamp": "00:08:15", "end_timestamp": "00:08:50", "start_second": 495, "end_second": 530, "url": "https://www.youtube.com/watch?v=S27pHKBEp30&t=495s", "title": "LSTM is dead. Long Live Transformers!", "thumbnail": "https://i.ytimg.com/vi/S27pHKBEp30/maxresdefault.jpg"} {"video_id": "S27pHKBEp30", "text": "the ImageNet CNN gives you a great place to start from when you're looking for a new neural network and makes training on your own problem much easier with much less data that never really worked with LSTMs sometimes it did but it just wasn't very reliable which means that anytime you're using an LSTM you need a new labeled data set that's specific to your task and that's expensive okay so this changed dramatically just about a year ago when the BERT model was released so you'll hear people talk about", "start_timestamp": "00:08:50", "end_timestamp": "00:09:24", "start_second": 530, "end_second": 564, "url": "https://www.youtube.com/watch?v=S27pHKBEp30&t=530s", "title": "LSTM is dead. 
Long Live Transformers!", "thumbnail": "https://i.ytimg.com/vi/S27pHKBEp30/maxresdefault.jpg"} {"video_id": "S27pHKBEp30", "text": "Transformers and Muppets together and the reason for this is that the original paper on this technique that describes the network architecture was called the transformer network and then the BERT paper is named after a Muppet as is the ELMo paper and you know researchers just ran with the joke um so this is just context so you understand what people are talking about if they say we'll use a Muppet network so this I think was the natural progression of the sequence of document models and the transformer model was first described", "start_timestamp": "00:09:24", "end_timestamp": "00:09:54", "start_second": 564, "end_second": 594, "url": "https://www.youtube.com/watch?v=S27pHKBEp30&t=564s", "title": "LSTM is dead. Long Live Transformers!", "thumbnail": "https://i.ytimg.com/vi/S27pHKBEp30/maxresdefault.jpg"} {"video_id": "S27pHKBEp30", "text": "about two and a half years ago in this paper attention is all you need and this paper was addressing machine translation so think about taking a document in English and converting it into French right and so the classic way to do this with a neural network is encoder/decoder here's the full structure there's a lot going on here right so we're just going to focus on the encoder part because that's all you need for these supervised learning problems the decoder is similar anyway so zooming in on the encoder part", "start_timestamp": "00:09:54", "end_timestamp": "00:10:22", "start_second": 594, "end_second": 622, "url": "https://www.youtube.com/watch?v=S27pHKBEp30&t=594s", "title": "LSTM is dead. 
Long Live Transformers!", "thumbnail": "https://i.ytimg.com/vi/S27pHKBEp30/maxresdefault.jpg"} {"video_id": "S27pHKBEp30", "text": "of it there's still quite a bit going on so basically there's three parts first we're going to talk about this attention part then we'll talk about the part at the bottom the positional encoding the top part's just not that hard it's just a simple fully connected layer so the attention mechanism in the middle is the key to making this thing work on documents of variable lengths and the way they do that is by having an all-to-all comparison for every layer of the neural network it considers every", "start_timestamp": "00:10:22", "end_timestamp": "00:10:52", "start_second": 622, "end_second": 652, "url": "https://www.youtube.com/watch?v=S27pHKBEp30&t=622s", "title": "LSTM is dead. Long Live Transformers!", "thumbnail": "https://i.ytimg.com/vi/S27pHKBEp30/maxresdefault.jpg"} {"video_id": "S27pHKBEp30", "text": "position every output of the next layer considers every possible input from the previous layer in this N squared way and it does a weighted sum of the previous ones where the weighting is a learned function right and then it applies just a fully connected layer after it but this is great for a number of reasons one is that you can look at this thing and visually see what it's doing so here is this translation problem of converting from the English sentence the agreement on the European", "start_timestamp": "00:10:52", "end_timestamp": "00:11:23", "start_second": 652, "end_second": 683, "url": "https://www.youtube.com/watch?v=S27pHKBEp30&t=652s", "title": "LSTM is dead. 
Long Live Transformers!", "thumbnail": "https://i.ytimg.com/vi/S27pHKBEp30/maxresdefault.jpg"} {"video_id": "S27pHKBEp30", "text": "Economic Area was signed in August 1992 and translate that into French my apologies l'accord sur la zone économique européenne a été signé en août oh I forgot 1992 right and you can see the attention so as it's generating each token in the output it's starting with this whole input sentence and it's generating these output tokens one at a time and it says okay first you've got to translate the the way it does that is it translates into l' and all it's doing is looking at the next output is accord and all it's doing is", "start_timestamp": "00:11:23", "end_timestamp": "00:11:58", "start_second": 683, "end_second": 718, "url": "https://www.youtube.com/watch?v=S27pHKBEp30&t=683s", "title": "LSTM is dead. Long Live Transformers!", "thumbnail": "https://i.ytimg.com/vi/S27pHKBEp30/maxresdefault.jpg"} {"video_id": "S27pHKBEp30", "text": "looking at agreement then sur is on la is the okay now interesting European Economic Area translates into zone économique européenne so the order is reversed right you can see the attention mechanism is reversed also you can see very clearly what this thing is doing as it's running along and the way it works in the attention part of the transformer model the way they describe it is with query and key vectors so for every output position you generate a query and for every input you're considering you generate a key", "start_timestamp": "00:11:58", "end_timestamp": "00:12:31", "start_second": 718, "end_second": 751, "url": "https://www.youtube.com/watch?v=S27pHKBEp30&t=718s", "title": "LSTM is dead. 
Long Live Transformers!", "thumbnail": "https://i.ytimg.com/vi/S27pHKBEp30/maxresdefault.jpg"} {"video_id": "S27pHKBEp30", "text": "and then the relevance score is just the dot product of those two right and to visualize that first you combine the query and the key vectors and that gives you the relevance scores you use the softmax to normalize them and then you do a weighted average of the values the third vector for each token to get your output now to explain this in a little bit more detail I'm going to go through it in pseudocode so this looks like Python it wouldn't actually run but I think it's close enough to help people understand what's going on", "start_timestamp": "00:12:31", "end_timestamp": "00:13:04", "start_second": 751, "end_second": 784, "url": "https://www.youtube.com/watch?v=S27pHKBEp30&t=751s", "title": "LSTM is dead. Long Live Transformers!", "thumbnail": "https://i.ytimg.com/vi/S27pHKBEp30/maxresdefault.jpg"} {"video_id": "S27pHKBEp30", "text": "so you've got this attention function right and it takes as input a list of tensors one per token of the input and then the first thing it does is it goes through everything in the sequence and it computes the query the key and the value by multiplying the appropriate input vector by Q K and V which are these learned matrices right so it learns this transformation from the previous layer to whatever should be the query the key and the value at the next layer then it", "start_timestamp": "00:13:04", "end_timestamp": "00:13:40", "start_second": 784, "end_second": 820, "url": "https://www.youtube.com/watch?v=S27pHKBEp30&t=784s", "title": "LSTM is dead. 
Long Live Transformers!", "thumbnail": "https://i.ytimg.com/vi/S27pHKBEp30/maxresdefault.jpg"} {"video_id": "S27pHKBEp30", "text": "goes through this double nested loop alright so for every output token it figures out okay this is the query I'm working with and then it goes through everything in the input and it multiplies that query with the the key from the possible key and it computes a whole bunch of relevant scores and then it normalizes these relevant scores using a soft Max which makes sure that they just all add up to one so you can sensibly can use that to compute a weighted sum of all of the values so you know you just go through for each output", "start_timestamp": "00:13:40", "end_timestamp": "00:14:14", "start_second": 820, "end_second": 854, "url": "https://www.youtube.com/watch?v=S27pHKBEp30&t=820s", "title": "LSTM is dead. Long Live Transformers!", "thumbnail": "https://i.ytimg.com/vi/S27pHKBEp30/maxresdefault.jpg"} {"video_id": "S27pHKBEp30", "text": "you go through each of the each of the input tokens the value score which is calculated for them and you multiply it by the relevance this is just a floating point number from 0 to 1 and you get a weighted average which is the output and you return that so this is what's going on in the attention mechanism which can be which can be pretty confusing when you just look at it look at the diagram that like that but I hope this I hope this explains it a little bit I'm sure we'll get some questions on this so relevant scores are interpretable as I", "start_timestamp": "00:14:14", "end_timestamp": "00:14:48", "start_second": 854, "end_second": 888, "url": "https://www.youtube.com/watch?v=S27pHKBEp30&t=854s", "title": "LSTM is dead. 
Long Live Transformers!", "thumbnail": "https://i.ytimg.com/vi/S27pHKBEp30/maxresdefault.jpg"} {"video_id": "S27pHKBEp30", "text": "say and this is super helpful right now an innovation I think was novel in the transformer paper is multi-headed attention and this is one of these really clever and important innovations that is not actually all that complicated you just do that same attention mechanism eight times or whatever value of eight you want to use and that lets the network learn eight different things to pay attention to so in the translation case it can learn an attention mechanism for grammar one for", "start_timestamp": "00:14:48", "end_timestamp": "00:15:23", "start_second": 888, "end_second": 923, "url": "https://www.youtube.com/watch?v=S27pHKBEp30&t=888s", "title": "LSTM is dead. Long Live Transformers!", "thumbnail": "https://i.ytimg.com/vi/S27pHKBEp30/maxresdefault.jpg"} {"video_id": "S27pHKBEp30", "text": "vocabulary one for gender one for tense whatever it is right whatever the thing needs it can look at different parts of the input document for different purposes and do this at each layer right so you can kind of intuitively see how this would be a really flexible mechanism for processing a document or any sequence okay so that is one of the key things that enables the transformer model that's the multi-headed attention part of it now let's look down here at the positional encoding which is a", "start_timestamp": "00:15:23", "end_timestamp": "00:15:54", "start_second": 923, "end_second": 954, "url": "https://www.youtube.com/watch?v=S27pHKBEp30&t=923s", "title": "LSTM is dead. 
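[Editor's note: "do that same attention mechanism eight times" can be sketched in a few lines. This is a self-contained matrix version of a single head plus the multi-head wrapper; the function names and the per-head `(Wq, Wk, Wv)` tuples are illustrative assumptions, not the talk's code.]

```python
import numpy as np

def one_head(X, Wq, Wk, Wv):
    # X: (tokens, d_in); one attention head, vectorized over tokens
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T                                    # relevance scores
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)                   # row-wise softmax
    return w @ V                                        # weighted value sums

def multi_head(X, heads):
    # run the same mechanism once per head, each with its own learned
    # projections, then concatenate so each head can attend to something
    # different (grammar, vocabulary, tense, ...)
    return np.concatenate([one_head(X, *h) for h in heads], axis=1)
```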
Long Live Transformers!", "thumbnail": "https://i.ytimg.com/vi/S27pHKBEp30/maxresdefault.jpg"} {"video_id": "S27pHKBEp30", "text": "critical innovation that I think is incredibly clever so without this positional encoding attention mechanisms are just bags of words right there's nothing seeing the difference between work to live or live to work right they're just all equivalent positions you're just going to compute some score for each of them so what they did is they took a lesson from Fourier theory and added in a bunch of sines and cosines sorry not as extra dimensions but onto the word embeddings so going back so", "start_timestamp": "00:15:54", "end_timestamp": "00:16:32", "start_second": 954, "end_second": 992, "url": "https://www.youtube.com/watch?v=S27pHKBEp30&t=954s", "title": "LSTM is dead. Long Live Transformers!", "thumbnail": "https://i.ytimg.com/vi/S27pHKBEp30/maxresdefault.jpg"} {"video_id": "S27pHKBEp30", "text": "what they do is they take the inputs they use word2vec to calculate some vector for each input token and then onto that embedding they add a bunch of sines and cosines of different frequencies starting at just pi and then stretching out longer and longer and longer and if you look at the whole thing it looks like this and what this does is it lets the model reason about the relative position of any tokens right so you can kind of imagine that the model can say if the orange dimension is slightly higher than the", "start_timestamp": "00:16:32", "end_timestamp": "00:17:05", "start_second": 992, "end_second": 1025, "url": "https://www.youtube.com/watch?v=S27pHKBEp30&t=992s", "title": "LSTM is dead. 
Long Live Transformers!", "thumbnail": "https://i.ytimg.com/vi/S27pHKBEp30/maxresdefault.jpg"} {"video_id": "S27pHKBEp30", "text": "blue dimension on one word versus another then you can see how it knows that that token is to the left or right of the other and because it has this at all these different wavelengths it can look across the entire document at kind of arbitrary scales to see whether one idea is before or after another the key thing is that this is how the system understands position and isn't just treating the input as a bag of words when doing the attention okay so for transformers those are the two key innovations positional encoding and", "start_timestamp": "00:17:05", "end_timestamp": "00:17:38", "start_second": 1025, "end_second": 1058, "url": "https://www.youtube.com/watch?v=S27pHKBEp30&t=1025s", "title": "LSTM is dead. Long Live Transformers!", "thumbnail": "https://i.ytimg.com/vi/S27pHKBEp30/maxresdefault.jpg"} {"video_id": "S27pHKBEp30", "text": "multi-headed attention transformers are awesome even though they are N squared in the length of the document these all-to-all comparisons can be done almost for free on a modern GPU GPUs changed all sorts of things right you can do a thousand by thousand matrix multiply as fast as you can do a ten by two in a lot of cases because they have so much parallelism they have so much bandwidth with a fixed latency for every operation so you can do these massive multiplies almost for free in a lot of cases so doing things", "start_timestamp": "00:17:38", "end_timestamp": "00:18:07", "start_second": 1058, "end_second": 1087, "url": "https://www.youtube.com/watch?v=S27pHKBEp30&t=1058s", "title": "LSTM is dead. 
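[Editor's note: the sines-and-cosines-at-different-wavelengths idea described above can be written down directly. A minimal sketch of the sinusoidal encoding in the style of the original transformer paper; the 10000 base and the interleaving convention are from that paper, and the function name is mine.]

```python
import numpy as np

def positional_encoding(num_tokens, d_model):
    # interleaved sines and cosines at geometrically increasing wavelengths;
    # the result is added elementwise onto the word embeddings
    pe = np.zeros((num_tokens, d_model))
    pos = np.arange(num_tokens)[:, None]          # token position
    i = np.arange(0, d_model, 2)[None, :]         # even dimension index
    angle = pos / (10000 ** (i / d_model))        # longer and longer waves
    pe[:, 0::2] = np.sin(angle)
    pe[:, 1::2] = np.cos(angle)
    return pe
```

Because each pair of dimensions oscillates at a different wavelength, comparing two rows lets the model infer relative position at many scales at once, which is exactly the "is this idea before or after that one" capability the talk describes.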
Long Live Transformers!", "thumbnail": "https://i.ytimg.com/vi/S27pHKBEp30/maxresdefault.jpg"} {"video_id": "S27pHKBEp30", "text": "in N squared is not actually necessarily much more expensive whereas in an RNN like an LSTM you can't do anything with token 11 until you're completely done processing token 10 all right so this is a key advantage of transformers they're much more computationally efficient also you don't need to use any of these sigmoid or tanh activation functions which are built into the LSTM model these things scale your activations to 0 1 why are these things problematic so these were bread-and-butter in the old days of", "start_timestamp": "00:18:07", "end_timestamp": "00:18:43", "start_second": 1087, "end_second": 1123, "url": "https://www.youtube.com/watch?v=S27pHKBEp30&t=1087s", "title": "LSTM is dead. Long Live Transformers!", "thumbnail": "https://i.ytimg.com/vi/S27pHKBEp30/maxresdefault.jpg"} {"video_id": "S27pHKBEp30", "text": "neural networks people would use these between layers all the time and they make sense there they're kind of biologically inspired you take any activation and you scale it from 0 to 1 or minus 1 to 1 but they're actually really problematic because if you get a neuron which has a very high activation value then you've got this number up here which is 1 and you take the derivative of that and it's 0 or some very very small number and so your gradient descent can't tell the difference between an activation up here", "start_timestamp": "00:18:43", "end_timestamp": "00:19:16", "start_second": 1123, "end_second": 1156, "url": "https://www.youtube.com/watch?v=S27pHKBEp30&t=1123s", "title": "LSTM is dead. 
Long Live Transformers!", "thumbnail": "https://i.ytimg.com/vi/S27pHKBEp30/maxresdefault.jpg"} {"video_id": "S27pHKBEp30", "text": "and one way over on the other side so it's very easy for the trainer to get confused if your activations don't stay near this middle part all right and that's problematic compare that to ReLU which is the standard these days and ReLU yes it does have this very large dead space but if you're not in the dead space then there's nothing stopping it from getting bigger and bigger and scaling off to infinity and one of the intuitions behind why this works better as Geoffrey Hinton puts it is", "start_timestamp": "00:19:16", "end_timestamp": "00:19:47", "start_second": 1156, "end_second": 1187, "url": "https://www.youtube.com/watch?v=S27pHKBEp30&t=1156s", "title": "LSTM is dead. Long Live Transformers!", "thumbnail": "https://i.ytimg.com/vi/S27pHKBEp30/maxresdefault.jpg"} {"video_id": "S27pHKBEp30", "text": "that this allows each neuron to express a stronger opinion right in a sigmoid there is really no difference between the activation being three or eight or twenty or a hundred the output is the same right all it can say is kind of yes no maybe but with ReLU it can say the activation is five or a hundred or a thousand and these are all meaningfully different values that can be used for different purposes down the line right so each neuron can express more information also the gradient doesn't", "start_timestamp": "00:19:47", "end_timestamp": "00:20:23", "start_second": 1187, "end_second": 1223, "url": "https://www.youtube.com/watch?v=S27pHKBEp30&t=1187s", "title": "LSTM is dead. 
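[Editor's note: the saturation argument above is easy to check numerically. A small sketch in plain Python; the helper names are mine.]

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_grad(x):
    # derivative of the sigmoid: s * (1 - s); vanishes once s is near 0 or 1
    s = sigmoid(x)
    return s * (1.0 - s)

def relu_grad(x):
    # derivative of ReLU: 1 on the active side, 0 in the dead space
    return 1.0 if x > 0 else 0.0

# a sigmoid neuron at activation 3 vs 100: both gradients are tiny (~0.045
# vs effectively 0), so gradient descent can barely tell them apart;
# ReLU's gradient stays 1 no matter how large the activation gets
print(sigmoid_grad(3), sigmoid_grad(100))
print(relu_grad(3), relu_grad(100))
```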
Long Live Transformers!", "thumbnail": "https://i.ytimg.com/vi/S27pHKBEp30/maxresdefault.jpg"} {"video_id": "S27pHKBEp30", "text": "saturate we talked about that and very critically and I think this is really underappreciated ReLUs are really insensitive to random initialization if you're working with a bunch of sigmoid layers you need to pick those random values at the beginning of your training to make sure that your activation values are in that middle part where you're going to get reasonable gradients and people used to worry a lot about what initialization to use for your neural network you don't hear people worrying about that much at all anymore and", "start_timestamp": "00:20:23", "end_timestamp": "00:20:53", "start_second": 1223, "end_second": 1253, "url": "https://www.youtube.com/watch?v=S27pHKBEp30&t=1223s", "title": "LSTM is dead. Long Live Transformers!", "thumbnail": "https://i.ytimg.com/vi/S27pHKBEp30/maxresdefault.jpg"} {"video_id": "S27pHKBEp30", "text": "ReLUs are really the key reason why that is ReLU also runs great on low precision hardware those smooth activation functions need 32-bit float maybe you can get them to work in 16-bit float sometimes but you're not going to be running them in 8-bit int without a ton of careful work and that is the kind of thing that's really easy to do with a ReLU based network and a lot of hardware is going in that direction because it takes vastly fewer transistors and a lot less power to do 8-bit integer math versus", "start_timestamp": "00:20:53", "end_timestamp": "00:21:24", "start_second": 1253, "end_second": 1284, "url": "https://www.youtube.com/watch?v=S27pHKBEp30&t=1253s", "title": "LSTM is dead. Long Live Transformers!", "thumbnail": "https://i.ytimg.com/vi/S27pHKBEp30/maxresdefault.jpg"} {"video_id": "S27pHKBEp30", "text": "32-bit float it's also stupidly easy to compute the gradient it's one or zero right you just take that top bit and you're done so the derivative is ridiculously easy ReLU does have some downsides it has those dead neurons on the left side you can fix that with a leaky ReLU there's this discontinuity in the gradient at the origin you can fix that with GELU which BERT uses and so this brings me to a little aside about general deep learning wisdom if you're designing a new network for whatever reason don't", "start_timestamp": "00:21:24", "end_timestamp": "00:21:57", "start_second": 1284, "end_second": 1317, "url": "https://www.youtube.com/watch?v=S27pHKBEp30&t=1284s", "title": "LSTM is dead. 
Long Live Transformers!", "thumbnail": "https://i.ytimg.com/vi/S27pHKBEp30/maxresdefault.jpg"} {"video_id": "S27pHKBEp30", "text": "bother messing with different kinds of activations don't bother trying sigmoid or tanh they're probably not going to work out very well but different optimizers do matter Adam is a great place to start it's super fast it tends to give pretty good results it has a bit of a tendency to overfit if you really are trying to squeeze the juice out of your system and you want the best results SGD is likely to get you a better result but it's going to take quite a bit more time to converge sometimes RMSprop beats the pants", "start_timestamp": "00:21:57", "end_timestamp": "00:22:25", "start_second": 1317, "end_second": 1345, "url": "https://www.youtube.com/watch?v=S27pHKBEp30&t=1317s", "title": "LSTM is dead. Long Live Transformers!", "thumbnail": "https://i.ytimg.com/vi/S27pHKBEp30/maxresdefault.jpg"} {"video_id": "S27pHKBEp30", "text": "off both of them it's worth playing around with these things I told you about why I think SWA is great there's this system called attitude that my old team at Amazon released where you don't even need to pick a learning rate it dynamically calculates the ideal learning rate schedule at every point during training for you it's kind of magical so it's worth playing around with different optimizers but don't mess with the activation functions okay let's pop out right there's a bunch of theory a bunch of math and", "start_timestamp": "00:22:25", "end_timestamp": "00:22:53", "start_second": 1345, "end_second": 1373, "url": "https://www.youtube.com/watch?v=S27pHKBEp30&t=1345s", "title": "LSTM is dead. 
Long Live Transformers!", "thumbnail": "https://i.ytimg.com/vi/S27pHKBEp30/maxresdefault.jpg"} {"video_id": "S27pHKBEp30", "text": "ideas in there how do we actually apply this stuff in code so if you want to use a transformer I strongly recommend hopping over to the fine folks at Hugging Face and using their transformers package they have both PyTorch and TensorFlow implementations pre-trained models ready to fine-tune and I'll show you how easy it is here's how to fine-tune a BERT model in just 12 lines of code you just pick what kind of BERT you want the base model that's paying attention to upper and lower case you get the tokenizer to convert your string", "start_timestamp": "00:22:53", "end_timestamp": "00:23:27", "start_second": 1373, "end_second": 1407, "url": "https://www.youtube.com/watch?v=S27pHKBEp30&t=1373s", "title": "LSTM is dead. Long Live Transformers!", "thumbnail": "https://i.ytimg.com/vi/S27pHKBEp30/maxresdefault.jpg"} {"video_id": "S27pHKBEp30", "text": "into tokens you download the pre-trained model in one line of code pick your dataset for your own problem process the dataset with the tokenizer to get training validation splits shuffle and batch in four more lines of code another four lines of code to instantiate your optimizer define your loss function pick a metric it's TensorFlow so you've got to compile it and then you call fit and that's it that's all you need to do to fine-tune a state-of-the-art language model on your specific problem", "start_timestamp": "00:23:27", "end_timestamp": "00:24:01", "start_second": 1407, "end_second": 1441, "url": "https://www.youtube.com/watch?v=S27pHKBEp30&t=1407s", "title": "LSTM is dead. 
Long Live Transformers!", "thumbnail": "https://i.ytimg.com/vi/S27pHKBEp30/maxresdefault.jpg"} {"video_id": "S27pHKBEp30", "text": "and the fact you can do this on some pre-trained model that's seen tons and tons of data that easily is really amazing and there are even bigger models out there right so Nvidia made this model called Megatron with eight billion parameters they ran hundreds of GPUs for over a week spent vast quantities of cash well I mean they own the stuff so not really but they put a ton of energy into training this I've heard a lot of people complaining about how much greenhouse gas comes from training a model like", "start_timestamp": "00:24:01", "end_timestamp": "00:24:31", "start_second": 1441, "end_second": 1471, "url": "https://www.youtube.com/watch?v=S27pHKBEp30&t=1441s", "title": "LSTM is dead. Long Live Transformers!", "thumbnail": "https://i.ytimg.com/vi/S27pHKBEp30/maxresdefault.jpg"} {"video_id": "S27pHKBEp30", "text": "Megatron I think that's totally the wrong way of looking at this because they only need to do this once in the history of the world and everybody in this room can use it without having to burn those GPUs again right these things are reusable and fine-tunable I don't think they've actually released this yet but they might and somebody else will right so you don't need to do that expensive work over and over again right this thing learns a base model really well the folks at Facebook trained this RoBERTa model on two and a half", "start_timestamp": "00:24:31", "end_timestamp": "00:25:04", "start_second": 1471, "end_second": 1504, "url": "https://www.youtube.com/watch?v=S27pHKBEp30&t=1471s", "title": "LSTM is dead. 
Long Live Transformers!", "thumbnail": "https://i.ytimg.com/vi/S27pHKBEp30/maxresdefault.jpg"} {"video_id": "S27pHKBEp30", "text": "terabytes of data across over a hundred languages and this thing understands low resource languages like Swahili and Urdu in ways that are just vastly better than what's been done before and again these are reusable if you need a model that understands all the world's languages this is accessible to you by leveraging other people's work and before BERT and transformers and the Muppets this just was not possible now you can leverage other people's work in this way and I think that's really amazing so to sum up the key advantages", "start_timestamp": "00:25:04", "end_timestamp": "00:25:39", "start_second": 1504, "end_second": 1539, "url": "https://www.youtube.com/watch?v=S27pHKBEp30&t=1504s", "title": "LSTM is dead. Long Live Transformers!", "thumbnail": "https://i.ytimg.com/vi/S27pHKBEp30/maxresdefault.jpg"} {"video_id": "S27pHKBEp30", "text": "of these transformer networks yes they're easier to train they're more efficient all that yada yada yada but more importantly transfer learning actually works with them right you can take a pre-trained model and fine-tune it for your task with your own specific dataset and another really critical point which I didn't get a chance to go into is that these things are originally trained on large quantities of unsupervised text you can just take all of the world's text data and use this as training data the way it works very very", "start_timestamp": "00:25:39", "end_timestamp": "00:26:07", "start_second": 1539, "end_second": 1567, "url": "https://www.youtube.com/watch?v=S27pHKBEp30&t=1539s", "title": "LSTM is dead. 
Long Live Transformers!", "thumbnail": "https://i.ytimg.com/vi/S27pHKBEp30/maxresdefault.jpg"} {"video_id": "S27pHKBEp30", "text": "quickly is kind of comparable to how word2vec works where the language model tries to predict some missing words from a document and that's enough for it to understand how to build a supervised model using vast quantities of text without any effort to label them LSTM still has its place in particular if the sequence length is very long or infinite you can't do N squared right and that happens if you're doing real time control like for a robot or a thermostat or something like that you can't have the entire sequence and for", "start_timestamp": "00:26:07", "end_timestamp": "00:26:44", "start_second": 1567, "end_second": 1604, "url": "https://www.youtube.com/watch?v=S27pHKBEp30&t=1567s", "title": "LSTM is dead. Long Live Transformers!", "thumbnail": "https://i.ytimg.com/vi/S27pHKBEp30/maxresdefault.jpg"} {"video_id": "S27pHKBEp30", "text": "some reason you can't pre-train on some large corpus LSTM seems to outperform transformers when your dataset size is relatively small and fixed and with that I will take questions yes yeah how would you compare word CNNs with transformers so when I wrote this paper the rise and fall and rise and fall of LSTM I predicted at that time that word CNNs were going to be the thing that replaced LSTM I did not see this transformer thing coming so a word CNN has a lot of the same advantages in terms of", "start_timestamp": "00:26:44", "end_timestamp": "00:27:34", "start_second": 1604, "end_second": 1654, "url": "https://www.youtube.com/watch?v=S27pHKBEp30&t=1604s", "title": "LSTM is dead. Long Live Transformers!", "thumbnail": "https://i.ytimg.com/vi/S27pHKBEp30/maxresdefault.jpg"} {"video_id": "S27pHKBEp30", "text": "parallelism and the ability to use ReLU and the key difference is that it only looks at a fixed size window a fixed size part of the document instead of looking at the entire document at once and so it's got a fair amount fundamentally in common word CNNs have an easier time identifying bigrams trigrams things like that because they've got those direct comparisons right they don't need this positional encoding trick to try to infer with Fourier waves where things are relative to each other so", "start_timestamp": "00:27:34", "end_timestamp": "00:28:10", "start_second": 1654, "end_second": 1690, "url": "https://www.youtube.com/watch?v=S27pHKBEp30&t=1654s", "title": "LSTM is dead. Long Live Transformers!", "thumbnail": "https://i.ytimg.com/vi/S27pHKBEp30/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "hi everyone I hope everyone is okay so I'm pleased to be here with you today to share with you some of my experience on Kaggle over several years so the title is Gold is easy so it's quite different from other talks where it was more technical about how to win competitions here I will keep it rather simple and I will talk about what I was doing before and what I started doing to improve myself so I will start by giving some general advice that everyone should follow and I will talk about some technical", "start_timestamp": "00:00:00", "end_timestamp": "00:00:57", "start_second": 0, "end_second": 57, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=0s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "tips about how to improve or to check for improvements in your models and then I will talk quickly about some case studies and you will see that getting a gold medal is most of the time not that difficult so starting with the advice first advice positive mind if others can do it you can also do it that is how I started when I started Kaggle I said if those people can do it I can do it I competed a lot and I managed to become a grandmaster my second advice understand the problem and try to find new ideas never start", "start_timestamp": "00:00:57", "end_timestamp": "00:01:43", "start_second": 57, "end_second": 103, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=57s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "fine-tuning your model during the first week while you are still 
doing some feature engineering steps you are still looking for good architectures you will just waste most of your time tuning hyperparameters spend this time trying different approaches different architectures to come up with different models for your final ensemble third advice don't use kernels when you start a competition you will mostly end up using a slight variation of that kernel so don't look at kernels when you start try doing the data analysis", "start_timestamp": "00:01:43", "end_timestamp": "00:02:20", "start_second": 103, "end_second": 140, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=103s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "by yourself try to come up with your own models with your own feature engineering you will end up with models that are different from those that you will find on Kaggle later on you can borrow ideas from Kaggle kernels to help your model improve even better if you have the chance to work in teams don't share anything except some important insights so when I work in a team with my teammates we don't share the architectures of our models we don't share the features that we create we only share important insights that we", "start_timestamp": "00:02:20", "end_timestamp": "00:03:00", "start_second": 140, "end_second": 180, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=140s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "find by doing the analysis of the data if something doesn't work don't stick to it you will mainly waste your time find a different approach try different modeling approaches to your problem you will end up with different solutions that will help you another advice is you
should always keep it simple we saw previously in different competitions some awesome solutions that use hundreds of models with stacking and as beginners or even at some advanced levels you may say that I will never be able to do something like", "start_timestamp": "00:03:00", "end_timestamp": "00:03:45", "start_second": 180, "end_second": 225, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=180s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "this stacking is a very advanced topic so try to make simple models simple models will also work and help you to get a gold medal don't underestimate the power of neural networks even on tabular data mostly on tabular data people try to use only gradient boosting decision trees because they are so powerful neural networks can be as good as gradient boosting decision trees if you are able to come up with a good architecture and it will also help your final ensemble if you are working with two models gradient", "start_timestamp": "00:03:45", "end_timestamp": "00:04:20", "start_second": 225, "end_second": 260, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=225s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "boosting decision trees and/or neural networks another point is browse for ideas that's how Kaggle is try to work on different fields images NLP time series tabular data classification regression try to work on as many different topics as possible ideas that you may use on images for example can be used on tabular data if you know how to model your problem in order to use such ideas and the last point is thinking outside of the box
don't do what other people do because", "start_timestamp": "00:04:20", "end_timestamp": "00:05:09", "start_second": 260, "end_second": 309, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=260s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "you'll end up with the same solution and you will end up with the same ideas try to do things that people don't think about try new ideas most of the time they will be crazy ideas that may not work but you have to try them and if one of those ideas works it will get you the gold medal so checking for improvement I will start by talking about feature importance how to assess the importance of features so one thing that I have noticed among Kagglers is that they are always trying", "start_timestamp": "00:05:09", "end_timestamp": "00:06:00", "start_second": 309, "end_second": 360, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=309s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "to eliminate features looking for the low-ranking features trying to eliminate them in order to improve the performance of the model and to make it faster to train so I never do that when I start a competition at least not during the first modelling approach but later on you have to do it so why don't I eliminate low-ranking features right away because you may have correlated features that explain the low ranking of some features and when we do feature engineering the new features that we", "start_timestamp": "00:06:00", "end_timestamp": "00:06:37", "start_second": 360, "end_second": 397, "url":
"https://www.youtube.com/watch?v=XBJ2f68LuO4&t=360s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "come up with may have some good interactions with the lower-ranking features and you will see an increase in their importance and also the ranking of the features is dependent on the complexity of the model some low-ranking features may become really important if you increase the complexity or if you decrease the complexity some highly important features may decrease so here is an example using the Titanic dataset so on the left you can see the original raw features and on the right I just added a", "start_timestamp": "00:06:37", "end_timestamp": "00:07:19", "start_second": 397, "end_second": 439, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=397s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "new feature that I called noise feature that I made from a normal distribution and as you can see I made a grid search trying to find the best parameters for the two models and you can see here that the noisy feature is ranking third which doesn't make sense this is just random noise so if you start looking at the low-ranking features you will try to eliminate some of those features and will completely forget about the noisy feature so this is something I noticed when I was working on one of my first competitions I had made", "start_timestamp": "00:07:19", "end_timestamp": "00:07:58", "start_second": 439, "end_second": 478, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=439s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail":
"https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "hundreds of features and I tried to use the noise feature trick which is just to make a noisy feature and look at the ranking and if anything is ranked below the noisy feature it just means that it's noise and you can eliminate it but it happens that in practice this doesn't work and my noisy feature even though I had hundreds of features was always ranking in the top five features so don't look at the low-ranking features but start looking at those that are ranking very very high your model is probably overfitting on some noisy", "start_timestamp": "00:07:58", "end_timestamp": "00:08:41", "start_second": 478, "end_second": 521, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=478s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "features and by eliminating those features first the low-ranked features may become more important because your model will try to use them instead of using the noisy features so the main purpose is to detect the overfitting features you have two choices you remove them if it helps your cross-validation and if it hurts your cross-validation then just try to apply some transformation on those features to have a better generalization so the second part of checking the improvements is about feature engineering and this is", "start_timestamp": "00:08:41", "end_timestamp": "00:09:28", "start_second": 521, "end_second": 568, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=521s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "quite tricky to find whether a feature is working or not so when you come up with a new feature and you want to check if this
feature is helpful in your model or not well if you see a very important increase in your score you know for sure that the feature is helpful but most of the time a feature just gives you some little improvement maybe in the third digit of your cross-validation score so how will you assess that for example the third-digit change comes from using this feature and is not just from some random seed that you are", "start_timestamp": "00:09:28", "end_timestamp": "00:10:14", "start_second": 568, "end_second": 614, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=568s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "feeding into your model so the first approach that you can use is doing bagging so what is bagging it's just using the same model with different seeds and averaging the predictions of those models and looking at your final cross-validation score so instead of using a five-fold cross-validation this will allow you to use just one random seed in just one model I prefer using three bags on three folds it's almost as fast as doing a five-fold cross-validation because you have less training data with just three folds instead of five folds", "start_timestamp": "00:10:14", "end_timestamp": "00:11:01", "start_second": 614, "end_second": 661, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=614s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "it's almost as fast and it allows me to use three different models with three different seeds and by blending those models I somehow get rid of the randomness so my final score is not really dependent on the seed that has been used in my model and I can be quite sure that the improvement that I see is not 
coming from randomness but really from the improvement so this is one way to do it the second way and this is how I do things when I do feature engineering is I never test only one feature at once I create", "start_timestamp": "00:11:01", "end_timestamp": "00:11:48", "start_second": 661, "end_second": 708, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=661s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "what I call and I will be back to this line later on I always do what I call a bunch of features so I create let's say for example five features and instead of trying those features one by one I try the full bunch of features at once and if there is an improvement it will not be a very small improvement but rather a quite important improvement and this works very very well so why does it work so well well let's imagine that you have two features a feature one which is just a mean computed on some feature X grouped by", "start_timestamp": "00:11:48", "end_timestamp": "00:12:34", "start_second": 708, "end_second": 754, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=708s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "feature Y and some other feature two that uses feature one where you just remove that mean from your feature X if you try those features one by one you may end up not seeing any improvement in your model but using them both in your model you may come up with some interesting improvements why because those two features are quite related one is the mean of the group and the other is just the difference of the feature X to this mean and maybe this 
feature two is", "start_timestamp": "00:12:34", "end_timestamp": "00:13:12", "start_second": 754, "end_second": 792, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=754s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "not working quite well by itself unless it has some kind of interaction with the mean of the group so by creating a bunch of features you are allowing for more interactions than trying feature by feature and when you are done with this bunch of features you make for example five features and you see improvements at that point you can start looking if you have any overfitting features and sometimes what I do is start doing feature selection during my bunch-of-features creation so I create like 10", "start_timestamp": "00:13:12", "end_timestamp": "00:13:53", "start_second": 792, "end_second": 833, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=792s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "features I see improvement I check for overfitting features I remove them and see if all the features are useful so if some features are not being used in the ensembles I just remove them one of the most powerful features that I have found so far and that almost always works is this kind of feature you group by some feature Y and you compute some statistics using another feature and then you just remove that mean from the original value so you can apply this on categorical features you can also do it on", "start_timestamp": "00:13:53", "end_timestamp": "00:14:41", "start_second": 833, "end_second": 881, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=833s", "title": "Gold is easy: 
Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "discrete numerical features and you can also apply it on continuous features by doing some discretization just before also tabular data can have spatial and temporal information in them so you may group by different features with different granularities and also derive your own granularities from the data to make your pipeline work so in the two sigma competition we ended up in second place just by using those kinds of features so we had some technical indicators and some fundamental", "start_timestamp": "00:14:41", "end_timestamp": "00:15:30", "start_second": 881, "end_second": 930, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=881s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "indicators and we did this on some features and our global group-by feature was the day so for each day we just compute the mean and look at the deviation of each observation to the mean of that given day and we ended up with different models that we blended together and that was enough to win the competition just with those kinds of features the third part for improvement is about the outliers so here I'm going to talk about outliers in the targets that you are trying to predict so outliers can be", "start_timestamp": "00:15:30", "end_timestamp": "00:16:19", "start_second": 930, "end_second": 979, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=930s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "due to different reasons so for example 
in home pricing you may have some typos maybe you have some underpriced sales because the sale happened between family members or maybe there was some fraud so something that you cannot predict because it's underpriced or maybe you have something overpriced some sales of big houses on some big land but you have very few samples of them so you cannot find some interesting patterns not enough samples to learn anything interesting from them so how to deal with this kind of", "start_timestamp": "00:16:19", "end_timestamp": "00:17:03", "start_second": 979, "end_second": 1023, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=979s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "outliers so during the training phase you can start by keeping the outliers and have some kind of baseline for them then you can apply some transformation of the target and see if there is any improvement then you can do what we call winsorization and we'll talk about it later this is the second thing that you should try and the third thing that you should try is removing the outliers so winsorization is just you cap the target so if you see some values that are higher than a given threshold", "start_timestamp": "00:17:03", "end_timestamp": "00:17:46", "start_second": 1023, "end_second": 1066, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=1023s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "you just set any higher value to that threshold okay or you can also cap the error so during the learning phase you have to write your own loss and what you do is you compute the error and for big 
errors you need to cap them so that your model will not be biased towards those outliers especially if you are using some metric like the mean squared error which increases the importance of the outliers so after trying the winsorization you will have a new score that you can compare with your", "start_timestamp": "00:17:46", "end_timestamp": "00:18:30", "start_second": 1066, "end_second": 1110, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=1066s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "baseline and the third approach is the removal of the outliers so how can you do the removal you just build a model and predict on the train dataset and you have to use a cross-validation approach it's mandatory otherwise it's not going to work and you will predict and using those predictions you will get an error and based on this error you will find an optimal threshold and you will remove from your training data all the samples that have an error higher than the threshold that you have fixed during the testing you should not apply the", "start_timestamp": "00:18:30", "end_timestamp": "00:19:13", "start_second": 1110, "end_second": 1153, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=1110s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "winsorization on the test set but regarding the removal of the outliers in general you can keep the outliers in your testing dataset unless you are 100% sure that the outliers can in no case be predicted so then you can remove them from your test set also when you are doing feature engineering and you see some improvement with a new feature 
how do you know that this feature is important because if you keep the outliers in your testing data maybe with some luck you are predicting", "start_timestamp": "00:19:13", "end_timestamp": "00:19:58", "start_second": 1153, "end_second": 1198, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=1153s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "an outlier well that will improve your score but the feature by itself is meaningless it's just some luck that happened on some outliers so you should always check on test data with and without outliers to see if the improvement is from some kind of general pattern that your model has discovered or it's just luck on some outliers okay some additional tips that I can give you here always try blending your models so what I mean here is when working on tabular data you will do a lot of feature engineering", "start_timestamp": "00:19:58", "end_timestamp": "00:20:46", "start_second": 1198, "end_second": 1246, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=1198s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "you will sometimes end up with hundreds of features so instead of using all those features in just one model you can make different models with different subsets of features and try to blend together all those models and along with the feature engineering that we did in the two sigma competition we tried five different models I mean five of the same model but with different subsets of features and it helped a lot because what we noticed we were trying to predict the future so that was a time series", "start_timestamp": "00:20:46", "end_timestamp": 
"00:21:28", "start_second": 1246, "end_second": 1288, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=1246s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "problem and we were trying to predict the future on a large time span and we noticed that sometimes adding one feature to our model did improve our prediction for some months but decreased our score for other months so each feature was not working on the whole unseen future they were working on some periods and not working on some periods and so what we did is we clustered somehow those features into different subsets so the features that were working on some periods we put them together and those", "start_timestamp": "00:21:28", "end_timestamp": "00:22:12", "start_second": 1288, "end_second": 1332, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=1288s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "that were working quite well on some other periods we put them in another model and by blending all those models we ended up with something that was quite effective on the whole period also another advice that you should always try here when using bagging so you have the same model and you are bagging with different seeds try to let your models overfit a little bit by letting them overfit a little bit sometimes the bagging will increase your score more than trying to find the best number of epochs for", "start_timestamp": "00:22:12", "end_timestamp": "00:22:53", "start_second": 1332, "end_second": 1373, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=1332s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | 
Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "example if you are using a neural network when you are using a lot of features try a very small feature fraction so this is one of the parameters of the gradient boosting decision tree this is the fraction of features that will be used for each decision tree so most of the time we are using something like 80% or 90% but if you are using a lot of features you should rather try something like 10% or even 5% of the features so your trees will be very low correlated they will be almost independent from each other", "start_timestamp": "00:22:53", "end_timestamp": "00:23:38", "start_second": 1373, "end_second": 1418, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=1373s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "this will increase significantly your final score some tips about the neural networks neural networks are doing feature engineering by design so most of the time with neural networks I don't do any feature engineering I just try to come up with a good architecture so when you are doing this and in order to increase the feature engineering part of the neural network try to apply the embeddings also on discrete numerical features so most of the time we only do embeddings on categorical", "start_timestamp": "00:23:38", "end_timestamp": "00:24:19", "start_second": 1418, "end_second": 1459, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=1418s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "categorical features this helps in feature 
engineering for the neural network but by doing that on the discrete numerical features you will embed those features and you will probably encode more information in those embeddings than the initial values that you are provided with so I have noticed that those embeddings of discrete numerical features always increase my final score also always try to add a neural network even if you are working on tabular data they work great in ensembles and they are almost as good as gradient", "start_timestamp": "00:24:19", "end_timestamp": "00:25:00", "start_second": 1459, "end_second": 1500, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=1459s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "boosting decision trees so when you are doing feature engineering using for example LightGBM and you have lots of features that may be used to do feature engineering it's very difficult to try all combinations with all features you may end up with thousands and thousands of features before finding those that may help you so one thing that I usually do and I find it quite useful is that I make a neural network and a gradient boosting decision tree I have two baseline scores I remove the same feature from both the", "start_timestamp": "00:25:00", "end_timestamp": "00:25:45", "start_second": 1500, "end_second": 1545, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=1500s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "models and I check the difference in the scores if I see that the score of the neural network has decreased much more than the gradient boosting decision tree then I know for sure that the neural network was doing some kind of feature 
engineering using that feature so that helps me decide which features I should focus on when doing feature engineering in gradient boosting decision trees another trick that may be used to improve the generalization of your models especially when working with neural networks is to randomly replace", "start_timestamp": "00:25:45", "end_timestamp": "00:26:22", "start_second": 1545, "end_second": 1582, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=1545s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "some of the features with null values okay during the training phase you have some generator and for each epoch you replace some values randomly with null values most of the time it doesn't work but sometimes it may be helpful for the generalization also when working with neural networks and this always helps when you are working with neural networks when you have a numerical feature with null values always create an additional feature which is just a boolean feature indicating if the corresponding value", "start_timestamp": "00:26:22", "end_timestamp": "00:27:05", "start_second": 1582, "end_second": 1625, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=1582s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "is null or not because you are replacing the null values in the original feature with some kind of mean or median or something like that and you need to help your neural network with this additional information so back to the outliers so when you are doing the removal of the outliers from your training set you know that later on you will do some feature engineering and you can ask yourself well if I remove 
those outliers now maybe feature engineering may help me later to predict those outliers correctly so how", "start_timestamp": "00:27:05", "end_timestamp": "00:27:47", "start_second": 1625, "end_second": 1667, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=1625s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "can I decide if the removal of the outliers should be done before or after feature engineering so first approach just keep the outliers until you are done with the feature engineering so you are sure that nothing can help you in predicting the outliers or you can do something that I usually do and that is what I did in the second stage of the Zillow competition is that I made a neural network because neural networks are doing feature engineering implicitly okay so if I see that I'm not able to predict with my neural network", "start_timestamp": "00:27:47", "end_timestamp": "00:28:32", "start_second": 1667, "end_second": 1712, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=1667s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "my baseline neural network I'm not able to predict the outliers and knowing that neural networks are already doing the feature engineering part then I have some insight that probably feature engineering with gradient boosting decision trees will probably not be useful to predict the outliers so this helps me decide if I should remove the outliers sooner or later okay so I will show you here some case studies some examples of competitions and you will see how it's maybe sometimes easy to get a gold medal or even a top", "start_timestamp": "00:28:32", "end_timestamp": "00:29:17", 
"start_second": 1712, "end_second": 1757, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=1712s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "three position so we start with the Mercari competition that was a Kaggle kernels competition so you have limited hardware just one hour to run the training and the inference phase of your model so we were provided with those features the item description that was the most important feature the name which was just a short description of the item the category condition shipping and brand name so when we started this competition I was in a team and we focused on a LightGBM model", "start_timestamp": "00:29:17", "end_timestamp": "00:30:00", "start_second": 1757, "end_second": 1800, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=1757s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "because they were so good we tried some neural networks some ridge models and the LightGBM was the best model that we could make and even though it was our best model we could not make it into the top ten of the leaderboard so we tried some ensembles with neural networks they didn't work we even tried to add some embeddings coming from neural networks as new features in LightGBM and it didn't work and the running time for LightGBM was so high that we couldn't add any other models because", "start_timestamp": "00:30:00", "end_timestamp": "00:30:39", "start_second": 1800, "end_second": 1839, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=1800s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", 
"thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "we were limited to one hour of training so if something doesn't work change your approach we were so sad to throw away that model that took us so much time to make but we had no choice but dropping that model and continuing so we tried different models that were pretty fast to train and we used two neural networks and one ridge model so one neural network used embeddings of the words as input and another model just used the one-hot encoding as input so we have a sparse", "start_timestamp": "00:30:39", "end_timestamp": "00:31:26", "start_second": 1839, "end_second": 1886, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=1839s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "input and some kind of dense input which works on embeddings so if you look at the architectures of our models one model is just some dense layers a straightforward feed-forward neural network with a sparse input which is a one-hot encoding so the count vectorizer here one of the parameters that I forgot to put here was binary equals true so if the word is seen in the item description it's one otherwise it's zero so it's somehow some kind of one-hot encoder the second model was using the embeddings and the", "start_timestamp": "00:31:26", "end_timestamp": "00:32:10", "start_second": 1886, "end_second": 1930, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=1886s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "important thing that had to be done is you know to use a shared embedding because the 
name and the item description were using the same vocabulary the words have the same meaning so you have to use the same embedding for those details and then we did something that has been used in the past which is just try to capture the meaning of the sentence using an average of the different embeddings of your sentence of your item description and as you can see there is no convolution here just an average of", "start_timestamp": "00:32:10", "end_timestamp": "00:32:52", "start_second": 1930, "end_second": 1972, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=1930s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "embeddings and feeding forward into these two dense layers was enough to have a good score that was our second model and the third model was a simple ridge model taking as input the description and the name features so n-grams with n equal to 1 and we made a count vectorizer with n-grams with n equal to 2 and here instead of using the traditional classical bigram with a sliding window we used all possible combinations of bigrams for the name because the name was so short that we could allow ourselves to use those kinds of bigrams and using classical bigrams", "start_timestamp": "00:32:52", "end_timestamp": "00:33:38", "start_second": 1972, "end_second": 2018, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=1972s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "and this kind of bigram made the difference in the final ensemble so why we went for this kind of approach we noticed that the score of each model by itself is not important so even though our LightGBM had the best score well it was not helpful in any 
ensemble and we noticed that by ensembling our LightGBM with a ridge model the score of the ridge model by itself has no importance so we could let our ridge overfit or not overfit the score of the ridge model gave us the same final performance so we decided", "start_timestamp": "00:33:38", "end_timestamp": "00:34:25", "start_second": 2018, "end_second": 2065, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=2018s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "that instead of looking for the performance of each model we should optimize for the performance of the ensemble and we tried to add models that were improving the final ensemble another thing that helped us in the final improvements also because we had only one hour to run the whole thing so instead of decreasing the learning rate after each epoch we were increasing the batch size there is a nice paper about this that you can find on the internet", "start_timestamp": "00:34:25", "end_timestamp": "00:35:06", "start_second": 2065, "end_second": 2106, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=2065s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "and also removing the mean from the target before the training was quite helpful second competition this was a solo competition I did just for fun because I was waiting for the Zillow results so here we have just one feature which is you have a question and you should predict if the question is sincere or insincere so this is an NLP problem I started training from scratch to have a baseline score so each sample is a Quora 
question so training from scratch gives your baseline it's obvious that it's not gonna work because the dataset", "start_timestamp": "00:35:06", "end_timestamp": "00:36:04", "start_second": 2106, "end_second": 2164, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=2106s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "was too small so next I used some embeddings with fine-tuning it was helpful but not enough to get a high score again so it didn't work what did work is to use the embeddings and don't fine-tune your model so the layers were not trainable for the embeddings and this did work so I let the model train for some epochs and fine-tuning just the last epoch was helpful in improving the score and then by bagging different models with different seeds the score improved much more so I think that almost", "start_timestamp": "00:36:04", "end_timestamp": "00:36:51", "start_second": 2164, "end_second": 2211, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=2164s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "everyone on Kaggle used this approach so what makes the difference in getting that gold medal so here are the few things that I did that were quite different from other competitors so because we are not changing our embeddings the layers are not trainable so you have to find the embedding of every word you cannot learn embeddings for new words for rare words so you have to find those embeddings so what I did is an iterative pre-processing step if I don't find the word in the embedding I do some", "start_timestamp": "00:36:51", "end_timestamp": "00:37:38", "start_second": 2211, 
"end_second": 2258, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=2211s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "pre-processing and check again if if a building is available and I did this so for example tried lower case and upper case in remove the accents etc and check again if the embedding is available and the last pre-processing step this Damon this is the last thing that I do and if I don't find the the stem of the world Indian bed and then I considered the world as a rare world the nice and this was the most important tips that I have used in the pre-processing and it improved significantly the performance of the model the next thing that I did", "start_timestamp": "00:37:38", "end_timestamp": "00:38:21", "start_second": 2258, "end_second": 2301, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=2258s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "is because the impedance of the rail words are not available and I'm not gonna train I'm not gonna learn them because my layers are not trainable so I cannot use them then when I will do some global enumeration or global max so I always multiply those entities by zero otherwise I will use some random values that will not change during the trader this part this was quite helpful in to improving the scroll and later I didn't use the global average chain because I have a lot of zeros embedded in my in my my inputs", "start_timestamp": "00:38:21", "end_timestamp": "00:39:03", "start_second": 2301, "end_second": 2343, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=2301s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": 
"https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "because of the padding and because of the rare words that I multiply by zero so I have a lot of zeros and averages would not work so what I need is instead of mid narration I just sum up the values of the intervals then I apply some budget augmentation and here is by our total that's the full network and the spread is straightforward to understand so I have an input which is which is the question by itself and another input which is just a binary indicator telling me if the end if the the world is in the middle or if the", "start_timestamp": "00:39:03", "end_timestamp": "00:39:45", "start_second": 2343, "end_second": 2385, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=2343s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "world is a rare world which is not in the ended then I use the the globe in bed in here so I don't initialize my impedance with some randomness that I use a transfer learning to initialize my impedance and then I multiplied this is the tricky part here I'm not supplied by embeddings with the mask so rail words will be multiplied by zero impedance with the equal to zero otherwise I will use the transfer learning then there are two parts here this is the first X approach you just try to capture some kind of meaning of the sentence so usually what", "start_timestamp": "00:39:45", "end_timestamp": "00:40:29", "start_second": 2385, "end_second": 2429, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=2385s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "we do is average all the images but the Brazil was not working because I mostly have 
zeros my sentences so what I did is just some the impedance apply much normalization with my second part of the network is just use an STM or on my input and applying a global maximum and again I cannot use the global average plane I just used this found concatenate everything and straight forward turn the neural network as you can see quite simple but those three little tricks make the difference and I end up just in a few days of work in the 14 place I was", "start_timestamp": "00:40:29", "end_timestamp": "00:41:18", "start_second": 2429, "end_second": 2478, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=2429s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "ragged 14 and I got about middle which is which something as simple as that the last competition I will talk about is a recommendation system to or to music so the problem is the following we have a user who is listening to some song but sometimes type t1 and we want to predict if the same user went to read listen to the same song at some time same cheap g2 and t2 minus t1 is less than one mouth so we are trying to predict if there is any releasing of the Sun for each user we have some statistic here but the", "start_timestamp": "00:41:18", "end_timestamp": "00:42:15", "start_second": 2478, "end_second": 2535, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=2478s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "important thing to note here is that the contract problem was was extreme here we have almost 26 percent of new songs in the testing data so we have to deal with this false start problem those are the features that were provided and the main the most important feature where the 
user ID the song ID the source ID and the artist ID those are the features I will focus on so what we did we used here the LightGBM model so we computed different statistical features based on the user the song and the artist and we noticed that using", "start_timestamp": "00:42:15", "end_timestamp": "00:43:01", "start_second": 2535, "end_second": 2581, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=2535s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "statistics using the target was not working as you will see next okay so the data set didn't include any timestamp or session information but we noticed by doing some analysis that it was chronologically ordered so what we do is we approximate sessions of the users by using a little trick just look at the indexes shifted by one and check the difference if the values are small this means that those samples are in the same session if the value is high this means that this is happening in some other session this is the first trick that we", "start_timestamp": "00:43:01", "end_timestamp": "00:43:49", "start_second": 2581, "end_second": 2629, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=2581s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "use second one we noticed that using the user and the song ID did help in our cross validation but the model was really overfitting on those and as I said before when you have features and your model is overfitting on them either you remove them or you apply some transformation so we applied some transformation and came up with different SVD embeddings for those features we also noticed that the mean of the target was evolving and as you can see there is a nice trend here and we tried to capture that", "start_timestamp": "00:43:49", "end_timestamp": "00:44:32", "start_second": 2629, "end_second": 2672, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=2629s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "trend so we made two linear regression models we trained them on the sessions and we extracted those features from the linear models and we added those as features in our LightGBM also in order this is my last slide in order to somehow overcome the cold start problem of the songs what we did is we created an adjacency matrix of the last songs that each user has listened to and so we took the last 30 songs of each user and we put a one in our matrix if those two songs are in those last songs otherwise zero and then", "start_timestamp": "00:44:32", "end_timestamp": "00:45:24", "start_second": 2672, "end_second": 2724, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=2672s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "we applied an SVD on it and used the embeddings it did help to overcome the cold start problem because if you consider some new song that has never been seen before it will be related to other songs that have been listened to by the users and the embeddings will capture this kind of information and we end up again like we did in the Two Sigma approach by using five LightGBM models and we just use different subsets of features for each model and then we blend together all the models thank you for your", "start_timestamp": "00:45:24", "end_timestamp": "00:46:05", "start_second": 2724, "end_second": 2765, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=2724s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "attention any questions okay yeah do you recommend any good books or learning materials so I'm asked to give some recommendations about good materials I think that Kaggle kernels and discussions are the best materials when I started doing data science I learned everything from the Kaggle platform I didn't start by reading papers or reading books I started directly doing competitions and reading discussions and", "start_timestamp": "00:46:05", "end_timestamp": "00:47:24", "start_second": 2765, "end_second": 2844, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=2765s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "kernels later on to tackle what I didn't understand I started reading scientific papers to get more insights about why things are working or not you have a question can you talk a little bit more about the use of fitting base models for example linear or logistic regression and using them as an input to a more sophisticated model so the question is in the last example that I showed why did I use the linear regression models and why I used them in my LightGBM model so because we had that trend of", "start_timestamp": "00:47:24", "end_timestamp": "00:48:30", "start_second": 2844, "end_second": 2910, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=2844s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail":
"https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "XBJ2f68LuO4", "text": "the meaning of the targets that was clearly decreasing in time I should capture that trend and decision trees based models cannot capture this kind of trends so the ideal is to use a linear model to capture the Train and from those models was linear models I extracted as filters the slope for each user I captured the slot and the difference between the starting and the last value predicted by my linear regression model so this somehow is capturing the dynamic change of the user behavior but I cannot capture with", "start_timestamp": "00:48:30", "end_timestamp": "00:49:16", "start_second": 2910, "end_second": 2956, "url": "https://www.youtube.com/watch?v=XBJ2f68LuO4&t=2910s", "title": "Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/XBJ2f68LuO4/maxresdefault.jpg"} {"video_id": "gSMI5wZHe9w", "text": "[Music] thank you all right my paper was fixed match which is just a cool recent method for doing semi-supervised learning yeah so overview of the paper it came out just last month from Google research and like the headline result here is that they were able to get 78 percent accuracy on CFR 10 using one labeled training example per class which is yeah it was not selected arbitrarily looks like let's make them look good a couple of caveats about the room I assure you the results are extremely impressive yes as", "start_timestamp": "00:00:00", "end_timestamp": "00:01:02", "start_second": 0, "end_second": 62, "url": "https://www.youtube.com/watch?v=gSMI5wZHe9w&t=0s", "title": "FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence-Covered by Adel Foda", "thumbnail": "https://i.ytimg.com/vi/gSMI5wZHe9w/maxresdefault.jpg"} {"video_id": "gSMI5wZHe9w", "text": "I said semi-supervised learning we'll talk a little bit about that and and the way to achieve this is doing kind of quite a 
natural combination of two of two previously known methods which will which will describe so what is semi-supervised learning the motivation for semi-supervised learning is that labeling can there are situations where labeling is very expensive but raw data can be very cheap like for example if you're driving around with the video camera and the thing to understand about the semi-supervised learning is that it", "start_timestamp": "00:01:02", "end_timestamp": "00:01:33", "start_second": 62, "end_second": 93, "url": "https://www.youtube.com/watch?v=gSMI5wZHe9w&t=62s", "title": "FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence-Covered by Adel Foda", "thumbnail": "https://i.ytimg.com/vi/gSMI5wZHe9w/maxresdefault.jpg"} {"video_id": "gSMI5wZHe9w", "text": "is it is distinct from fuschia learning in the sense that you don't have few examples of the thing you have few labeled examples there still needs to be a whole bunch of unlabeled examples of the thing that you're looking for so for example in this diagram if the only examples of class white and class black that I have to learn from are these two then the classifier boundary that I learn is just this vertical line that's as good as any classifier boundary that I might come up but if I have a whole bunch of other labeled data available to", "start_timestamp": "00:01:33", "end_timestamp": "00:02:07", "start_second": 93, "end_second": 127, "url": "https://www.youtube.com/watch?v=gSMI5wZHe9w&t=93s", "title": "FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence-Covered by Adel Foda", "thumbnail": "https://i.ytimg.com/vi/gSMI5wZHe9w/maxresdefault.jpg"} {"video_id": "gSMI5wZHe9w", "text": "me I can use that to inform my learning of the classifier boundary because the this tribution of the unlabeled data suggests that there's actually some clusters inside this data set and I can I can use that to you know come up with a a classifier boundary that's better than the one 
that I would have come up with if I only have those labeled examples so that's the distinction between few shot and semi-supervised all right so the first method which is a method for semi-supervised learning that went into this paper is so-called pseudo labeling", "start_timestamp": "00:02:07", "end_timestamp": "00:02:47", "start_second": 127, "end_second": 167, "url": "https://www.youtube.com/watch?v=gSMI5wZHe9w&t=127s", "title": "FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence-Covered by Adel Foda", "thumbnail": "https://i.ytimg.com/vi/gSMI5wZHe9w/maxresdefault.jpg"} {"video_id": "gSMI5wZHe9w", "text": "so the point of pseudo labeling is that we've got a few labeled examples which we train some sort of weak model using the few labeled examples that we've got and then we start to use the weak model to make predictions on unlabeled data that we then treat as if they would have truth if the model is confident beyond the certain point so illustration I start off with my label examples one black one white and I've got all this unlabeled data now the hope is that if the model is trained using just these two data points it will be confident", "start_timestamp": "00:02:47", "end_timestamp": "00:03:27", "start_second": 167, "end_second": 207, "url": "https://www.youtube.com/watch?v=gSMI5wZHe9w&t=167s", "title": "FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence-Covered by Adel Foda", "thumbnail": "https://i.ytimg.com/vi/gSMI5wZHe9w/maxresdefault.jpg"} {"video_id": "gSMI5wZHe9w", "text": "enough about the data points that are in the facility that I don't have vicinity of the of the data points that I've got labeled to label those correctly and then those form part of our training set so in effect we can jump from here to here we can jump from here to here and then we can sort of we can sort of keep going and expand that and then hopefully you know we end up we end up jumping correctly throughout those unlabeled 
clusters and essentially labeling inside those now the risk here is confirmation bias because in real world like problems", "start_timestamp": "00:03:27", "end_timestamp": "00:04:05", "start_second": 207, "end_second": 245, "url": "https://www.youtube.com/watch?v=gSMI5wZHe9w&t=207s", "title": "FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence-Covered by Adel Foda", "thumbnail": "https://i.ytimg.com/vi/gSMI5wZHe9w/maxresdefault.jpg"} {"video_id": "gSMI5wZHe9w", "text": "the clusters between you know that the clusters that that's sort of defined the classes I'm not going to be this neatly separated right we're gonna have extremely high dimensional problems things that are in different classes are going to be are going to be close to are going to be closer to the points that we have labeled then of the things that are indeed corresponding classes and so on so so this is so this method runs into a problem that it does actually just label things incorrectly and then it keeps on learning from it's", "start_timestamp": "00:04:05", "end_timestamp": "00:04:32", "start_second": 245, "end_second": 272, "url": "https://www.youtube.com/watch?v=gSMI5wZHe9w&t=245s", "title": "FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence-Covered by Adel Foda", "thumbnail": "https://i.ytimg.com/vi/gSMI5wZHe9w/maxresdefault.jpg"} {"video_id": "gSMI5wZHe9w", "text": "incorrect labels that it's generated so what we're hoping to do is to add another element another method of learning from the unlabeled data so that we can improve our confidence without sort of without having to jump to data points that that would not be correctly labeled by pseudo label so the second method is this idea of consistency regularization now this says this says that if you have if you have an unlabeled data point and the model is confident about what that data point should be above a certain level we can", "start_timestamp": "00:04:32", "end_timestamp": 
"00:05:13", "start_second": 272, "end_second": 313, "url": "https://www.youtube.com/watch?v=gSMI5wZHe9w&t=272s", "title": "FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence-Covered by Adel Foda", "thumbnail": "https://i.ytimg.com/vi/gSMI5wZHe9w/maxresdefault.jpg"} {"video_id": "gSMI5wZHe9w", "text": "apply an augmentation to that data point so we can twist it around a little bit you know flip it change the colours invert you know just make it look different and the prediction that the model gives from those two versions of the data point ought to be the same so example if I have here a picture of a horse I can apply one random data augmentation to the picture and I can apply a second random data augmentation and then what I'm going to do is enforce that the model makes the same decision about this data point for both of those", "start_timestamp": "00:05:13", "end_timestamp": "00:05:53", "start_second": 313, "end_second": 353, "url": "https://www.youtube.com/watch?v=gSMI5wZHe9w&t=313s", "title": "FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence-Covered by Adel Foda", "thumbnail": "https://i.ytimg.com/vi/gSMI5wZHe9w/maxresdefault.jpg"} {"video_id": "gSMI5wZHe9w", "text": "augmentations and the effect of this is that the model can start to pick up things about the image that it might not otherwise have paid attention to so for instance if my augmentation number two crops out just the lower right part of this force then the model might be forced to pay attention to other regions of the image like the the hind legs and the tail in order to decide that this is in fact a horse whereas if I had if I had not done that other data augmentation it might have only focused on the head and always made that made", "start_timestamp": "00:05:53", "end_timestamp": "00:06:28", "start_second": 353, "end_second": 388, "url": "https://www.youtube.com/watch?v=gSMI5wZHe9w&t=353s", "title": "FixMatch: Simplifying Semi-Supervised 
Learning with Consistency and Confidence-Covered by Adel Foda", "thumbnail": "https://i.ytimg.com/vi/gSMI5wZHe9w/maxresdefault.jpg"} {"video_id": "gSMI5wZHe9w", "text": "the decision on that basis so I'm sort of I'm applying two random augmentations in order to induce the model to look at look at different parts of the image to to learn what features are relevant for identifying horses alright so fixed match is going to combine these two ideas of pseudo labeling and consistency regularization so we start off with like a small label data set only a few data points we're going to do as much as much learning on that on that data set as possible and then we're going to go and pick some", "start_timestamp": "00:06:28", "end_timestamp": "00:07:04", "start_second": 388, "end_second": 424, "url": "https://www.youtube.com/watch?v=gSMI5wZHe9w&t=388s", "title": "FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence-Covered by Adel Foda", "thumbnail": "https://i.ytimg.com/vi/gSMI5wZHe9w/maxresdefault.jpg"} {"video_id": "gSMI5wZHe9w", "text": "samples out of the unlabeled data set and then we're going to follow first of all the consistency regularization process so we pick we pick we pick an image which is which is which is unlabeled we apply two augmentations to it what one week augmentation that kind of preserves the general sense of what the images and then a strong augmentation and then we ask the following question on the weak augmentation was the mole I'm very confident about what that image was meaning like it achieved let's say greater than eighty percent you know", "start_timestamp": "00:07:04", "end_timestamp": "00:07:42", "start_second": 424, "end_second": 462, "url": "https://www.youtube.com/watch?v=gSMI5wZHe9w&t=424s", "title": "FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence-Covered by Adel Foda", "thumbnail": "https://i.ytimg.com/vi/gSMI5wZHe9w/maxresdefault.jpg"} {"video_id": "gSMI5wZHe9w", "text": "greater 
than eighty percent confidence of that image was a horse and if so we're going to include that into our pseudo labeled set and then we're going to do the consistency regularization process whereby we enforce that the models predictions on the Augmented version of the data set on the Augmented version of the image becomes close to what we now treat as a true label for that image which means that we are sort of we're allowing it to we're allowing it to learn from from the unlabeled data set in the way I mentioned before which", "start_timestamp": "00:07:42", "end_timestamp": "00:08:21", "start_second": 462, "end_second": 501, "url": "https://www.youtube.com/watch?v=gSMI5wZHe9w&t=462s", "title": "FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence-Covered by Adel Foda", "thumbnail": "https://i.ytimg.com/vi/gSMI5wZHe9w/maxresdefault.jpg"} {"video_id": "gSMI5wZHe9w", "text": "is which is that you know this this strong augmentation might remove the part of the image that the model was paying attention to previously so that we are now forcing the model to have a more general understanding of what it means to be a horse yeah yeah Andy basically similar representations of augmentations I think for images the knowledge that a particular augmentation is also angle preserving it's very slow underappreciated well I feel like a lot of work in DNA ends right now is like finding sneaky ways reprieve alleged so", "start_timestamp": "00:08:21", "end_timestamp": "00:09:46", "start_second": 501, "end_second": 586, "url": "https://www.youtube.com/watch?v=gSMI5wZHe9w&t=501s", "title": "FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence-Covered by Adel Foda", "thumbnail": "https://i.ytimg.com/vi/gSMI5wZHe9w/maxresdefault.jpg"} {"video_id": "gSMI5wZHe9w", "text": "he's saying that like the fact that the fact that when we do a random augmentation the label should still be the same is like it's kind of like it's kind of alright let's 
see how this plays out so we start off with one image in the labeled set so that's our labeled set this is the set of unlabeled images on which the model is confident enough above a certain threshold that it can say what they are to get the pseudo-label and these are the training pairs that we're gonna that", "start_timestamp": "00:09:46", "end_timestamp": "00:11:09", "start_second": 586, "end_second": 669, "url": "https://www.youtube.com/watch?v=gSMI5wZHe9w&t=586s", "title": "FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence-Covered by Adel Foda", "thumbnail": "https://i.ytimg.com/vi/gSMI5wZHe9w/maxresdefault.jpg"} {"video_id": "gSMI5wZHe9w", "text": "we're going to train on for this step so training pairs we're just going to take the horse image augment it a little bit and then train our model so the model becomes a little bit better at recognizing horses fine nothing has changed significantly same labeled set the model is still not confident enough to say that any other image is an image of a horse so we just go and generate another augmentation and we keep on training but at a point the model will become confident enough about", "start_timestamp": "00:11:09", "end_timestamp": "00:11:48", "start_second": 669, "end_second": 708, "url": "https://www.youtube.com/watch?v=gSMI5wZHe9w&t=669s", "title": "FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence-Covered by Adel Foda", "thumbnail": "https://i.ytimg.com/vi/gSMI5wZHe9w/maxresdefault.jpg"} {"video_id": "gSMI5wZHe9w", "text": "some other image and it'll say you know what I believe with probability greater than tau that this other image which came from the unlabeled set is also a horse so now we are able to do the consistency regularization process not only with the image that was in our labeled set but also with the image that was in the unlabeled set and this now allows the model to learn more about horses than it knew from that one image like we started off with one image it had particular characteristics", "start_timestamp": "00:11:48", "end_timestamp": "00:12:21", "start_second": 708, "end_second": 741, "url": "https://www.youtube.com/watch?v=gSMI5wZHe9w&t=708s", "title": "FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence-Covered by Adel Foda", "thumbnail": "https://i.ytimg.com/vi/gSMI5wZHe9w/maxresdefault.jpg"} {"video_id": "gSMI5wZHe9w", "text": "now we've been able to add a second image and we can learn some different things that will hopefully allow us to jump to a third image and a fourth image and so on so yeah we just keep training for a bit and then the model becomes confident about the next image and you see this process proceeds iteratively okay so results like I said are impressive what you can do with this so with forty labels they had an error rate of thirteen point eight one so like 86 percent accuracy and the error rate that I mentioned at the start of", "start_timestamp": "00:12:21", "end_timestamp": "00:13:06", "start_second": 741, "end_second": 786, "url": "https://www.youtube.com/watch?v=gSMI5wZHe9w&t=741s", "title": "FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence-Covered by Adel Foda", "thumbnail": "https://i.ytimg.com/vi/gSMI5wZHe9w/maxresdefault.jpg"} {"video_id": "gSMI5wZHe9w", "text": "twenty two percent with one labeled example per class that is derived by picking the best starting point for each of the classes so they went through all of CIFAR-10 there's actually papers on this and they found like the plane picture that is the most plane out of [Laughter] yeah yes that's how they got that if they got a random one I think they got like 68 yeah by the way so that's the most cat that's how clean your labels have to be so first of all you know I mean they're showing this on", "start_timestamp": "00:13:06", "end_timestamp": "00:14:59", "start_second": 786, "end_second": 899, "url": "https://www.youtube.com/watch?v=gSMI5wZHe9w&t=786s", "title": "FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence-Covered by Adel Foda", "thumbnail": "https://i.ytimg.com/vi/gSMI5wZHe9w/maxresdefault.jpg"} {"video_id": "gSMI5wZHe9w", "text": "CIFAR-10 which we all know is perfectly labeled and in real life you know that doesn't generally hold completely agree and secondly this already has a chance of mislabeling things on its own like even if your data set was perfectly labeled there would already be a probability that it goes in and mislabels stuff so you know having incorrect labels in the first place would add to that however since it's a semi-supervised method the initial labeled data set is small so presumably it would be possible", "start_timestamp": "00:14:59", "end_timestamp": "00:15:26", "start_second": 899, "end_second": 926, "url": "https://www.youtube.com/watch?v=gSMI5wZHe9w&t=899s", "title": "FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence-Covered by Adel Foda", "thumbnail": "https://i.ytimg.com/vi/gSMI5wZHe9w/maxresdefault.jpg"} {"video_id": "gSMI5wZHe9w", "text": "to curate that thing and make it as correct as possible yeah I don't recall the discussion of that exactly quite a bit of variance yeah I think that they do experiments where they pick like five random starting points and the variance can be like twenty percentage points of accuracy yeah well I guess you can pick a few random ones see how much variance there is in your specific domain and then you know our understanding is that it's not as bad as MixMatch yep transfer learning I would suspect not because this is CIFAR-10", "start_timestamp": "00:15:26", "end_timestamp": "00:18:20", "start_second": 926, "end_second": 1100, "url":
"https://www.youtube.com/watch?v=gSMI5wZHe9w&t=926s", "title": "FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence-Covered by Adel Foda", "thumbnail": "https://i.ytimg.com/vi/gSMI5wZHe9w/maxresdefault.jpg"} {"video_id": "gSMI5wZHe9w", "text": "so I'm not sure yep I believe so yeah I'm not a however so I'm confident but yesterday the question was did I use transfer learning here I think transfer lying would probably defeat the purpose yeah yep all right let's so let's just get going so this idea of like the you know the most bird bird this is they refer to it as this prototypic allottee and they have they have some examples of what happens when you have a more and less prototypical training set so this is the one that we saw before that was as prototypical as as as could be found and", "start_timestamp": "00:18:20", "end_timestamp": "00:18:58", "start_second": 1100, "end_second": 1138, "url": "https://www.youtube.com/watch?v=gSMI5wZHe9w&t=1100s", "title": "FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence-Covered by Adel Foda", "thumbnail": "https://i.ytimg.com/vi/gSMI5wZHe9w/maxresdefault.jpg"} {"video_id": "gSMI5wZHe9w", "text": "this one down the bottom here is the least prototypical and this is a curve of how the accuracy had the accuracy of the trained model went on each of those sets so it is strongly affected by the starting point so limitations as I mentioned before there's there's prototypic allottee I have kind of a question mark doesn't discuss in the paper but like class imbalance like you know if you have many more examples of one thing than of another which you know I've heard that that can happen I'm not entirely convinced like I think it would", "start_timestamp": "00:18:58", "end_timestamp": "00:19:33", "start_second": 1138, "end_second": 1173, "url": "https://www.youtube.com/watch?v=gSMI5wZHe9w&t=1138s", "title": "FixMatch: Simplifying Semi-Supervised Learning with Consistency and 
Confidence-Covered by Adel Foda", "thumbnail": "https://i.ytimg.com/vi/gSMI5wZHe9w/maxresdefault.jpg"} {"video_id": "9-cyC6O81Bk", "text": "[Music] Hello everybody, my name is Daniel, I work for a company called odd phone, and we are here today to talk about efficient developers. The first thing that I want to do is define what we mean by efficient: to be efficient means achieving maximum productivity without wasting resources, and in our case as developers that resource is usually time, which means to be efficient is to do things fast. But this definition is missing one key element for us, which we can see in this quote by Peter Drucker: to be efficient is to do the things", "start_timestamp": "00:00:00", "end_timestamp": "00:00:46", "start_second": 0, "end_second": 46, "url": "https://www.youtube.com/watch?v=9-cyC6O81Bk&t=0s", "title": "Habits of Efficient Developers", "thumbnail": "https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"} {"video_id": "9-cyC6O81Bk", "text": "right, while to be effective is to do the right things. So to be efficient is not just that we have to be very fast; the things that we do, we have to do them in a proper way, so that the decisions that we take today don't slow us down in the future. Now that we have this quote here, you may wonder, well, what is more important, to be efficient or to be effective? Obviously it doesn't make any sense to go really fast if you end up in the wrong place, but equally, if we know where we want to go but we never reach that place, it doesn't make sense", "start_timestamp": "00:00:46", "end_timestamp": "00:01:18", "start_second": 46, "end_second": 78, "url": "https://www.youtube.com/watch?v=9-cyC6O81Bk&t=46s", "title": "Habits of Efficient Developers", "thumbnail": "https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"} {"video_id": "9-cyC6O81Bk", "text": "either. In fact, there is some synergy between the two: if you are efficient, it takes you less time to do things, which means that you have more time on your hands to stop, look around, and make sure that you're going in the right direction; so to be efficient allows you to be more effective. Now, in this talk we are going to focus on what makes us efficient, and we're going to start by talking about focus. There are plenty of studies that tried to quantify what is the cost", "start_timestamp": "00:01:18", "end_timestamp": "00:01:51", "start_second": 78, "end_second": 111, "url": "https://www.youtube.com/watch?v=9-cyC6O81Bk&t=78s", "title": "Habits of Efficient Developers", "thumbnail": "https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"} {"video_id": "9-cyC6O81Bk", "text": "of an interruption for us developers, and it seems that the cost is around ten or fifteen minutes: every time that somebody comes and interrupts you, it takes us between ten and fifteen minutes to reload the context of the task that we were doing and be productive again. So, to be efficient, it's paramount to minimize the number of interruptions that we get, so that we have long periods of time where we can focus on the task at hand. There are basically two types of interruptions: the ones that you control", "start_timestamp": "00:01:51", "end_timestamp": "00:02:24", "start_second": 111, "end_second": 144, "url": "https://www.youtube.com/watch?v=9-cyC6O81Bk&t=111s", "title": "Habits of Efficient Developers", "thumbnail": "https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"} {"video_id": "9-cyC6O81Bk", "text": "and the ones that other people control. Among the ones that you control, email notifications are probably the worst offenders. If you think that that little pop-up does nothing to your concentration, the truth is that for your brain it looks more like this: you cannot just stop looking at it. Millions of years of evolution have made our brain really sensitive to any unexpected movement, mostly because of the fear of being eaten, so when that little pop-up shows", "start_timestamp": "00:02:24", "end_timestamp": "00:03:04", "start_second": 144, "end_second": 184, "url": "https://www.youtube.com/watch?v=9-cyC6O81Bk&t=144s", "title": "Habits of Efficient Developers", "thumbnail": "https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"} {"video_id": "9-cyC6O81Bk", "text": "up on your screen, you have to focus on it, you don't have an option. So efficient developers, the first thing that they do is disable all notifications, not just email notifications but absolutely all notifications. In fact, you don't even want to see that little badge with the number of unread notifications on your screen, because as soon as that little number changes, your brain is going to pick up the change and you're going to start thinking, well, who could have sent me an email, what do I need to do. You can always deal with emails whenever you", "start_timestamp": "00:03:04", "end_timestamp": "00:03:42", "start_second": 184, "end_second": 222, "url": "https://www.youtube.com/watch?v=9-cyC6O81Bk&t=184s", "title": "Habits of Efficient Developers", "thumbnail": "https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"} {"video_id": "9-cyC6O81Bk", "text": "want, whenever you have the time; nobody should expect that you reply immediately to their emails. There are other means of communication that are more appropriate if something is really urgent. Emails are asynchronous, and it's more efficient to deal with them in batches. The only notification that you want to see is the one that tells you that you broke the build. And please never be one of those guys who type an email and then come over to tell you, just to make sure that you received the email; this is", "start_timestamp": "00:03:42", "end_timestamp": "00:04:16", "start_second": 222, "end_second": 256, "url": "https://www.youtube.com/watch?v=9-cyC6O81Bk&t=222s", "title": "Habits of Efficient Developers", "thumbnail": "https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"} {"video_id": "9-cyC6O81Bk", "text": "really annoying. And this brings us to the other type of interruptions, the ones that you don't control. What can you do when somebody comes to your desk and interrupts your flow? I know three possible options. The first one is to wear some really big headphones, so when somebody comes you pretend that you didn't see him and you hope that he will just walk away. The second option is to have a very good team lead, somebody like this guy, somebody that is able to tackle any interruption before", "start_timestamp": "00:04:16", "end_timestamp": "00:04:54", "start_second": 256, "end_second": 294, "url": "https://www.youtube.com/watch?v=9-cyC6O81Bk&t=256s", "title": "Habits of Efficient Developers", "thumbnail": "https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"} {"video_id": "9-cyC6O81Bk", "text": "it reaches the team. The third option is to do pair programming: if you are doing pair programming, when somebody comes to interrupt you, what should happen is that one of the two developers in the pair stands up, walks away a couple of meters, and deals with the interruption, and when he's done he goes back to the other developer, the one who was able to keep focus, who works like a really fast cache to get him back into a productive state a lot faster. The additional benefit of pair programming is that, because you have another", "start_timestamp": "00:04:54", "end_timestamp": "00:05:26", "start_second": 294, "end_second": 326, "url": "https://www.youtube.com/watch?v=9-cyC6O81Bk&t=294s", "title": "Habits of Efficient Developers", "thumbnail": "https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"} {"video_id": "9-cyC6O81Bk", "text": "developer looking at what you are doing all the time, you are not going to check the news or your phone or your email as often, so that peer pressure is just going to cause you to be more efficient. I don't think I need to explain this one, right? We all know this, and I don't like to sound like your mom when she tells you to eat your greens, so let's move to the next one: efficient developers do one thing at a time, and the reason is exactly the same reason why we hate interruptions: the context switches. Here we see that doing", "start_timestamp": "00:05:26", "end_timestamp": "00:06:02", "start_second": 326, "end_second": 362, "url": "https://www.youtube.com/watch?v=9-cyC6O81Bk&t=326s", "title": "Habits of Efficient Developers", "thumbnail": "https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"} {"video_id": "9-cyC6O81Bk", "text": "the blue task after finishing the green task just takes less time. In fact, it is not just that it takes less time: when you try to do multiple things at the same time, the quality of your work usually suffers. We all know that the definition of multitasking is just screwing up several things at the same time, so you should always focus on one thing; when you finish, then you move to the next task. We are all going to spend thousands upon thousands of hours in front of our IDE; it's one of our main tools, so you really need to know it", "start_timestamp": "00:06:02", "end_timestamp": "00:06:44", "start_second": 362, "end_second": 404, "url": "https://www.youtube.com/watch?v=9-cyC6O81Bk&t=362s", "title": "Habits of Efficient Developers", "thumbnail": "https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"} {"video_id": "9-cyC6O81Bk", "text": "inside out, because any efficiency that you gain in your IDE is going to be multiplied by the thousands of hours that you are going to spend in front of it. You basically need to know two things: its functionality and its shortcuts. Now, just because you are sitting in front of it for six hours a day, it doesn't mean that you are going to master it; to master your IDE you have to make a conscious and deliberate effort to learn it. To find out what functionality you don't know about, you can read the release notes, you can", "start_timestamp": "00:06:44", "end_timestamp": "00:07:18", "start_second": 404, "end_second": 438, "url": "https://www.youtube.com/watch?v=9-cyC6O81Bk&t=404s", "title": "Habits of Efficient Developers", "thumbnail": "https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"} {"video_id": "9-cyC6O81Bk", "text": "follow blogs or YouTube channels of people using it, or you can just do pair programming. When you do pair programming, each of your partners is going to show you how they use the IDE, and is going to show you functionality that you didn't know about, or more efficient ways of doing some tasks; and you can teach them your tips and tricks, so you make the whole team more efficient. I'm always surprised by the amount of manual work that we developers can put up with, and I find it very paradoxical given that we", "start_timestamp": "00:07:18", "end_timestamp": "00:07:56", "start_second": 438, "end_second": 476, "url": "https://www.youtube.com/watch?v=9-cyC6O81Bk&t=438s", "title": "Habits of Efficient Developers", "thumbnail": "https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"} {"video_id": "9-cyC6O81Bk", "text": "are paid to automate other people's jobs. Manual work is not just slower, it's dull, boring, and error-prone, so I always wonder, why do we keep doing it? One of the main reasons is that we sometimes forget that we are developers, and as developers we have this very rare and powerful skill that allows us to create an army of minions that will do as we say: they will not complain, they never get tired, and they do it really fast. And I think we don't use this skill often enough. Sometimes it may be because", "start_timestamp": "00:07:56", "end_timestamp": "00:08:33", "start_second": 476, "end_second": 513, "url": "https://www.youtube.com/watch?v=9-cyC6O81Bk&t=476s", "title": "Habits of Efficient Developers", "thumbnail": "https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"}
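The automation habit described here, reaching for a small throwaway program instead of repeating manual work, can be sketched in a few lines of shell. This mirrors the news-counting demo that follows; the file name `news.json` and the `country_code` field name are assumptions, not taken verbatim from the talk:

```shell
# A tiny stand-in sample for the real news dump: one JSON document per line.
printf '%s\n' \
  '{"id":1,"country_code":"US","site":"cnn.com"}' \
  '{"id":2,"country_code":"US","site":"nytimes.com"}' \
  '{"id":3,"country_code":"IL","site":"ynet.co.il"}' > news.json

# How many news items do we have?
wc -l < news.json

# News per country, most frequent first: grep the field out of each line,
# take the 4th quote-delimited field (the value), then sort | uniq -c.
grep -o '"country_code":"[^"]*"' news.json \
  | cut -d'"' -f4 \
  | sort | uniq -c | sort -rn
```

The same pipeline skeleton (`grep -o | cut | sort | uniq -c`) adapts to the "top three sites per country" variant by grepping one country first and piping through `head -n3`.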
{"video_id": "9-cyC6O81Bk", "text": "we just end up with this kind of minions, but that's a different talk. Just to make sure that this sticks, I'm going to repeat it again: you are developers; you don't do things that your computer can do for you. So efficient developers, before starting any task, always think: can I write a program to do this task, or at least to help me do it? And I'm not just talking about automating some work that can take you hours; I'm also talking about automating tasks that can take you five seconds but that you", "start_timestamp": "00:08:33", "end_timestamp": "00:09:10", "start_second": 513, "end_second": 550, "url": "https://www.youtube.com/watch?v=9-cyC6O81Bk&t=513s", "title": "Habits of Efficient Developers", "thumbnail": "https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"} {"video_id": "9-cyC6O81Bk", "text": "do several times a day, and I'm also talking about writing programs for one-off tasks that you're never going to do again: first because maybe it's more efficient, you can do it faster, but second because I hope that writing programs is fun; it is fun for me, more fun at least than doing things manually. And to write simple programs there is nothing like good old bash. I have been a developer for eighteen years, and during those 18 years I have changed operating systems, I have changed programming languages, I have changed IDEs, I have changed my mind", "start_timestamp": "00:09:10", "end_timestamp": "00:09:44", "start_second": 550, "end_second": 584, "url": "https://www.youtube.com/watch?v=9-cyC6O81Bk&t=550s", "title": "Habits of Efficient Developers", "thumbnail": "https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"} {"video_id": "9-cyC6O81Bk", "text": "about all kinds of ideas and practices; the only thing that has been constant during those 18 years has been bash. So I want to show a little demo of what bash can do for you. Let's say that your business manager tells you that some other team has created a program, and that program is collecting news, and he wants to know how many news per country we have. So you jump into the box, and the first thing that you are going to do is write a program. This is a", "start_timestamp": "00:09:44", "end_timestamp": "00:10:23", "start_second": 584, "end_second": 623, "url": "https://www.youtube.com/watch?v=9-cyC6O81Bk&t=584s", "title": "Habits of Efficient Developers", "thumbnail": "https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"} {"video_id": "9-cyC6O81Bk", "text": "program that is going to tell you how many news items we have: it's around 4000. So now let's see what we have in one of those news items. It seems that it's a piece of JSON, right, but kind of hard to read, so we format it, and we see here a news item with an ID, some social stats, the site that it comes from, and we have here the country. So what we are going to do is write a little program to extract that country: so, country code, quote, single quote... okay, let's see if that works. Yep, so we see here the", "start_timestamp": "00:10:23", "end_timestamp": "00:11:07", "start_second": 623, "end_second": 667, "url": "https://www.youtube.com/watch?v=9-cyC6O81Bk&t=623s", "title": "Habits of Efficient Developers", "thumbnail": "https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"} {"video_id": "9-cyC6O81Bk", "text": "country. So now let's try to extract it: we start doing grep -o, we get something, and now we want to get the whole line, so, so you don't get too lost, here it is, our result: country, and then we have the U.S. So what we are going to do now is split the line with a delimiter of quote and take one field; I think it's the fourth, yes, the fourth field. Cool. So now what we have is a program that extracts the country for one news item, so we just want to do the same for all the news, and then we just need to sort it and count it", "start_timestamp": "00:11:07", "end_timestamp": "00:11:51", "start_second": 667, "end_second": 711, "url": "https://www.youtube.com/watch?v=9-cyC6O81Bk&t=667s", "title": "Habits of Efficient Developers", "thumbnail": "https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"} {"video_id": "9-cyC6O81Bk", "text": "and we want to sort it again so we see it in order, and we get that for this country there are around three thousand news items. But now that you have this program, let's say that instead of the country we're interested in the site: there we see the number of news per site. But now your business manager comes and says, well, I want to know what are the top three websites for each country. So we're just going to modify this program a little bit: we grep the country, let's try for example IL, and if I didn't get that wrong, that does it for one of the countries, so we", "start_timestamp": "00:11:51", "end_timestamp": "00:12:33", "start_second": 711, "end_second": 753, "url": "https://www.youtube.com/watch?v=9-cyC6O81Bk&t=711s", "title": "Habits of Efficient Developers", "thumbnail": "https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"} {"video_id": "9-cyC6O81Bk", "text": "just want to sort them and take the top three, that's it. Now we are going to format it so it all shows on one line. So now we have a program that, for one given country, tells us the top three, so we just need to do a for loop, right? We just need to find all the countries, which we can get basically from our previous program, and we just need to write a for loop: for, in, do, we echo the name of the country, and we finish the for loop; and instead of IL goes the country that we want to know about, and because this", "start_timestamp": "00:12:33", "end_timestamp": "00:13:27", "start_second": 753, "end_second": 807, "url": "https://www.youtube.com/watch?v=9-cyC6O81Bk&t=753s", "title": "Habits of Efficient Developers", "thumbnail": "https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"} {"video_id": "9-cyC6O81Bk", "text": "is bash I need to quote, quote, quote, unquote... and I got something wrong, did I miss anything? Mmm, it's a new error. It doesn't matter; I'm going to move on to the next thing, which is: whenever you are writing a program, you should always put a time limit on the amount of time that you expect to use on it. This is a good example, right, because if I keep going I would spend the next 25 minutes just trying to get this working. So whenever you are trying to write a program to automate a task, the first thing that you should do is set a time", "start_timestamp": "00:13:27", "end_timestamp": "00:14:24", "start_second": 807, "end_second": 864, "url": "https://www.youtube.com/watch?v=9-cyC6O81Bk&t=807s", "title": "Habits of Efficient Developers", "thumbnail": "https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"} {"video_id": "9-cyC6O81Bk", "text": "limit, and if after spending that time limit you are not able to finish, just move on and do things manually, even if you think that that's a waste of time; right, I tried for five minutes to get this thing working and I didn't get it. Well, the truth is that you have learned a little bit; that's not wasted time, that's invested time in you learning and getting better. By the way, there is a nice table from xkcd that tells you how much time you can spend to automate some tasks, so have a look at", "start_timestamp": "00:14:24", "end_timestamp": "00:15:01", "start_second": 864, "end_second": 901, "url": "https://www.youtube.com/watch?v=9-cyC6O81Bk&t=864s", "title": "Habits of Efficient Developers", "thumbnail": 
"https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"} {"video_id": "9-cyC6O81Bk", "text": "that. And if we are talking about writing programs, what you should always avoid is graphical user interfaces. Why? Because you cannot put a UI inside a for loop: GUIs don't compose, they just live in their own little world. Now, I'm not saying that you should never use them, because they are extremely useful when you are getting started, when you are learning something new; but once you are past that beginner phase, you will actually want to do more complex stuff, and GUIs just constrain what", "start_timestamp": "00:15:01", "end_timestamp": "00:15:33", "start_second": 901, "end_second": 933, "url": "https://www.youtube.com/watch?v=9-cyC6O81Bk&t=901s", "title": "Habits of Efficient Developers", "thumbnail": "https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"} {"video_id": "9-cyC6O81Bk", "text": "you can do. And if we're talking about avoiding UIs, the first UI that you want to avoid is your own application's UI: there is nothing less efficient than starting the application, clicking around, and filling up forms to know if the new feature is working or if you broke anything. Apart from making this more efficient, automated tests also give us the confidence to refactor and change code, because they are going to catch bugs, and bugs are the worst time waste of all. First you need", "start_timestamp": "00:15:33", "end_timestamp": "00:16:13", "start_second": 933, "end_second": 973, "url": "https://www.youtube.com/watch?v=9-cyC6O81Bk&t=933s", "title": "Habits of Efficient Developers", "thumbnail": "https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"} {"video_id": "9-cyC6O81Bk", "text": "to write the bug, then somebody has to review the bug, then you need to put the bug into production, and then, by the time some user notices the bug, you have gone through this massive context switch, because you probably wrote the bug several weeks ago, so even if you wrote the code that has the bug, the code is already alien to you and you have to dig into it. And then you need to fix it, you need to get it reviewed, you need to explain it to your boss, you need to file some JIRA issues, and then you need to go again through all the", "start_timestamp": "00:16:13", "end_timestamp": "00:16:43", "start_second": 973, "end_second": 1003, "url": "https://www.youtube.com/watch?v=9-cyC6O81Bk&t=973s", "title": "Habits of Efficient Developers", "thumbnail": "https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"} {"video_id": "9-cyC6O81Bk", "text": "release process. So bugs are just a big waste of time, but worse than a bug is having the same bug twice. So whenever you go and fix a bug, the first thing that you should do is write a test to prove that you are able to reproduce the bug: you see it fail, and then you fix it. And the last thing that you want to avoid doing manually is setting up the development environment; this is going to make not just you more efficient but the whole team more efficient. This is how the instructions for any project that", "start_timestamp": "00:16:43", "end_timestamp": "00:17:21", "start_second": 1003, "end_second": 1041, "url": "https://www.youtube.com/watch?v=9-cyC6O81Bk&t=1003s", "title": "Habits of Efficient Developers", "thumbnail": "https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"} {"video_id": "9-cyC6O81Bk", "text": "I join look like from my point of view, and the only thing clear about them is that they are not going to work: maybe they are missing some step, or they're not precise enough, or maybe I will make some silly mistake when I try to follow them, and the result is always the same: two, three, four days of wasted time. What you want to achieve is instructions as close as possible to this: just one command, and that one command should bring in all the tools and configure them so you are able to build, run, and test your application. If you need a", "start_timestamp": "00:17:21", "end_timestamp": "00:17:56", "start_second": 1041, "end_second": 1076, "url": "https://www.youtube.com/watch?v=9-cyC6O81Bk&t=1041s", "title": "Habits of Efficient Developers", "thumbnail": "https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"} {"video_id": "9-cyC6O81Bk", "text": "database, it should install the database, configure it, and seed it with some data; if you need any build tool, Maven, NPM, whatever, it will download the correct version of Maven and install it, and configure any SDK that you need. As you can see, my tool of choice right now to do this is Docker Compose, which is part of the Docker suite. If you are not familiar with it, this is what an example looks like: here we are defining that our development environment is three containers, a Postgres DB, a Redis DB, and our own application. This has multiple benefits", "start_timestamp": "00:17:56", "end_timestamp": "00:18:29", "start_second": 1076, "end_second": 1109, "url": "https://www.youtube.com/watch?v=9-cyC6O81Bk&t=1076s", "title": "Habits of Efficient Developers", "thumbnail": "https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"} {"video_id": "9-cyC6O81Bk", "text": "right? First, it takes just minutes for somebody new to get started. Also, if something stops working in your development environment, you can just easily wipe the whole thing and start again; and if there is any change to the development environment, it is shared immediately with the whole team, and these instructions never get out of date. Also, because Docker is running things in isolated environments, it means that if two projects that you're working on use completely different versions of a database or of a JDK or SDK, well, they're going", "start_timestamp": "00:18:29", "end_timestamp": "00:19:03", "start_second": 1109, "end_second": 1143, "url": "https://www.youtube.com/watch?v=9-cyC6O81Bk&t=1109s", "title": "Habits of Efficient Developers", "thumbnail": "https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"}
{"video_id": "9-cyC6O81Bk", "text": "to be completely isolated, so one doesn't bother the other. And also, because it's so easy to make changes, it encourages you to experiment: if you want to try a new JDK or SDK or a new version of the database, just make the change and start it, and if you don't like it, you just completely wipe the whole environment. And the last section that we are going to talk about: feedback. It doesn't matter what you are working on, you should always try to find the shortest and tightest feedback loop possible. Feedback is what tells", "start_timestamp": "00:19:03", "end_timestamp": "00:19:38", "start_second": 1143, "end_second": 1178, "url": "https://www.youtube.com/watch?v=9-cyC6O81Bk&t=1143s", "title": "Habits of Efficient Developers", "thumbnail": "https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"} {"video_id": "9-cyC6O81Bk", "text": "us if we are going in the right direction; feedback makes us at the same time more efficient and more effective. You want feedback often and early, to make sure that you don't wander down the wrong path for too long, with the consequent waste of time and energy. We talked about the benefits of automated tests: they save us time, they catch bugs, they allow us to refactor. When is the best moment to write tests? Well, my opinion is before you start doing any coding. If you're not familiar with the TDD workflow, it's", "start_timestamp": "00:19:38", "end_timestamp": "00:20:16", "start_second": 1178, "end_second": 1216, "url": "https://www.youtube.com/watch?v=9-cyC6O81Bk&t=1178s", "title": "Habits of Efficient Developers", "thumbnail": "https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"} {"video_id": "9-cyC6O81Bk", "text": "basically this; I'm going to go really fast through it: you first write one test, and only one test; you run it, you see it fail, you see it red; then you write just enough code to make that test pass; and then you refactor, you clean up your code, running the tests just to make
sure that you didn't break anything. There are at least four reasons why you want to use this workflow. The first one is the fast feedback that it gives you, as you are building the new feature, to know that your code is doing what you expected it to do. The second reason is", "start_timestamp": "00:20:16", "end_timestamp": "00:20:46", "start_second": 1216, "end_second": 1246, "url": "https://www.youtube.com/watch?v=9-cyC6O81Bk&t=1216s", "title": "Habits of Efficient Developers", "thumbnail": "https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"} {"video_id": "9-cyC6O81Bk", "text": "that if you truly believe that automated tests save you time, you want that benefit as soon as possible, as you are developing the new feature. The third reason is organizational. I have heard too many times the phrase, I don't have time to write tests, or, I'm not given the time to write tests, and usually it means: I always write my code first, I finish my feature, and once I finish my feature, then I write my tests; and if there is any time pressure, well, I don't get time to write those tests. And because", "start_timestamp": "00:20:46", "end_timestamp": "00:21:24", "start_second": 1246, "end_second": 1284, "url": "https://www.youtube.com/watch?v=9-cyC6O81Bk&t=1246s", "title": "Habits of Efficient Developers", "thumbnail": "https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"} {"video_id": "9-cyC6O81Bk", "text": "you don't write tests, you don't refactor your code, because to refactor code you need a very good automated test suite; and because you don't refactor your code, your code starts to accumulate garbage; and because your code starts to accumulate garbage, it takes you a little more time to actually build new features; and because it takes you more time to build features, you get more time pressure; and with more time pressure you have less time to write tests, closing a vicious cycle that always ends up the same", "start_timestamp": "00:21:24", "end_timestamp": "00:21:52", "start_second": 1284, "end_second": 1312, "url": "https://www.youtube.com/watch?v=9-cyC6O81Bk&t=1284s", "title": "Habits of Efficient Developers", "thumbnail": "https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"} {"video_id": "9-cyC6O81Bk", "text": "with us developers crying for a rewrite. And the fourth reason why you want to write your test first is more of a mechanical one: seeing a test fail is the test that tests that the test tests what it is supposed to test. Or, in simple words: how do you know that your test doesn't have any bug? If you write a test and you see it red, there is a strong indication that some piece of production code, some logic, is not there; if you write the test and you never see it red, you don't know if it is because you already", "start_timestamp": "00:21:52", "end_timestamp": "00:22:31", "start_second": 1312, "end_second": 1351, "url": "https://www.youtube.com/watch?v=9-cyC6O81Bk&t=1312s", "title": "Habits of Efficient Developers", "thumbnail": "https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"} {"video_id": "9-cyC6O81Bk", "text": "implemented the feature, or because you forgot an assertion in your test, or the setup code is not correct. Now, when you present this idea to a lot of people, they always come up with this phrase: I can't write a test first because I don't know what I'm going to build. And this can mean different things. It can mean that you don't understand what business is asking you to do, and in this case it's true, you cannot write any test; but you cannot write any production code either: what you have to do is go back to business and ask for", "start_timestamp": "00:22:31", "end_timestamp": "00:23:03", "start_second": 1351, "end_second": 1383, "url": "https://www.youtube.com/watch?v=9-cyC6O81Bk&t=1351s", "title": "Habits of Efficient Developers", "thumbnail": "https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"} {"video_id": "9-cyC6O81Bk", "text": "clarification: what do you want me to do? The other case is that you actually understand business and you truly understand the logic that you need to build, but you don't know if you are going to write one class or ten classes, or if you are going to put in an if statement or a switch or a factory; you don't know what you're going to do, but you do understand the logic, and you do understand the mechanics of the side effects: you know which database you are going to use, you have used it ten", "start_timestamp": "00:23:03", "end_timestamp": "00:23:30", "start_second": 1383, "end_second": 1410, "url": "https://www.youtube.com/watch?v=9-cyC6O81Bk&t=1383s", "title": "Habits of Efficient Developers", "thumbnail": "https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"} {"video_id": "9-cyC6O81Bk", "text": "thousand times already, you know the tables, you know everything. In all these cases you can actually write a test first. But it's true that sometimes we don't know how to do the side effects that we are asked for: for example, maybe the logic for your new application functionality needs to call some RESTful endpoint to get some foreign exchange rates, and you have never used it, and you don't know the endpoint, and you don't know what you need to give to it, and you don't know what it's going to give you back; or maybe you need to", "start_timestamp": "00:23:30", "end_timestamp": "00:24:02", "start_second": 1410, "end_second": 1442, "url": "https://www.youtube.com/watch?v=9-cyC6O81Bk&t=1410s", "title": "Habits of Efficient Developers", "thumbnail": "https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"} {"video_id": "9-cyC6O81Bk", "text": 
what we use to fill those gaps to convert unknown side-effects into known side-effects and that's something that TDD doesn't help you with what you want to do is first read the documentation to see if you are able to", "start_timestamp": "00:24:02", "end_timestamp": "00:24:35", "start_second": 1442, "end_second": 1475, "url": "https://www.youtube.com/watch?v=9-cyC6O81Bk&t=1442s", "title": "Habits of Efficient Developers", "thumbnail": "https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"} {"video_id": "9-cyC6O81Bk", "text": "fill those gaps and the second thing you want is to write a lot of little programs to play around with that technology for this the best tool that I know is a REPL REPL stands for read-eval-print loop and it's basically a fancy way of saying that you have like a command-line interface inside your running application instead of trying to explain it let's see it in action if it works this time so I have already started an application with a REPL inside and what I'm going to do from my IDE is connect to that REPL so let's say", "start_timestamp": "00:24:35", "end_timestamp": "00:25:15", "start_second": 1475, "end_second": 1515, "url": "https://www.youtube.com/watch?v=9-cyC6O81Bk&t=1475s", "title": "Habits of Efficient Developers", "thumbnail": "https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"} {"video_id": "9-cyC6O81Bk", "text": "that you didn't know how the plus function works so I write a piece of code in my IDE and now using a shortcut I send that piece of code to the application and the application tells me that 2 plus 3 is 5 yep so I write the code on the top screen and I get the result on the bottom screen so as I was saying this allows you to experiment with the library so maybe what happens if I pass three parameters seems to work what happens if I pass a very big number I get an exception what happens if I just pass", "start_timestamp": "00:25:15", "end_timestamp": "00:25:51", "start_second": 
1515, "end_second": 1551, "url": "https://www.youtube.com/watch?v=9-cyC6O81Bk&t=1515s", "title": "Habits of Efficient Developers", "thumbnail": "https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"} {"video_id": "9-cyC6O81Bk", "text": "one parameter it works no parameters it works so this is just understanding how the library works and this could be an HTTP library a messaging library some concurrency library you are just writing little programs and executing them to see what's the result let's do something slightly more fancy let's say that your business manager tells you that you have to build a new feature and you need some exchange rates for that feature and one of your mates told you that there is a RESTful endpoint to do that and he gives", "start_timestamp": "00:25:51", "end_timestamp": "00:26:25", "start_second": 1551, "end_second": 1585, "url": "https://www.youtube.com/watch?v=9-cyC6O81Bk&t=1551s", "title": "Habits of Efficient Developers", "thumbnail": "https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"} {"video_id": "9-cyC6O81Bk", "text": "you the URL so with that URL over HTTP we make a GET request and we see that we are getting some exception let's try to catch that exception try/catch exception okay try it again so it tells us it's a 400 which means it's our fault and we see here some body so what we are going to do is let's get the body okay that seems to be some piece of JSON so let's parse the JSON there it is so it seems that we're missing some date query parameters query parameters yeah we get these exchange rates what happens if I pass an older date what happens if I", "start_timestamp": "00:26:25", "end_timestamp": "00:27:21", "start_second": 1585, "end_second": 1641, "url": "https://www.youtube.com/watch?v=9-cyC6O81Bk&t=1585s", "title": "Habits of Efficient Developers", "thumbnail": "https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"} {"video_id": "9-cyC6O81Bk", "text": "pass something in the future it still returns
data which is something to be worried about what happens if I pass a string I get that error so what we are doing is probing how the real world works and how are we doing it we write a little program we run it we see the result so it's a very very fast feedback cycle now you may be wondering why don't you use something like Postman to do this right it's just HTTP and REST it must be Postman well there are some benefits of doing it this way the first thing is you have a full language this is a", "start_timestamp": "00:27:21", "end_timestamp": "00:27:51", "start_second": 1641, "end_second": 1671, "url": "https://www.youtube.com/watch?v=9-cyC6O81Bk&t=1641s", "title": "Habits of Efficient Developers", "thumbnail": "https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"} {"video_id": "9-cyC6O81Bk", "text": "production language the one that they use in production which means that I can do for loops and if statements if I want to mix this data with something from the database well I know how to make database calls from the JVM and also if I now go and wrap this exploratory code inside a function this that you see here is production code as you see it there it is going to go to production I'm making the changes in the project this is not a different tool where I then need to take what I got from the tool and translate it to Java or .NET", "start_timestamp": "00:27:51", "end_timestamp": "00:28:33", "start_second": 1671, "end_second": 1713, "url": "https://www.youtube.com/watch?v=9-cyC6O81Bk&t=1671s", "title": "Habits of Efficient Developers", "thumbnail": "https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"} {"video_id": "9-cyC6O81Bk", "text": "or whatever you are using this is production code it's ready to go also because the REPL is running inside your running application you can actually go and poke at the state you can look at the state of your running application and what we are doing here if you notice is we are modifying our running application and we are doing all of this
without having to compile or restart anything that's a very very quick feedback loop and I don't know if I mentioned it but we are connecting to this REPL through a socket and because we are connecting", "start_timestamp": "00:28:33", "end_timestamp": "00:29:06", "start_second": 1713, "end_second": 1746, "url": "https://www.youtube.com/watch?v=9-cyC6O81Bk&t=1713s", "title": "Habits of Efficient Developers", "thumbnail": "https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"} {"video_id": "9-cyC6O81Bk", "text": "through a socket it means that we don't really need to be running this process on my local box it can be running in test or production so you could be inspecting modifying adding log statements into production code without stopping the application this is extremely powerful and you know with great power comes great responsibility so use it with care the last thing that we are going to talk about is code reviews code reviews tell us if the design of the code that we are writing fits the application", "start_timestamp": "00:29:06", "end_timestamp": "00:29:40", "start_second": 1746, "end_second": 1780, "url": "https://www.youtube.com/watch?v=9-cyC6O81Bk&t=1746s", "title": "Habits of Efficient Developers", "thumbnail": "https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"} {"video_id": "9-cyC6O81Bk", "text": "they allow one of your teammates to tell you if you have any bugs and we can also use them to share knowledge right it's a way of sharing knowledge so efficient developers want their code to be code reviewed now there is something very true about code reviews when we are presented with these huge massive changes I don't know what your reaction is but my reaction is something like oh my god yep when we get those but when we get small changes we are able to give useful feedback to the author of the", "start_timestamp": "00:29:40", "end_timestamp": "00:30:17", "start_second": 1780, "end_second": 1817, "url": 
"https://www.youtube.com/watch?v=9-cyC6O81Bk&t=1780s", "title": "Habits of Efficient Developers", "thumbnail": "https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"} {"video_id": "9-cyC6O81Bk", "text": "change right because we are able to understand the change also even if you are a very disciplined developer and you go through that really painful review process in my experience what happens when you go and tell the author like well you know I think this is going to improve your design or we could use a different library that will save us some time or some resources or whatever what usually happens is the author will say like yeah I think you are right but you know I have already spent", "start_timestamp": "00:30:17", "end_timestamp": "00:30:52", "start_second": 1817, "end_second": 1852, "url": "https://www.youtube.com/watch?v=9-cyC6O81Bk&t=1817s", "title": "Habits of Efficient Developers", "thumbnail": "https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"} {"video_id": "9-cyC6O81Bk", "text": "like several days or weeks working on this and the end of the sprint is tomorrow so even if I think you're right I don't think I'm going to have time to do the change you're suggesting because it's going to take me several more days to do it also you know it's already working so let's do something different let's just commit the change as it is and we are going to ask the product owner to create a refactoring story I'm sure he will be delighted to put it on top of the priority queue we all know that those things never happen so you end up", "start_timestamp": "00:30:52", "end_timestamp": "00:31:34", "start_second": 1852, "end_second": 1894, "url": "https://www.youtube.com/watch?v=9-cyC6O81Bk&t=1852s", "title": "Habits of Efficient Developers", "thumbnail": "https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"} {"video_id": "9-cyC6O81Bk", "text": "again with worse code that leads to this poorly implemented feature blah
blah blah so efficient developers don't want just code reviews they want small and early code reviews so what they actually want is continuous code reviews this practice consists of getting one of your teammates to sit just beside you and as you are implementing the feature this developer sitting beside you is going to suggest improvements on your code and is going to be catching bugs that you are making and for the reviewer the", "start_timestamp": "00:31:34", "end_timestamp": "00:32:12", "start_second": 1894, "end_second": 1932, "url": "https://www.youtube.com/watch?v=9-cyC6O81Bk&t=1894s", "title": "Habits of Efficient Developers", "thumbnail": "https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"} {"video_id": "9-cyC6O81Bk", "text": "changes are really really small yeah as you type them he sees those changes and for you as the author you can get feedback even before you start writing any code additionally if for whatever reason you are not able to finish the feature this other developer is able to pick up that feature without any effort because he has been behind each of your decisions so you avoid those knowledge silos within the team also this other developer can work as your personal Stack Overflow because maybe he has already found that", "start_timestamp": "00:32:12", "end_timestamp": "00:32:47", "start_second": 1932, "end_second": 1967, "url": "https://www.youtube.com/watch?v=9-cyC6O81Bk&t=1932s", "title": "Habits of Efficient Developers", "thumbnail": "https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"} {"video_id": "9-cyC6O81Bk", "text": "similar issue and he already knows how to fix it and sometimes you don't even need to ask the question because he sees what you are doing some people also call this pair programming so that's all that I have very briefly focus master your IDE and your tools automate manual work and find yourself the fastest feedback loop possible and as last words you should always find time to stop and reflect on how you are
working and never ever stop learning thank you very much thank you so much for your tips I will definitely start with the notifications", "start_timestamp": "00:32:47", "end_timestamp": "00:33:44", "start_second": 1967, "end_second": 2024, "url": "https://www.youtube.com/watch?v=9-cyC6O81Bk&t=1967s", "title": "Habits of Efficient Developers", "thumbnail": "https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"} {"video_id": "9-cyC6O81Bk", "text": "part tomorrow we got a lot of questions during your talk so let's start with the first one how do we balance avoiding interruptions with work in small dynamic teams that need rapid feedback loops and frequent communication okay if your team is really small and if you are doing pair programming your team becomes tiny yeah and because the team becomes tiny it means that that need for communication the number of edges on the communication graph just reduces so try pair programming thanks", "start_timestamp": "00:33:44", "end_timestamp": "00:34:31", "start_second": 2024, "end_second": 2071, "url": "https://www.youtube.com/watch?v=9-cyC6O81Bk&t=2024s", "title": "Habits of Efficient Developers", "thumbnail": "https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"} {"video_id": "9-cyC6O81Bk", "text": "where can we find your slides everybody wants to carry my laptop if somebody wants to grab my laptop can somebody come and pick it up I will publish them on my personal blog which probably none of you know so I will tweet it I will tweet them now and will put them in some place that you can find that would be great additionally there are recordings of all sessions so you can watch them later or send the links to your colleagues another question how can we efficiently automate ourselves out of the job it", "start_timestamp": "00:34:31", "end_timestamp": "00:35:13", "start_second": 2071, "end_second": 2113, "url": "https://www.youtube.com/watch?v=9-cyC6O81Bk&t=2071s", "title": 
"Habits of Efficient Developers", "thumbnail": "https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"} {"video_id": "9-cyC6O81Bk", "text": "depends if you work for yourself then this is the best thing that you can do yeah this is free money I think it depends on your ethics right if you think that you can actually automate your work why not right the people that are using that tool or your business will be very thankful and you know we have plenty of jobs around the world so don't worry about your job there is a better job out there waiting for you okay I think it's time for our last question what's the worst distraction for a", "start_timestamp": "00:35:13", "end_timestamp": "00:35:53", "start_second": 2113, "end_second": 2153, "url": "https://www.youtube.com/watch?v=9-cyC6O81Bk&t=2113s", "title": "Habits of Efficient Developers", "thumbnail": "https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"} {"video_id": "9-cyC6O81Bk", "text": "developer and it's like well is it Slack I don't think it is Slack to be honest I think the worst distraction is if you have a two-year-old that is knocking on your door and you work from home I think that's worse I am a remote worker but most of the advice that I gave you is from when I was not a remote worker and we used to use Slack a lot and I just muted everybody but they know that if they really need to reach me they know how to get hold of me so nobody feels offended", "start_timestamp": "00:35:53", "end_timestamp": "00:36:30", "start_second": 2153, "end_second": 2190, "url": "https://www.youtube.com/watch?v=9-cyC6O81Bk&t=2153s", "title": "Habits of Efficient Developers", "thumbnail": "https://i.ytimg.com/vi/9-cyC6O81Bk/maxresdefault.jpg"} {"video_id": "VKnoyiNxflk", "text": "In 2016, JAMA published research demonstrating the efficacy of a deep learning algorithm. 
We were able to train a deep learning neural network to recapitulate the majority decision of 7 or 8 US board certified ophthalmologists in the task of grading for a diabetic retinopathy. The type of deep learning algorithm used to detect diabetic retinopathy in that study is called a Convolutional Neural Network, or CNN. CNNs enable computer systems to analyze and classify data. When applied to images, CNNs can recognize that an image shows a dog rather than a cat.", "start_timestamp": "00:00:00", "end_timestamp": "00:00:40", "start_second": 0, "end_second": 40, "url": "https://www.youtube.com/watch?v=VKnoyiNxflk&t=0s", "title": "Machine Learning For Medical Image Analysis - How It Works", "thumbnail": "https://i.ytimg.com/vi/VKnoyiNxflk/maxresdefault.jpg"} {"video_id": "VKnoyiNxflk", "text": "They can recognize the dog whether it's a small part or a large part of the picture - size doesn't matter for this technique. It can also classify the dog by breed. CNN systems have also been developed to help clinicians do their work including selecting cellular elements on pathological slides, correctly identifying the spatial orientation of chest radiographs, and, as Dr. Peng mentioned, automatically grading retinal images for diabetic retinopathy. So let's open the deep learning black box to understand how this works.", "start_timestamp": "00:00:40", "end_timestamp": "00:01:13", "start_second": 40, "end_second": 73, "url": "https://www.youtube.com/watch?v=VKnoyiNxflk&t=40s", "title": "Machine Learning For Medical Image Analysis - How It Works", "thumbnail": "https://i.ytimg.com/vi/VKnoyiNxflk/maxresdefault.jpg"} {"video_id": "VKnoyiNxflk", "text": "First, a CNN is not one process. It's actually a complex network of interconnected processes, organized in layers. With each layer, the CNN can detect higher-level, more abstract features. When the CNN is identifying these features, it uses something called a filter. 
Here's how Larry Carin, one of the authors of a JAMA Guide to Statistics and Methods article on CNNs, describes a filter: So, we think about a medical image, a medical image in radiology or ophthalmology or dermatology is characterized by local structure,", "start_timestamp": "00:01:13", "end_timestamp": "00:01:47", "start_second": 73, "end_second": 107, "url": "https://www.youtube.com/watch?v=VKnoyiNxflk&t=73s", "title": "Machine Learning For Medical Image Analysis - How It Works", "thumbnail": "https://i.ytimg.com/vi/VKnoyiNxflk/maxresdefault.jpg"} {"video_id": "VKnoyiNxflk", "text": "could be textures, it could be edges, it could be curves, corners, etc. And what these filters are doing are constituting little miniature versions of each of these little building blocks. And the way that the CNN looks for these building blocks is the C in CNN, and it stands for convolution. It's a mathematical operation that looks pretty complex. But, actually, it's very simple. It's a very simple concept. It's kind of like you've got this filter, and you're walking to every part of the image, and you're just asking the question, how much does this image look like that filter?", "start_timestamp": "00:01:47", "end_timestamp": "00:02:23", "start_second": 107, "end_second": 143, "url": "https://www.youtube.com/watch?v=VKnoyiNxflk&t=107s", "title": "Machine Learning For Medical Image Analysis - How It Works", "thumbnail": "https://i.ytimg.com/vi/VKnoyiNxflk/maxresdefault.jpg"} {"video_id": "VKnoyiNxflk", "text": "Think of it like this: you have a drawing, that's the image, and you have a stencil, that's the filter. You take that stencil and pass that stencil over that drawing that you have, and as you do that you will see that some parts of the drawing become more visible than others as you do that, right? And that process of sliding that stencil across this drawing is essentially the process of convolution. 
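The stencil-sliding picture described above can be sketched in a few lines of Python. This is a hedged toy illustration, not code from the JAMA study (and strictly speaking it computes cross-correlation, which is what deep-learning libraries implement under the name "convolution"): slide a small filter over an image array and record, at each position, how much the patch looks like the filter.

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide `kernel` across `image`; each output value answers
    'how much does this patch look like the filter?'"""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(ih - kh + 1):
        for x in range(iw - kw + 1):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# A 4x4 "drawing" with a 2x2 diagonal pattern in its top-left corner,
# and a 2x2 diagonal "stencil" (the filter) to slide over it.
drawing = np.array([[1, 0, 0, 0],
                    [0, 1, 0, 0],
                    [0, 0, 0, 0],
                    [0, 0, 0, 0]], dtype=float)
stencil = np.array([[1, 0],
                    [0, 1]], dtype=float)

print(convolve2d(drawing, stencil))  # strongest response (2.0) where the drawing matches the stencil
```

The output map peaks exactly where the stencil lines up with the drawing, which is the "some parts become more visible" effect described in the quote.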
Now that we've explained what a filter is and introduced the concept of convolution, let's use an analogy of written language to understand the relationship between the filters", "start_timestamp": "00:02:23", "end_timestamp": "00:02:59", "start_second": 143, "end_second": 179, "url": "https://www.youtube.com/watch?v=VKnoyiNxflk&t=143s", "title": "Machine Learning For Medical Image Analysis - How It Works", "thumbnail": "https://i.ytimg.com/vi/VKnoyiNxflk/maxresdefault.jpg"} {"video_id": "VKnoyiNxflk", "text": "and the hierarchical structure of the layers in a CNN. We will simplify the explanation by using an analogy. The analogy is a written document. In order to communicate through writing, we organize it as a series of paragraphs, which are composed of sentences, those sentences are composed of words, and the words of letters. So reading a document requires assessing the relationship of letters to one another in increasing layers of complexity, which is a kind of \"deep\" hierarchy, like the hierarchy in image analysis.", "start_timestamp": "00:02:59", "end_timestamp": "00:03:29", "start_second": 179, "end_second": 209, "url": "https://www.youtube.com/watch?v=VKnoyiNxflk&t=179s", "title": "Machine Learning For Medical Image Analysis - How It Works", "thumbnail": "https://i.ytimg.com/vi/VKnoyiNxflk/maxresdefault.jpg"} {"video_id": "VKnoyiNxflk", "text": "Continuing with our analogy, let's say we're looking for the phrase Ada Lovelace in a paragraph. Ada Lovelace was a mathematician and writer who lived in the 19th century. And she holds the honor of having published the very first algorithm intended to be used by a machine to perform calculations, which makes her the first ever computer programmer. In the first layer of the network, a CNN looks for the basic building blocks of an image. The basic building blocks of written language are letters. 
So in this analogy, the filters the CNN uses in the first layer would be letters.", "start_timestamp": "00:03:29", "end_timestamp": "00:04:01", "start_second": 209, "end_second": 241, "url": "https://www.youtube.com/watch?v=VKnoyiNxflk&t=209s", "title": "Machine Learning For Medical Image Analysis - How It Works", "thumbnail": "https://i.ytimg.com/vi/VKnoyiNxflk/maxresdefault.jpg"} {"video_id": "VKnoyiNxflk", "text": "Let's zoom in on the word \"Ada.\" Here is what the convolution process would look like for the letter A. When the \"A\" filter overlies the letter \"A\" in the original image, the convolution output would generate a strong signal. This signal would then be mapped onto something called a feature map. The feature map represents how well elements in the image align with the filter. If something is there, the signal outputs white. If nothing is there, the signal outputs black. CNNs generate a feature map for every filter.", "start_timestamp": "00:04:01", "end_timestamp": "00:04:35", "start_second": 241, "end_second": 275, "url": "https://www.youtube.com/watch?v=VKnoyiNxflk&t=241s", "title": "Machine Learning For Medical Image Analysis - How It Works", "thumbnail": "https://i.ytimg.com/vi/VKnoyiNxflk/maxresdefault.jpg"} {"video_id": "VKnoyiNxflk", "text": "So in our analogy, there would be a feature map for every letter. These feature maps would then become the input for the second layer. In this layer, the CNN would spatially align and \"stack\" all those maps from the previous layer. This would allow the CNN to then look for short, specific sequences of letters in all the feature maps simultaneously. So the CNN would use a new set of filters to look for specific letters that are adjacent to one another in particular sequences. 
In our analogy, the second layer would look for places where the letters A, D,", "start_timestamp": "00:04:35", "end_timestamp": "00:05:09", "start_second": 275, "end_second": 309, "url": "https://www.youtube.com/watch?v=VKnoyiNxflk&t=275s", "title": "Machine Learning For Medical Image Analysis - How It Works", "thumbnail": "https://i.ytimg.com/vi/VKnoyiNxflk/maxresdefault.jpg"} {"video_id": "VKnoyiNxflk", "text": "and A are in sequence together making the word \"ADA\". It would also look for places where letters A, C, E, L, O and V are adjacent to one another using filters for LOVE and LACE. The output of the second layer would be the feature maps for those three sequences of letters. In other words, in those feature maps, strong signals would be present where the sequences ADA, LOVE and LACE are located in the original paragraph. In the third layer, the CNN would stack and align these three new maps and perform more convolutions, this time identifying", "start_timestamp": "00:05:09", "end_timestamp": "00:05:45", "start_second": 309, "end_second": 345, "url": "https://www.youtube.com/watch?v=VKnoyiNxflk&t=309s", "title": "Machine Learning For Medical Image Analysis - How It Works", "thumbnail": "https://i.ytimg.com/vi/VKnoyiNxflk/maxresdefault.jpg"} {"video_id": "VKnoyiNxflk", "text": "where longer words and groups of words are located. So the CNN could at this point identify where in the original paragraph the sequences of letters and words making the phrase \"ADA LOVELACE\" are located. In our analogy, we were looking for a phrase consisting of only two words. Had we been looking for a longer sentence or even a paragraph, the CNN would deal with the greater complexity by having more layers. 
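The two-layer letter analogy can be mimicked with a short script. Everything here is a hypothetical illustration (the `find_sequence` helper is ours, not from the video): layer 1 builds one binary feature map per letter filter, and layer 2 slides sequence filters over the stacked maps to find ADA, LOVE and LACE.

```python
paragraph = "ADA LOVELACE PUBLISHED THE FIRST ALGORITHM"

# Layer 1: one binary feature map per letter filter. A 1 means "this
# filter fired at this position" (a white pixel in a CNN feature map).
letters = "ADELOVC"
feature_maps = {ch: [1 if c == ch else 0 for c in paragraph] for ch in letters}

# Layer 2: slide sequence filters over the stacked letter maps and keep
# the positions where every letter of the sequence fires in order.
def find_sequence(maps, word):
    hits = []
    for start in range(len(paragraph) - len(word) + 1):
        if all(maps[ch][start + i] for i, ch in enumerate(word)):
            hits.append(start)
    return hits

for word in ("ADA", "LOVE", "LACE"):
    print(word, find_sequence(feature_maps, word))
# ADA fires at position 0, LOVE at 4, LACE at 8.
```

A third "layer" could scan these three hit lists the same way to locate the full phrase, which is the point of the hierarchy: each layer only combines the outputs of the layer below it.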
We've omitted quite a few details about CNNs for simplicity, but this captures the essence of the model.", "start_timestamp": "00:05:45", "end_timestamp": "00:06:15", "start_second": 345, "end_second": 375, "url": "https://www.youtube.com/watch?v=VKnoyiNxflk&t=345s", "title": "Machine Learning For Medical Image Analysis - How It Works", "thumbnail": "https://i.ytimg.com/vi/VKnoyiNxflk/maxresdefault.jpg"} {"video_id": "VKnoyiNxflk", "text": "But what does this look like for actual images, like identifying diabetic retinopathy from an ocular photograph? Images are made out of pixels rather than letters. In a digital context, a pixel is the smallest, controllable unit of an image represented on a display. Each pixel is a representation of a tiny portion of the original image. Think about pixels like creating a drawing with dots where every dot has a color value and an intensity. The more dots used, the clearer the image becomes. The filters a CNN uses in that first layer are small squares of pixels that correspond", "start_timestamp": "00:06:15", "end_timestamp": "00:06:49", "start_second": 375, "end_second": 409, "url": "https://www.youtube.com/watch?v=VKnoyiNxflk&t=375s", "title": "Machine Learning For Medical Image Analysis - How It Works", "thumbnail": "https://i.ytimg.com/vi/VKnoyiNxflk/maxresdefault.jpg"} {"video_id": "VKnoyiNxflk", "text": "to things like textures, contrast between two colors, or edges. These are the image analysis-equivalents of the letters used in our analogy. And as a CNN goes up in the hierarchy, it looks for combinations of these filters, getting more and more complex with each layer. As the complexity increases, the CNN gets closer to identifying what it's looking for. So the specific features analyzed at each layer help put the whole thing together. 
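As a pixel-level counterpart to the letter filters, here is a hedged sketch (toy values, not the study's learned filters) of a first-layer filter that responds to a contrast between two colors, i.e. a vertical edge, and the thresholded white/black feature map it produces.

```python
import numpy as np

# Toy 4x5 grayscale image: dark region on the left, bright on the right.
image = np.array([[0, 0, 9, 9, 9],
                  [0, 0, 9, 9, 9],
                  [0, 0, 9, 9, 9],
                  [0, 0, 9, 9, 9]], dtype=float)

# First-layer filter: a 2x2 patch encoding "dark then bright", left to right.
vertical_edge = np.array([[-1, 1],
                          [-1, 1]], dtype=float)

# Slide the filter, then threshold: strong alignment -> white (1), none -> black (0).
kh, kw = vertical_edge.shape
fmap = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
for y in range(fmap.shape[0]):
    for x in range(fmap.shape[1]):
        fmap[y, x] = np.sum(image[y:y + kh, x:x + kw] * vertical_edge)
print((fmap > 0).astype(int))  # the white pixels trace the vertical edge
```

Deeper layers would then convolve over maps like this one, combining edge responses into longer lines and eventually into structures like retinal blood vessels, as the next quote describes.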
So, for example, some of the earlier work showed that some layers tend to be better", "start_timestamp": "00:06:49", "end_timestamp": "00:07:20", "start_second": 409, "end_second": 440, "url": "https://www.youtube.com/watch?v=VKnoyiNxflk&t=409s", "title": "Machine Learning For Medical Image Analysis - How It Works", "thumbnail": "https://i.ytimg.com/vi/VKnoyiNxflk/maxresdefault.jpg"} {"video_id": "VKnoyiNxflk", "text": "at extracting, sort of like, edge-like information. Meaning that, for example, if you combine different kinds of horizontal edges, we might get a continuous line that resembles the retinal blood vessels. And as you combine more of those and start to encode more higher-level concepts such as, you know, is there a micro-aneurysm here, is there bleeding over here, is there other lesions in the image? And right at the very end is where these, after these multiple layers, the network will try to then condense all of that information down into a final prediction.", "start_timestamp": "00:07:20", "end_timestamp": "00:07:57", "start_second": 440, "end_second": 477, "url": "https://www.youtube.com/watch?v=VKnoyiNxflk&t=440s", "title": "Machine Learning For Medical Image Analysis - How It Works", "thumbnail": "https://i.ytimg.com/vi/VKnoyiNxflk/maxresdefault.jpg"} {"video_id": "VKnoyiNxflk", "text": "In this case, severe diabetic retinopathy. Developing a CNN to help identify diabetic retinopathy was motivated because many patients with diabetes are not getting screened frequently enough. We have to screen diabetic patients once a year or we should, and there are some barriers to getting that done. Some of it is just, you know, not having enough trained professionals to do that task. It's also not having that expertise available where the patient is. 
It's not that, you know, there aren't retina specialists", "start_timestamp": "00:07:57", "end_timestamp": "00:08:28", "start_second": 477, "end_second": 508, "url": "https://www.youtube.com/watch?v=VKnoyiNxflk&t=477s", "title": "Machine Learning For Medical Image Analysis - How It Works", "thumbnail": "https://i.ytimg.com/vi/VKnoyiNxflk/maxresdefault.jpg"} {"video_id": "VKnoyiNxflk", "text": "in a metropolitan city four hours away, it's that there isn't a retina specialist at your grocery store. And CNNs could facilitate the integration of diabetic retinopathy and other screening programs into primary care. But before that happens, more research, especially prospective clinical trials, is needed. The way we approach these things is really the way that medicine usually works, which is to say, \"let's do validations of the method again and again and again until we're sure, we're reasonably confident that it really works on many kinds of images,", "start_timestamp": "00:08:28", "end_timestamp": "00:08:58", "start_second": 508, "end_second": 538, "url": "https://www.youtube.com/watch?v=VKnoyiNxflk&t=508s", "title": "Machine Learning For Medical Image Analysis - How It Works", "thumbnail": "https://i.ytimg.com/vi/VKnoyiNxflk/maxresdefault.jpg"} {"video_id": "VKnoyiNxflk", "text": "in many settings for, you know, many different patient populations.\" And so from my perspective that's really at the end of the day what's most important: does it work on real patients and is it reliable? The excitement generated by early results has already spurred several research groups to look into the efficacy of CNNs in clinical practice, which could potentially finally get CNNs from the bench to the bedside. 
I think we're on the third or fourth technological revolution where neural networks are coming to the forefront,", "start_timestamp": "00:08:58", "end_timestamp": "00:09:30", "start_second": 538, "end_second": 570, "url": "https://www.youtube.com/watch?v=VKnoyiNxflk&t=538s", "title": "Machine Learning For Medical Image Analysis - How It Works", "thumbnail": "https://i.ytimg.com/vi/VKnoyiNxflk/maxresdefault.jpg"} {"video_id": "VKnoyiNxflk", "text": "and I really hope that this time we'll get it right. But there were failures in the past where people used the technology in suboptimal ways and we don't want it to happen again. One has to make sure that we have appropriate and sufficient data for development, validation and testing, and that we're solving actual clinical problems. At the end of the day, one thing to take away is that even if, as a clinician, it can be hard to understand exactly how a CNN arrives at its diagnosis, it can still be a useful tool.", "start_timestamp": "00:09:30", "end_timestamp": "00:10:04", "start_second": 570, "end_second": 604, "url": "https://www.youtube.com/watch?v=VKnoyiNxflk&t=570s", "title": "Machine Learning For Medical Image Analysis - How It Works", "thumbnail": "https://i.ytimg.com/vi/VKnoyiNxflk/maxresdefault.jpg"} {"video_id": "VKnoyiNxflk", "text": "And this is similar to how many clinicians use other widely-adopted technologies. Consider antibodies: You know, as a clinician I may not know exactly where that part of an antibody kind of binds to, but I'm comfortable after looking at some of this clinical validation of using Lucentis, for example, for an injection, right. 
This is kind of like any new breakthrough technology: needs validation and needs transparency, but I think, you know, the medical community in general responds very well to new technologies that have been validated.", "start_timestamp": "00:10:04", "end_timestamp": "00:10:36", "start_second": 604, "end_second": 636, "url": "https://www.youtube.com/watch?v=VKnoyiNxflk&t=604s", "title": "Machine Learning For Medical Image Analysis - How It Works", "thumbnail": "https://i.ytimg.com/vi/VKnoyiNxflk/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "[Music] thank you very much hi to everybody my name is Jimmy Nguyen and I am president of Bitcoin Association with me is Steve Shadders who is the technical director of the Bitcoin SV node project which we'll explain as well as CTO of nChain the global leading research development and advisory firm in blockchain technologies we're here today to talk about Bitcoin SV and why it is the massively scaled blockchain to meet developer needs the answer to the question of what developers need is pretty simple but it's very profound big", "start_timestamp": "00:00:00", "end_timestamp": "00:00:53", "start_second": 0, "end_second": 53, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=0s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "and power if there are two things for you to remember today it is big and power those two things in combination are what make Bitcoin SV what developers as well as big enterprises need you don't have to just take my word for it Kronoverse is an eSports and gaming company in the United States which has a monetization platform which uses blockchain technology to create more transparent fair gaming in the eSports world it has a game that's coming into open beta soon called CryptoFights which is like Dungeons & Dragons and it 
allows players", "start_timestamp": "00:00:53", "end_timestamp": "00:01:32", "start_second": 53, "end_second": 92, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=53s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "to create avatars to compete against each other it recently announced through a blog post you know why it was leaving the Ethereum-based Enjin token system to use Bitcoin SV as a blockchain for storing in-game items as well as many other uses and its CEO and founder Adam Kling wrote in this blog post what we'll talk a lot about today he wrote we decided to leave because of problems with Ethereum and what he explained let me get the presentation back what happened here is that he explained that he and his company were making the best", "start_timestamp": "00:01:32", "end_timestamp": "00:02:16", "start_second": 92, "end_second": 136, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=92s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "choice for their business and the decision to transition to BSV was the result of their own research and development efforts they discovered that Ethereum is slow cannot scale it is expensive that proof of work is the only proven consensus mechanism and they lack confidence that proof of stake or delegated proof of stake will work and Ethereum 2.0's scaling approaches are still experimental whereas Bitcoin SV has already demonstrated the scaling ability they also summarize the benefits", "start_timestamp": "00:02:16", "end_timestamp": "00:02:52", "start_second": 136, "end_second": 172, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=136s", 
"title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "of transitioning to Bitcoin SV that it can allow them to create a better item protocol with BSV it's faster and cheaper to conduct trades due to the efficiency of the BSV blockchain BSV is capable of handling millions of transactions without slowing down which is critical to development work and Kronoverse users will also be able to use more available features that Enjin is not capable of such as storing the 3D model of a virtual in-game item on the blockchain which is something that the Ethereum-based Enjin system could not", "start_timestamp": "00:02:52", "end_timestamp": "00:03:30", "start_second": 172, "end_second": 210, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=172s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "provide so that is a great summary of what Bitcoin was born to be Bitcoin people think is merely a payment system a way to transfer monetary value but in the name itself we really understand what Bitcoin was supposed to be a combination of data bit and coin monetary value the fusion of both data and money in a ledger system a blockchain so that is why we've got a battle to preserve the original protocol of Bitcoin Bitcoin has not been allowed over the years to be what it was born to be it has been limited in its scaling", "start_timestamp": "00:03:30", "end_timestamp": "00:04:11", "start_second": 210, "end_second": 251, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=210s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": 
"https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "capability so people do not realize Bitcoin blockchain can power tokens and smart contracts and many other advanced applications so the SV in Bitcoin SV stands for Satoshi Vision because Bitcoin SV is one of the competing chains of Bitcoin but the only one which adheres to the original design protocol and vision of the creator Satoshi Nakamoto to be both a peer-to-peer electronic cash system where you can send monetary value as well as the global enterprise blockchain for the world and today Steve and I are going to explain to you why", "start_timestamp": "00:04:11", "end_timestamp": "00:04:48", "start_second": 251, "end_second": 288, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=251s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "that's important for developers and why that means it is the blockchain for developers to learn and build upon Bitcoin SV has four key pillars in our design which Steve has led with another developer named Daniel Connolly who is in Austria a stable protocol a scalable blockchain at massive scale security as well as safe instant transactions today I want to focus on the first two of those pillars and explain to you why that makes such a big power advantage for developers let's talk first about a stable protocol and", "start_timestamp": "00:04:48", "end_timestamp": "00:05:24", "start_second": 288, "end_second": 324, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=288s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "it's not just any protocol but one with Bitcoin's original design in February of 
this year the team behind Bitcoin SV led the latest upgrade and hard fork of the protocol which was called Genesis because it was designed to restore as much of Bitcoin's original design and protocol as possible that's because over the decade of Bitcoin's existence former protocol developers have changed a lot of Bitcoin's original design it had in it so much of the capabilities developers are looking for in many applications but those were changed those are", "start_timestamp": "00:05:24", "end_timestamp": "00:06:02", "start_second": 324, "end_second": 362, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=324s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "restored through the Genesis hard fork and there were many changes in it but the key things for developers to know are that we removed artificial limits that were previously imposed on the protocol such as block size and transaction and data capacity we restored the full original functionality of Bitcoin script the programming language which is used within the Bitcoin protocol and Steve is going to talk to you about that later in this presentation and what that means for developers and we also removed some detrimental changes that were made to", "start_timestamp": "00:06:02", "end_timestamp": "00:06:34", "start_second": 362, "end_second": 394, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=362s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "Bitcoin's protocol over the years such as sunsetting what's called pay to script hash which has created some significant privacy and security issues in Bitcoin and this was designed to restore Bitcoin's original power 
and then this is very important for you to know to keep the protocol stable Satoshi Nakamoto recognized this far back in Bitcoin's life all the way back in 2010 ten years ago writing at the time that the nature of Bitcoin is such that once version 0.1 was released the core design was set in stone for the", "start_timestamp": "00:06:34", "end_timestamp": "00:07:10", "start_second": 394, "end_second": 430, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=394s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "rest of its lifetime why is that important it's because as developers as well as people who might support big enterprises you need a stable protocol in order to have confidence to build applications I talk to a lot of companies as well as developers around the world who are looking for a blockchain to build on and the one thing they do not want is a constantly changing protocol because otherwise it makes it very difficult to commit your time to start an application or startup venture or your business if you're a big", "start_timestamp": "00:07:10", "end_timestamp": "00:07:44", "start_second": 430, "end_second": 464, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=430s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "company to launch an application on something where the protocol may change so our team has committed to keeping the protocol stable now that the original Bitcoin design has been restored in Bitcoin SV with a stable protocol we also want to unleash the massive scaling power of Bitcoin there have been battles for many years over whether or not the Bitcoin blockchain should scale because Bitcoin Core 
and the BTC network kept the block size small its network is only capable of doing about three transactions a second or at maximum", "start_timestamp": "00:07:44", "end_timestamp": "00:08:21", "start_second": 464, "end_second": 501, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=464s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "seven transactions a second that will nowhere near rival or compete with the payment networks of the world such as Visa which averages 2,000 transactions a second or at peak periods 56,000 transactions a second our belief for Bitcoin SV is that the Bitcoin blockchain can have unbounded scaling there should not be limits placed upon its scaling capability by the protocol developers so that limit has been lifted and the network can scale to whatever competitive market forces require Steve and his team have been leading a lot of", "start_timestamp": "00:08:21", "end_timestamp": "00:08:58", "start_second": 501, "end_second": 538, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=501s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "technical improvements and testing to ensure that Bitcoin can scale this is critical because other competing networks such as the Ethereum network have run into scaling problems which are very well known now Vitalik Buterin in August of last year acknowledged that the Ethereum blockchain was almost full last summer that has led many businesses as well as developers to stop developing applications on that chain and instead look for alternatives what's happening on Bitcoin SV well there is no limit on the block size anymore which means in", "start_timestamp": 
"00:08:58", "end_timestamp": "00:09:36", "start_second": 538, "end_second": 576, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=538s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "any given block there's no limit on the potential number of transactions or data that can be included there are practical limitations that are being improved constantly but we've seen world record blocks mined on Bitcoin SV that are not just one or two megabytes in size in May of this year two blocks over 300 megabytes in size were mined which each had over a million individual transactions that's great but we can still do much more as the technical improvements continue but it's scaling of this type that is necessary", "start_timestamp": "00:09:36", "end_timestamp": "00:10:11", "start_second": 576, "end_second": 611, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=576s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "to create a blockchain for developers to have applications that can power many many transactions our team also runs a scaling test network which is continually testing the bounds of what's possible on the Bitcoin blockchain they've been mining blocks on the scaling test network that have 2.7 to 6 million transactions per block and if that's sustainable which we believe it is and more on the mainnet you're going to have theoretical throughput that exceeds 6,000 transactions a second and that's what as a developer you want to", "start_timestamp": "00:10:11", "end_timestamp": "00:10:46", "start_second": 611, "end_second": 646, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=611s", "title": "Bitcoin SV: The Massively 
Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "see you don't want to spend your time building something if you cannot support large numbers of transactions it also means the capability to do micro transactions meaning payments or data where you're sending small amounts of money that are attached to the transaction and by small we mean even fractions of a U.S. cent and that's possible because of the massive scaling we're seeing on other networks such as Ethereum that there are periods of congestion where the fees that are required to send a transaction skyrocket they get high here for example", "start_timestamp": "00:10:46", "end_timestamp": "00:11:23", "start_second": 646, "end_second": 683, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=646s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "is from November of last year there was a headline noting that the Ethereum transaction fees were soaring to over 30 US dollars to send a single transaction which is far too high if you want to create tokens or smart contract systems or Internet of Things communications on a blockchain whereas on Bitcoin SV because our scaling is unbounded and block size is now not limited by the protocol transaction fees are not just really low they're very reliable they are not one cent tomorrow but 30 US dollars a week from now that is very", "start_timestamp": "00:11:23", "end_timestamp": "00:12:00", "start_second": 683, "end_second": 720, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=683s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": 
"https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "difficult for businesses to predict their fee system I just checked yesterday and to send a payment transaction on Bitcoin SV the average as of yesterday was less than one hundredth of one U.S. cent that means you can create business models and payment systems that do many many things and it allows us to envision what the Internet can be a new world where all of our online interactions and online activity can be monetized where you might actually have to pay a small amount of money to do things on the", "start_timestamp": "00:12:00", "end_timestamp": "00:12:37", "start_second": 720, "end_second": 757, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=720s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "internet but you can get paid back and we're seeing in our ecosystem developers who are creating applications where they are reinventing social media so for example to stop fraud and fake accounts on something like Twitter you have to actually pay to post a social media message or to reply and like but then someone pays you back in tiny amounts of Bitcoin SV to your wallet in order to engage with your content to like it to reply to it models where people pay to store your identity or data on chain but then you as a consumer", "start_timestamp": "00:12:37", "end_timestamp": "00:13:13", "start_second": 757, "end_second": 793, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=757s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "or user can get paid by authorizing companies to access your data on the blockchain 
small amounts again of Bitcoin SV same thing for reinventing how we access media what if you had to pay per article to read online news as opposed to paying a monthly or annual subscription fee so these are the fascinating types of creative ideas that can be explored if you have a blockchain where it costs only 1/100 of a U.S. cent to do any individual transaction you can change how our entire online activity occurs and it means we can move to a", "start_timestamp": "00:13:13", "end_timestamp": "00:13:52", "start_second": 793, "end_second": 832, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=793s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "world where everything is not just online as the internet allowed us to create but potentially anything could be on chain your data our interactions on the internet such as web searches our e-commerce systems everything can be recorded tracked and monetized through the Bitcoin blockchain as a ledger that combines your data with monetary value that leads to this concept which we call the Metanet a new more commercial Internet for all of us as individual users because we're able to monetize our individual activity and", "start_timestamp": "00:13:52", "end_timestamp": "00:14:35", "start_second": 832, "end_second": 875, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=832s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "data in the online environments there's a whole paradigm that's happening in Bitcoin SV where we're taking a look at the blockchain as a universal server where data websites and content can be hosted on chain and accessed via Internet browsers and applications from 
a blockchain there's a protocol that nChain's team has created which organizes the data structure over Bitcoin so that people can create applications and easily find data from the blockchain so here's an image of imagining the blocks on the right of the screen as the", "start_timestamp": "00:14:35", "end_timestamp": "00:15:14", "start_second": 875, "end_second": 914, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=875s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "blockchain allowing you to store data people creating search engines to access that data you as a developer to create applications that leverage that data and allow people as users to even manage their identity from the blockchain creating a new form of internet so with this concept and with this big power of the blockchain with a stable protocol and massive scaling what are we seeing that enterprises as well as particularly developers can build on Bitcoin SV so an example which developers will appreciate is a company called Codugh with", "start_timestamp": "00:15:14", "end_timestamp": "00:15:55", "start_second": 914, "end_second": 955, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=914s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "some young entrepreneurs out of Australia they won the second hackathon we ran in the Bitcoin SV world and what they've created is an API marketplace for developers because it's hard if you're a developer when you create you know something unique in a software application it's hard to monetize it people put things on GitHub and allow other developers to use it for free but what if you could get rewarded for your work 
so Codugh's created a marketplace where as a developer you can upload your work and charge other people developers", "start_timestamp": "00:15:55", "end_timestamp": "00:16:31", "start_second": 955, "end_second": 991, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=955s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "companies businesses around the world to use your work product by paying you small amounts of Bitcoin SV for example for every API call or a set number of API calls whatever business or revenue model you want to set which is possible because of the tiny tiny transaction fees that it costs to send and engage in transactions on Bitcoin SV another good example is a company out of South Korea a company called One Store which runs one of the largest mobile app stores in South Korea one of its executives is a big believer in Bitcoin SV and they", "start_timestamp": "00:16:31", "end_timestamp": "00:17:10", "start_second": 991, "end_second": 1030, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=991s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "created a mobile app called Buskon which allows music artists to be able to be rewarded for their work as a fan you can download the mobile app you can view access listen to content such as music videos from your music artists that you like and then reward the music artist in a token called a Touch token that is built on Bitcoin SV and then the artist gets paid in small amounts of Bitcoin SV tokens this has led as examples to a bright world of Bitcoin SV development happening across the world someone in Korea actually keeps a chart", "start_timestamp": 
"00:17:10", "end_timestamp": "00:17:52", "start_second": 1030, "end_second": 1072, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=1030s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "of the known companies services and projects that are on Bitcoin SV and right now there's over 400 that are known I'm sure there's even far more you can see here a chart with companies and services we won't have to talk about them all but it shows you the number that's happening around the world there's also resources for developers as Steve will tell you about and then also people are creating protocol layers to create the rule sets for all kinds of different functionality from smart contracts to tokens and it's really", "start_timestamp": "00:17:52", "end_timestamp": "00:18:23", "start_second": 1072, "end_second": 1103, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=1072s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "expanding what is capable of being done on Bitcoin SV a company out of Spain called HandCash which runs one of the most popular wallets in the Bitcoin SV world has created HandCash Connect it's an SDK which contains tool sets you could see here of six main functions instant payments encryption identity and login as examples to make it easy for developers and businesses to integrate Bitcoin SV payments into any application whether it be web-based a mobile application or even smart devices so these are the things that are available to", "start_timestamp": "00:18:23", "end_timestamp": "00:19:01", "start_second": 1103, "end_second": 1141, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=1103s", "title": "Bitcoin 
SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "make it easier for you as a developer to start integrating Bitcoin SV into almost anything you can imagine recently Maxthon which is a company that provides a browser which is one of the most popular in the world in certain regions with over 670 million people having downloaded it as their default browser they announced the beta launch of a new version of their browser called Maxthon 6 and it is the first Bitcoin SV powered browser what does that mean they're building a browser for this Metanet world I mentioned where applications", "start_timestamp": "00:19:01", "end_timestamp": "00:19:38", "start_second": 1141, "end_second": 1178, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=1141s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "and online activity are driven by the Bitcoin SV blockchain so in the future the browser will integrate very easily with BSV applications and you'll see here a variety of the things that they're already releasing on the top right corner of the screen you see something called blockchain ID manager VBox this is a tool system to allow you as a user to maintain your identity information on the blockchain so it's never deleted and allow you to log into applications across the internet from a single identity so you don't need for example to log", "start_timestamp": "00:19:38", "end_timestamp": "00:20:18", "start_second": 1178, "end_second": 1218, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=1178s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": 
"https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "into applications using Google or Facebook as a lot of people do today you can have all of your identity stored in one place on the blockchain and the browser here is designed to specifically make it easier to log into applications across this new Metanet world using an ID system that is managed on the blockchain they're also creating a browser that will allow ecommerce sites when you're shopping to easily integrate with your Bitcoin SV wallets you can easily pay as you see this example on the right side with both your identity", "start_timestamp": "00:20:18", "end_timestamp": "00:20:54", "start_second": 1218, "end_second": 1254, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=1218s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "and Bitcoin from a wallet so Jeff Chen from Maxthon who's the CEO summarized very well why we believe so much in Bitcoin SV for development he said we want the Maxthon browser to be the online application platform and users' window to the exciting world of Bitcoin SV combined with the internet our goal is to make it so easy and seamless to use BSV applications that users do not even need to know that BSV is involved in the background consumers will just know that they are using and earning electronic cash from their online content data interactions", "start_timestamp": "00:20:54", "end_timestamp": "00:21:34", "start_second": 1254, "end_second": 1294, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=1254s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "and activities this is the future of the internet and why we 
believe in Bitcoin SV that summarizes well what we think is the future and what Steve will talk to you about next Bitcoin is more than just a payment system it is technology plumbing that may sound boring but it is the plumbing the infrastructure just like the Internet Protocol that drove Internet growth that will allow the creation and development of many fascinating applications so that in the future consumers users will probably not even know that things they are using are", "start_timestamp": "00:21:34", "end_timestamp": "00:22:12", "start_second": 1294, "end_second": 1332, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=1294s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "built on the Bitcoin blockchain it will work underneath invisibly to power worlds of data micro transactions and all of our future online activity so I'm going to turn it over to Steve Shadders now to tell you a lot about what developers can look forward to using on Bitcoin SV thanks Jimmy just let me transition everybody do I have the right screen so someone will tell me in a moment thank you so here I want to talk to you a bit about some of the things that are possible on Bitcoin SV that don't", "start_timestamp": "00:22:12", "end_timestamp": "00:23:09", "start_second": 1332, "end_second": 1389, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=1332s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "really fit the narrative of what you could or should be able to do with a blockchain at the moment I've got the screen around the wrong way we're good now okay so in the very beginning 
of Bitcoin, Satoshi Nakamoto designed a system that was very open and had very few limitations. He included an op code that allowed you to push up to four gigabytes of data to the stack in a single Bitcoin script; in fact you could use that op code many times, so there was no effective limit to", "start_timestamp": "00:23:09", "end_timestamp": "00:23:48", "start_second": 1389, "end_second": 1428, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=1389s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "the amount of data that you could put into a transaction. He made it very clear that Bitcoin was intended to scale, and that the endgame for miners was to be operating out of large data warehouses. He also put an incredibly rich scripting language into Bitcoin that made it very clear that Bitcoin is not just about payments; Bitcoin is about anything you can imagine that is possible using a Turing-complete programming language. But for a long time that's simply not been possible", "start_timestamp": "00:23:48", "end_timestamp": "00:24:27", "start_second": 1428, "end_second": 1467, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=1428s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "because, to be honest, a bunch of people decided that we just weren't allowed to do that. Bitcoin SV developers do not want to be the guardians or the gatekeepers that say what you can do on Bitcoin. Bitcoin is a free market, and it really depends on what makes sense: what makes sense economically, what makes sense in terms of improving people's user experience, what makes sense in terms of
improving people's lives. That is what will govern what can and can't be done on Bitcoin. So let's step into a few of the things that", "start_timestamp": "00:24:27", "end_timestamp": "00:25:07", "start_second": 1467, "end_second": 1507, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=1467s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "are possible on Bitcoin that don't fit the typical blockchain narrative. So today we're going to talk about the three bigs: big blocks, big transactions and big scripts. And I suppose I'm speaking to people here as potential developers, because I'm talking about the potential that can be unlocked as part of the developer experience. We'll start with big blocks, but I make the point that big blocks are not something you should have to worry about; we're just going to talk about big blocks because they are what enables you to", "start_timestamp": "00:25:07", "end_timestamp": "00:25:45", "start_second": 1507, "end_second": 1545, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=1507s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "do all of these things that we've been told for so long are not possible. So firstly, why do big blocks matter? Well, it goes to the structure of the network, and in fact the structure of the Bitcoin network that was described by Satoshi Nakamoto is actually very different to the way most blockchain networks look. You'll see over on the left the title of this slide is minimal hardware required. What I mean by that is that as a developer or a user of Bitcoin you shouldn't have to have a data center full of computing equipment",
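The multi-gigabyte data pushes described earlier come from Bitcoin script's pushdata op codes, whose length prefix caps the size of a single push; the 4-byte variant is what allows pushes approaching four gigabytes. A minimal sketch of that serialization, using the standard opcode values (the OP_FALSE OP_RETURN data-carrier usage at the end is an illustration, not a consensus rule):

```python
def pushdata(payload: bytes) -> bytes:
    """Serialize a minimal-length data push for Bitcoin script."""
    n = len(payload)
    if n < 0x4c:                  # short pushes encode the length directly
        return bytes([n]) + payload
    if n <= 0xff:                 # OP_PUSHDATA1: 1-byte length field
        return b"\x4c" + bytes([n]) + payload
    if n <= 0xffff:               # OP_PUSHDATA2: 2-byte length field
        return b"\x4d" + n.to_bytes(2, "little") + payload
    # OP_PUSHDATA4: 4-byte length field, so a single push can carry ~4 GB
    return b"\x4e" + n.to_bytes(4, "little") + payload

# A simple OP_FALSE OP_RETURN data-carrier script with one push.
carrier_script = b"\x00\x6a" + pushdata(b"hello")
```

Because the length field is little-endian and 4 bytes wide for OP_PUSHDATA4, the theoretical per-push ceiling is 2^32 - 1 bytes, which matches the "four gigabytes" figure in the talk.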
"start_timestamp": "00:25:45", "end_timestamp": "00:26:22", "start_second": 1545, "end_second": 1582, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=1545s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "so that you can participate in the Bitcoin network you shouldn't even have to have a 200 gigabyte hard drive so that you can participate in the network by running the so-called full full node software what happens when when you have a world full of people that are trying to participate in a network on an equal footing like that is you develop a mesh network that starts to look a little bit like the diagram we have here this is a very small-scale imagining of that that the typical Bitcoin interaction if you consider the payment case is usually", "start_timestamp": "00:26:22", "end_timestamp": "00:27:05", "start_second": 1582, "end_second": 1625, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=1582s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "between a customer and a merchant or a shop in in this case and let me show you how that works when you have a mesh type network on a that is typical of most block chains today the customer in the shop already has a connection to each other because you've gone to their website to say that you want to buy a thing or rum or your you're trying to pay them so the shop has already told you that you need to pay them but you know in a typical block chain today you need to transmit that that last message which is the actual transaction through", "start_timestamp": "00:27:05", "end_timestamp": "00:27:44", "start_second": 1625, "end_second": 1664, "url": 
"https://www.youtube.com/watch?v=StnbUNn92vA&t=1625s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "from customer to to merchants and and this is how it typically happens rather than sending it directly to the shop which would make sense because you've already got a connection to them you get the the cloud of of peers in the Bitcoin of the etherium or narrow networks whichever one it happens to be and your message in this case is the transaction bounces from box to box to box to box until it eventually pops out the other end now in a good day this will take 5 10 seconds on a bad day this can take potentially 30 seconds or", "start_timestamp": "00:27:44", "end_timestamp": "00:28:19", "start_second": 1664, "end_second": 1699, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=1664s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "even minutes and if you're talking about a payment experience that's just not good enough because we are competing with with the payment experience that that 7 billion people are used to it or maybe not 7 billion people are used to to the pay wave type metropoliz experience but but we know that you can walk into a shop literally tap a card and within a fraction of a second the transaction is done so that's what we have to compete with and 5 seconds is even as it is it's just not good enough but by the reason we have this", "start_timestamp": "00:28:19", "end_timestamp": "00:28:54", "start_second": 1699, "end_second": 1734, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=1699s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve 
Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "configuration is simply because all of these nodes that you see on this on the screen right now are not actually economically incentivized to to run on well provisioned hardware everyone is told that everybody has to run a note of course you're not getting paid to run an ode so the costs associated with running running an ode or something that you're concerned about and that leads to this mesh kind of network configuration because the more connections you have with other people and the highest speed those connections", "start_timestamp": "00:28:54", "end_timestamp": "00:29:28", "start_second": 1734, "end_second": 1768, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=1734s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "are the more traffic you're going to be sending and receiving and of course the port is going to cost you and that's a real concern if there's no revenue coming in but the way that the Bitcoin network was intended to be configured was that the people that are responsible for the connectivity are the miners themselves and the miners are the ones that actually get paid as a part of the operations with a Bitcoin network now consider for a moment project project into the future where transaction fees transaction volume is high enough that", "start_timestamp": "00:29:28", "end_timestamp": "00:29:58", "start_second": 1768, "end_second": 1798, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=1768s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "just to pick an easy number to work with 
let's say there's six hundred thousand dollars worth of transaction fee revenue coming through every block, every ten minutes. Now that sounds like a lot of money, but break it down over millions and millions of transactions, lots and lots of people paying a hundredth of a cent each; it's a lot of transactions, but I'll show you in a minute why that actually becomes manageable. So, six hundred thousand dollars every ten minutes: ten minutes is 600 seconds, so that's a thousand dollars a second worth of", "start_timestamp": "00:29:58", "end_timestamp": "00:30:32", "start_second": 1798, "end_second": 1832, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=1798s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "revenue that's coming through the system. Now if you're a miner and your network connections aren't up to scratch and you can't receive the transactions fast enough, even if you're delayed by just five seconds in receiving the transactions that everybody else is getting straight away, that's five thousand dollars worth of revenue that you're not going to get when you find a block. And that's just one block, one ten-minute unit; imagine that over the course of a year. So miners are highly incentivized to spend money on", "start_timestamp": "00:30:32", "end_timestamp": "00:31:03", "start_second": 1832, "end_second": 1863, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=1832s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "increasing their network connection capacity, and they're very incentivized to be connected directly to all of the other miners so that they can all grab and hoover up all of the
transactions that are coming into the network, because each one of those transactions represents revenue. And so what happens? The network starts to reform itself into something that is much more like what Satoshi Nakamoto actually described in the early days of Bitcoin. You see right there at the center, these are nodes, otherwise known as miners. They are the ones that", "start_timestamp": "00:31:03", "end_timestamp": "00:31:41", "start_second": 1863, "end_second": 1901, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=1863s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "actually run the network, operate the network. They're the ones that are incentivized to have full copies of the blockchain, the massive hard drives that are required as blocks get bigger. And out on the edge, those white circles are actually peers. I don't call them nodes, I call them peers, because they're actually the users of the network, and that can be people who want to make payments, or applications, even very big applications that are serving millions of users. They don't need to see", "start_timestamp": "00:31:41", "end_timestamp": "00:32:12", "start_second": 1901, "end_second": 1932, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=1901s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "everything that's going on in the Bitcoin network; they need to be able to see the parts that concern them, and they need to be connected to a very, very fast backbone, which is what the miners provide. Now I've only shown five mining nodes here, but of course that number can vary quite significantly, and I would expect it to be more, but
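The revenue arithmetic above ($600,000 of fees per ten-minute block, and what a five-second propagation delay costs a miner) is easy to check; the figures are the speaker's hypothetical round numbers, not real network data:

```python
# Hypothetical figures from the talk, not measured network data.
fee_revenue_per_block = 600_000      # dollars of fee revenue per block
block_interval_seconds = 600         # one block every ten minutes

revenue_per_second = fee_revenue_per_block / block_interval_seconds  # 1000.0

# A miner whose connectivity lags by 5 seconds misses the fees from
# 5 seconds' worth of transactions whenever it finds a block.
missed_per_block = 5 * revenue_per_second  # 5000.0
```

This is the incentive the talk describes: the lost fees scale linearly with propagation delay, so well-connected miners earn strictly more per block found.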
there's only room for so many circles on the page. Now remember, back at the beginning of this section I showed you the animation of the transaction floating from the", "start_timestamp": "00:32:12", "end_timestamp": "00:32:46", "start_second": 1932, "end_second": 1966, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=1932s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "customer all the way through the network, bouncing around and eventually getting to the recipient, which was the merchant. So let's see how it actually happens the way that Satoshi described it, because the very first version of Bitcoin included a feature called IP to IP, which is the very definition of peer-to-peer: it's a peer with an IP address connecting to another peer with an IP address. This has subsequently been removed from Bitcoin, and we are putting it back in. So this is actually how a transaction is supposed to flow:", "start_timestamp": "00:32:46", "end_timestamp": "00:33:18", "start_second": 1966, "end_second": 1998, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=1966s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "customer and shop are already in touch with each other, because they've decided to make a transaction, to execute a purchase of some goods or services. So the customer sends the transaction directly to the shop. That makes sense, because the shop is the one that actually cares about getting the transaction confirmed, so the shop takes responsibility for sending it out to the miner. Now the miners are all incredibly densely connected, so instead of those five hops that we saw before
let's see what happens this time: bang, immediately as", "start_timestamp": "00:33:18", "end_timestamp": "00:33:51", "start_second": 1998, "end_second": 2031, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=1998s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "soon as it hits one miner it's at all of the others, and so you've got incredibly low latency. You've actually got a speed of confirmation that can match the competitors in the fiat world. And not only that, but the miner that you send it to, and in fact you can send it to more than one if you want to, can actually send you a response back and tell you the status of your transaction. We'll get further into that in a little bit when we talk about the merchant API. But back to the overarching topic, which is big", "start_timestamp": "00:33:51", "end_timestamp": "00:34:25", "start_second": 2031, "end_second": 2065, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=2031s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "blocks. It's big blocks that make this possible; it's big blocks that drive the economic model that incentivizes miners to form this backbone. And this is just an example, one of the blocks I think Jimmy actually showed you before, showing the 370 megabytes and 1.3 million transactions. These are a couple of charts that show the trajectory of Bitcoin SV over the last 18 months. On the top here is actually a chart of the largest block in any 24-hour period; the actual average follows the", "start_timestamp": "00:34:25", "end_timestamp": "00:35:02", "start_second": 2065,
"end_second": 2102, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=2065s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "same trend but the number is obviously a bit lower it's it's still running at about double BTC and as far as I know is the largest of any of the block chains in the world but you can see that steady growth it's it's a logarithmic curve if this was a linear graph then it would be a much much steeper angle and probably wouldn't fit on the page but that capacity is there it's been demonstrated day in day out since since Bitcoin as we emerged onto the scene and down below again is the same graph but this time we're measuring the number of", "start_timestamp": "00:35:02", "end_timestamp": "00:35:37", "start_second": 2102, "end_second": 2137, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=2102s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "transactions per block so it's growing and that keeps the between a sweet team on our toes of course because we've got to make sure that the software can all is supply as much capacity as there is demand for and we're fairly comfortable we're about a hundred times further in front than we need to be but it pays for us to always remain vigilant so why do big blocks matter for you well the fact that you can Democrat out of capacity is very important from a fee perspective because it's only when block chains run", "start_timestamp": "00:35:37", "end_timestamp": "00:36:12", "start_second": 2137, "end_second": 2172, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=2137s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer 
Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "transactions per block. So it's growing, and that keeps the Bitcoin SV team on our toes, of course, because we've got to make sure that the software can always supply as much capacity as there is demand for. We're fairly comfortable; we're about a hundred times further in front than we need to be, but it pays for us to always remain vigilant. So why do big blocks matter for you? Well, the fact that you don't run out of capacity is very important from a fee perspective, because it's only when blockchains run", "start_timestamp": "00:35:37", "end_timestamp": "00:36:12", "start_second": 2137, "end_second": 2172, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=2137s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "out of capacity that you start to see huge spikes in fees: there's not enough capacity for everybody, so people start bidding against each other, and they bid the price of the transaction up. That should never happen on a blockchain. There should be a market, but the market is there to determine how low fees can go, not how high they can go. And of course, from a developer's point of view, you don't want to be looking over your shoulder constantly, counting how many transactions your application is making and how many bytes are in each", "start_timestamp": "00:36:12", "end_timestamp": "00:36:42", "start_second": 2172, "end_second": 2202, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=2172s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "transaction. You might even want to be able to use many transactions in cases where you've traditionally been told that you have to use one. Imagine a large payment of, let's say, $1,000 that you don't want to be known to the whole world. Well, something you could do is break it up into 20 small, completely separate transactions, which is something that would be unthinkable on, for example, BTC or Ethereum because of those transaction fees. So big transactions have enabled all sorts of use cases that are not", "start_timestamp": "00:36:42", "end_timestamp": "00:37:16", "start_second": 2202, "end_second": 2236, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=2202s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "
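Breaking a $1,000 payment into 20 separate transactions, as suggested above, only makes sense when per-transaction fees are negligible. A hypothetical helper (an illustration, not any wallet's API) that splits an amount into random positive parts that still sum to the total:

```python
import random

def split_payment(total_cents: int, parts: int, rng: random.Random) -> list[int]:
    """Split total_cents into `parts` random positive amounts summing to the total."""
    # Choose parts-1 distinct cut points in (0, total), then take the gaps
    # between consecutive cuts as the individual transaction amounts.
    cuts = sorted(rng.sample(range(1, total_cents), parts - 1))
    bounds = [0] + cuts + [total_cents]
    return [hi - lo for lo, hi in zip(bounds, bounds[1:])]

# $1,000 in cents, split into 20 separate transaction amounts.
amounts = split_payment(100_000, 20, random.Random(42))
```

Randomized amounts (rather than 20 equal payments of $50) make it harder for an observer to link the pieces back together as one logical payment.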
all common on on block chains in general we'll just pick a couple out there bits to Gramma's sounds very much like Instagram and it's much like what you would think it is it's you know image images your own images thought unchanged whether SV is an archival service which i think is going to be very useful for researchers in 50 or 100 years time because whether data that's being put on chain is is immutable that can't be changed if anyone ever tries to use a fake version of that data it's trivial for anyone to prove and of course crypto", "start_timestamp": "00:37:16", "end_timestamp": "00:37:55", "start_second": 2236, "end_second": 2275, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=2236s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "fights I guess the the more fun end of a patient development that you can hook into a blockchain for all to gain all sorts of benefits of immutability now this last example is a little bit drier but it's a really good example of why scale matters in terms of enabling use cases that otherwise would be impossible the HR day there is a big player in the USA a u.s. 
pharmaceutical industry. They're addressing a specific problem, the opioid crisis, which is in the news pretty much every day in the United States; it's killing a lot of people. And they're", "start_timestamp": "00:37:55", "end_timestamp": "00:38:38", "start_second": 2275, "end_second": 2318, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=2275s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "trialing a solution right now, which we were shown, for dealing with this prescription data in an on-chain way. Now this is health records, so of course privacy is paramount, and whilst I won't go into all of the details because it would take too long, it shows that the privacy of on-chain data is entirely possible. But one of the key concerns that they had in looking at a blockchain solution was capacity: when this pilot is rolled out to a full scale solution, they're expecting 3.2 million transactions per", "start_timestamp": "00:38:38", "end_timestamp": "00:39:14", "start_second": 2318, "end_second": 2354, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=2318s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "day. Now how long would that take to process on various blockchains, with the scale limitations that they have? Well, these are the rough numbers. BTC would take 44 days to process just one day's worth of transactions. Ethereum is quite a bit better, but they would still need a time machine; it's a little over two days. Right now Bitcoin SV can chew through that many in 32 minutes, and with Teranode, of course, it will eat them all up in a single block. So this last section is probably the
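Taking the quoted processing times at face value, you can back out the sustained transaction rate each figure implies; a back-of-envelope sketch in which every input is a figure quoted in the talk, treated as illustrative rather than as a benchmark:

```python
DAILY_TX = 3_200_000  # projected transactions per day at full rollout (from the talk)

# Time to process one day's worth of transactions, as quoted on the slide.
quoted_seconds = {
    "BTC": 44 * 86_400,        # "44 days"
    "Ethereum": 2 * 86_400,    # "a little over two days" (rounded down here)
    "Bitcoin SV": 32 * 60,     # "32 minutes"
}

# Implied sustained throughput in transactions per second.
implied_tps = {chain: DAILY_TX / secs for chain, secs in quoted_seconds.items()}
```

For example, clearing 3.2 million transactions in 32 minutes implies roughly 1,667 transactions per second, while the quoted 44-day figure implies a rate below one per second.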
part that's closest to", "start_timestamp": "00:39:14", "end_timestamp": "00:39:50", "start_second": 2354, "end_second": 2390, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=2354s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "my heart, because I have to admit I'm a bit of a Script nerd. Script is the programming language built into Bitcoin. For anyone who's familiar with Ethereum, you're probably familiar with Solidity, which is something that Ethereum did incredibly well; it's a really easy to use programming language. But when you think about programming on Bitcoin directly in Script, you probably think of something ugly and horrendous, a little bit like this. Script is basically an assembly type language or a bytecode type language, and in fact", "start_timestamp": "00:39:50", "end_timestamp": "00:40:24", "start_second": 2390, "end_second": 2424, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=2390s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "Ethereum has its own parallel to this: the Ethereum Virtual Machine runs off bytecode, which is very similar to Java bytecode. And in fact for a lot of years you couldn't even make scripts like this, because they were simply too long; there were limitations imposed by the Bitcoin developers on what you were allowed to put into scripts, so all of the potential that's available here was not available, not able to be used except maybe on a testnet at home. So this is what Ethereum's Solidity looks like, but", "start_timestamp": "00:40:24", "end_timestamp": "00:41:03", "start_second": 2424, "end_second": 2463, "url":
"https://www.youtube.com/watch?v=StnbUNn92vA&t=2424s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "but it gets a little bit better than this of course the the potential for this is only just beginning to be unlocked something that we'll be talking about extensively I think in the Devcon will be the new IDE contract programming language s script there's others as well that's it there on the right and I put them side by side so you can compare and see how how similar it actually is to a theory of solidity this has been one of the most important things I think the Bitcoin sv2 address was the simple fact that the developer experience on on", "start_timestamp": "00:41:03", "end_timestamp": "00:41:37", "start_second": 2463, "end_second": 2497, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=2463s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "aetherium was was so much better than it ever was on Bitcoin and they probably won't ever be any better on on BTC but on Bitcoin SV we've done something about that so a script is available is a BS code extension which is a probably one of the most popular IDs in the world right now and it's trivially easy to use and here's just an example of their documentation that shows how to do a rabbin signature which is one mechanism of digitally signing data in a transaction any arbitrary data ana script is not the only one there's also", "start_timestamp": "00:41:37", "end_timestamp": "00:42:15", "start_second": 2497, "end_second": 2535, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=2497s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy 
Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "the run platform there's the gear SV platform and at the upcoming Def Con I'll actually be previewing a little pet project of my own which I I expect would be added if anyone does or someone a slide similar to this in the future it might might be sitting there alongside it so I'm looking forward to sharing that so in terms of developer tools I've popped up this old simplified payment verification slide again that you would remember from near the beginning of this it's it's relevant because I just want to show you one of", "start_timestamp": "00:42:15", "end_timestamp": "00:42:51", "start_second": 2535, "end_second": 2571, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=2535s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "the other tools that are available right now on BTC it's not that easy for a developer to even transmit transactions you need to implement the entire pierre-pierre protocol and you're missing a lot of potential functionality that is really important for example you can't find out what what transaction fee you need to attach to make sure your transaction will get accepted well we solved that problem with Bitcoin sv with an interface the likes of which almost every developer in the world has probably used and implemented before a", "start_timestamp": "00:42:51", "end_timestamp": "00:43:26", "start_second": 2571, "end_second": 2606, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=2571s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "simple rest interface and it's a 
very simple API, but it's run by miners, and it allows you, as a user who wants to make a payment or a developer who wants to build an application, to ask a miner directly: what exactly is the fee I need? Can I send you a transaction? And can you tell me whether you've accepted the transaction or not? It seems like such a simple thing, but it's just not possible on other blockchains, and this is one of the first things that", "start_timestamp": "00:43:26", "end_timestamp": "00:44:00", "start_second": 2606, "end_second": 2640, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=2606s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "we wanted to address with Bitcoin SV. So of course there's a world of developer resources, and again that will be covered in more detail in the DevCon, but there are just a couple I want to point out to you. In fact I saw this in the chat earlier on: someone mentioned the wiki, the Bitcoin SV wiki. It's a great resource for learning about Bitcoin from the ground up. It does have a bit of a technical focus, of course, but it also tries on some of those pages to really explain the general concepts of Bitcoin. And the", "start_timestamp": "00:44:00", "end_timestamp": "00:44:36", "start_second": 2640, "end_second": 2676, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=2640s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "other resource I really like is bsvdevs, which points to a lot of developer tools and various other bits of tooling, as well as just some fun applications, so I recommend
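The fee/submit/status exchange described above is the Merchant API. The sketch below only constructs the three request shapes rather than calling a live miner; the endpoint paths and the `rawtx` payload key are assumptions modeled on the described behavior, and you would issue them with any HTTP client:

```python
# base_url would be a miner's Merchant API endpoint, e.g.
# "https://mapi.example-miner.com" (hypothetical). Each helper returns
# (method, url, json_body) describing one of the three questions from the talk.

def fee_quote_request(base_url: str):
    # "What exactly is the fee I need?"
    return ("GET", f"{base_url}/mapi/feeQuote", None)

def submit_tx_request(base_url: str, raw_tx_hex: str):
    # "Can I send you a transaction?"
    return ("POST", f"{base_url}/mapi/tx", {"rawtx": raw_tx_hex})

def tx_status_request(base_url: str, txid: str):
    # "Can you tell me whether you've accepted the transaction or not?"
    return ("GET", f"{base_url}/mapi/tx/{txid}", None)
```

Because the shop talks to the miner directly over plain HTTP, the payment flow needs no peer-to-peer gossip at all, which is the point the slide makes.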
checking that out and of course part of the point here is to speak to developers it would be remiss of me as nChain's CTO not to throw in a plug here and say that we are growing we're by no means the only ones getting involved in Bitcoin SV development it is a really interesting potential career path and we'd be interested in talking", "start_timestamp": "00:44:36", "end_timestamp": "00:45:15", "start_second": 2676, "end_second": 2715, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=2676s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "to people who are more than a few steps along the way through their learning experience there are many many other companies who are operating on Bitcoin SV right now and I think there will be many many career opportunities going forward so I'll turn back over to Jimmy to wrap up with some questions [Music] [Music] Jimmy I can't hear you were you muted by chance [Music] [Music] hi all sorry I was trying to sort out things let me just wrap up before we take questions by telling you about a few things that are coming up soon I", "start_timestamp": "00:45:15", "end_timestamp": "00:48:02", "start_second": 2715, "end_second": 2882, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=2715s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "want to mention we have a hackathon that's happening right now that goes for two months the competition period just started about a week ago and it ends August 18th it's our third Bitcoin SV hackathon so you can enter as an individual or as a team a lot of people like to participate as teams there's still plenty of time to sign up you have
the opportunity to win up to a prize pool of a hundred thousand US dollars in Bitcoin SV for a first second and third prize the theme is to build applications that connect the world to one global blockchain because", "start_timestamp": "00:48:02", "end_timestamp": "00:48:41", "start_second": 2882, "end_second": 2921, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=2882s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "we are big believers in the fact that the Bitcoin blockchain in Bitcoin SV can support all of the world's data so to find out more go to bsvhackathon.net and that is one of the great initiatives from Bitcoin Association in addition we have a developer conference coming up July 18th and 19th that we're partnering on with WeAreDevelopers it will be two days with a lot of information content and advice from the nChain team as well as many companies across the world to help you learn how to build on Bitcoin SV you can register", "start_timestamp": "00:48:41", "end_timestamp": "00:49:16", "start_second": 2921, "end_second": 2956, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=2921s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "on the WeAreDevelopers.com site as well as find out more information at bsvdevcon.net that's July 18th and 19th so it's coming up soon and it's free to register so we hope lots of you take advantage of that opportunity to learn more about Bitcoin SV and those are just some of the interesting initiatives we have Bitcoin Association is the global industry organization that advances the business of Bitcoin SV if you want to get involved and learn
more about what we're doing visit our website at bitcoinassociation.net as an", "start_timestamp": "00:49:16", "end_timestamp": "00:49:54", "start_second": 2956, "end_second": 2994, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=2956s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "individual developer you can apply to become an affiliate of the association our memberships are really focused on businesses but if you're not involved with a business that is a member become an affiliate of our association and sign up and get lots of information about what we're doing in Bitcoin SV and we're really excited to bring Bitcoin SV to the world we think it is going to be the global blockchain for enterprises and developers with big power as Steve said big blocks big transactions big script and big capabilities for everyone with", "start_timestamp": "00:49:54", "end_timestamp": "00:50:26", "start_second": 2994, "end_second": 3026, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=2994s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "that I think we have some time for questions so Steve one of the questions is is there a specific language you have to use for the DevCon a specific language you have to use well no I mean the DevCon itself is more of a learning event I suppose but there'll be a number of tools which we'll be showing off some of which are actually languages themselves so the aim of Bitcoin is not to be language specific for example sCrypt is one particular language that you can use for scripting but of course you don't need to actually", "start_timestamp": "00:50:26", "end_timestamp": 
"00:51:17", "start_second": 3026, "end_second": 3077, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=3026s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "get down into the weeds of Bitcoin scripts to make use of a Bitcoin there are plenty of tool sets available in all sorts of languages the JavaScript go - and rats Darla any anything you can imagine probably there are tools available so use whatever language you're comfortable with problem with the fluctuating price for B or C or B SVR other other blockchain tokens for some use cases yeah it is it is problematic and but well I'll get to addressing those problems in a moment but for many use cases it's not at all", "start_timestamp": "00:51:17", "end_timestamp": "00:52:08", "start_second": 3077, "end_second": 3128, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=3077s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "to use Bitcoin as B as a utility ledger often what you're not transmitting is large amounts of Bitcoin back and forth you're transmitting data and in fact the value carried in the transaction is the data itself not so not the amount of Bitcoin that you're transmitting so in those sorts of cases where you might only have a few satoshis attached to a transaction dust amounts then these fluctuations aren't aren't really going to matter longer-term now of course payment use cases and and anything whether we're the Satoshi value of the", "start_timestamp": "00:52:08", "end_timestamp": "00:52:44", "start_second": 3128, "end_second": 3164, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=3128s", "title": "Bitcoin SV: The Massively Scaled Blockchain 
to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "transaction is significant of course it does matter but I think that this is simply going to be something that will settle over time as usage picks up it's the usage and the velocity of money in any currency that dictates how volatile currencies tend to be the US dollar for example is a pretty stable currency because of the sheer amount of trade that goes on in it every day billions and trillions of dollars every day so it's not so much even the dollar value that's moving it's just the amount that it's moving around and how", "start_timestamp": "00:52:44", "end_timestamp": "00:53:19", "start_second": 3164, "end_second": 3199, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=3164s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "much people need to have and acquire to use it for something so that's a longer-term problem but I'm confident that it will be solved and in the meantime there are many other ways to use Bitcoin SV question are there any plans to remove the 25 transaction limit well it's not the 25 transaction limit anymore it's the 50 transaction limit the answer is yes of course there is this is probably the most asked question that I get the unfortunate part of the answer is it's a lot easier said than done there's layers and layers of", "start_timestamp": "00:53:19", "end_timestamp": "00:54:03", "start_second": 3199, "end_second": 3243, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=3199s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": 
"StnbUNn92vA", "text": "of legacy code that had been added by BTC developers over the last 10 years that we're slowly gun picking we're getting very close though by the end of the year I will get my my life and my job on it that we will have it fixed by the end of the year there's a good chance that we'll have that limit substantially increased within the next few couple of months is there a place where you can get information about bsv scripting how it works how to program it and capabilities I would say probably one of your best resources is forgive", "start_timestamp": "00:54:03", "end_timestamp": "00:54:40", "start_second": 3243, "end_second": 3280, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=3243s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "the shameless plug the upcoming Def Con because there's at least two two presentations that are going to be specifically about Bitcoin script in different ways that you can make use of it now while there's a pause there was a comment earlier on somebody pointed out that I had a small little network on my wall I just thought I might show this because it's taken me almost a year to actually get funny in them I don't know if you can quite see see that there's a little bit of a hint as to what this actually is so this is actually an image", "start_timestamp": "00:54:40", "end_timestamp": "00:55:21", "start_second": 3280, "end_second": 3321, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=3280s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "of the Bitcoin SB network seven days after the hash ball in November 2018 it's a limited edition because one of our senior 
architects who collected the data and built this diagram gave it to me and then I made him delete the file so that it would have some rarity value going forward in the future can we get dust-limited transaction outputs reduced or eliminated Steve you certainly can and it's simply a question of when or if we are actively working on this by way of working on a feature that sort of goes hand in hand", "start_timestamp": "00:55:21", "end_timestamp": "00:56:07", "start_second": 3321, "end_second": 3367, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=3321s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "in this which is enabling what I call consolidation transactions to be free that is where you have a large number of inputs spending into a small number of outputs by doing that you're actually reducing the size of the UTXO set which is a net benefit to miners so it makes sense to give them the option of that being able to be done for free and that opens up a world of different use cases very very micro payments you could offer a service where someone, as a sort of value-add to an existing transaction, just", "start_timestamp": "00:56:07", "end_timestamp": "00:56:43", "start_second": 3367, "end_second": 3403, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=3367s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "adds even a one-satoshi output to you as a third party service provider and now that's not very useful on its own but if you sit there and collect them and then once you've got a thousand or whatever then you just make one of these free consolidation transactions and put
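The consolidation arithmetic being described, many inputs spending into few outputs so the UTXO set shrinks, is easy to make concrete. The per-input and per-output byte counts below are the usual rough P2PKH estimates (about 148 bytes per input, 34 per output, 10 of overhead), used here purely for illustration.

```python
"""Rough size and UTXO arithmetic for consolidation transactions.
Byte constants are standard P2PKH approximations, for illustration only."""

P2PKH_INPUT = 148   # outpoint + signature script + sequence (approx.)
P2PKH_OUTPUT = 34   # value + pubkey-hash locking script (approx.)
OVERHEAD = 10       # version + locktime + varint counts (approx.)


def tx_size(n_inputs, n_outputs):
    """Approximate serialized size in bytes of an all-P2PKH transaction."""
    return OVERHEAD + P2PKH_INPUT * n_inputs + P2PKH_OUTPUT * n_outputs


def utxo_delta(n_inputs, n_outputs):
    """Net change to the UTXO set; negative means the set shrinks."""
    return n_outputs - n_inputs


# Sweeping 1000 tiny outputs into one: a ~148 KB transaction that
# removes 999 entries from the UTXO set, the net benefit to miners
# that motivates making such transactions free.
```

For example, `tx_size(1000, 1)` comes out to 148,044 bytes and `utxo_delta(1000, 1)` to -999, while the familiar one-input two-output payment is about 226 bytes.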
them together into a single amount that's actually useful again how to build a simple web app on top of the blockchain in three seconds how long is a piece of string a really good way to do this is to go join a hackathon or something like that there's a lot of", "start_timestamp": "00:56:43", "end_timestamp": "00:57:21", "start_second": 3403, "end_second": 3441, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=3403s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "helpful people around that will help you one of the quickest onboarding libraries I suppose that I know of would be Money Button there's others that are doing other things but it's one that I know well so go take a look at their API and I know people have built an application in less than a day using that will there be systematic courses for beginners in the future let me address that one Bitcoin Association one of our key initiatives for this year and next year is really focusing a lot on developer education and training", "start_timestamp": "00:57:21", "end_timestamp": "00:57:56", "start_second": 3441, "end_second": 3476, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=3441s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "initiatives so ladling Wilson is our technical program manager overseeing this the answer is yes we will be launching later this year a variety of online training programs one in conjunction with the technical university that we're partnering with and others put up by Bitcoin Association with a goal of providing both basic Bitcoin training for developers and then moving to more intermediate and advanced levels 
allowing developers to complete online courses through an online education curriculum take assessment tests and", "start_timestamp": "00:57:56", "end_timestamp": "00:58:28", "start_second": 3476, "end_second": 3508, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=3476s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "eventually we want a certification program to be able to certify developers at different levels upon the completion of our online education so look for more information about that later but it'll be coming and launching later this year Steve do you have any comments I think he means the Maxthon browser I think that's what that question means I have not actually had a chance to look at the Maxthon browser myself in depth and in detail but one thing that I will say about it is that it is at least", "start_timestamp": "00:58:28", "end_timestamp": "00:59:05", "start_second": 3508, "end_second": 3545, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=3508s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "a year or two ahead of when I expected it to be not Maxthon specifically but a browser that was geared toward Metanet applications I thought this was going to be something that would take quite a long time for someone to build so the approach that the team has taken I personally think is really impressive I look forward to seeing how other people start integrating it and making it part of the daily Bitcoin kind of user experience yeah I encourage everyone to download the Maxthon 6 beta test it out check it out", "start_timestamp": "00:59:05", 
"end_timestamp": "00:59:42", "start_second": 3545, "end_second": 3582, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=3545s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "play around with it I've started doing myself and a Jeff Kenya's team are definitely open to any feedback comments it's just a start and they're willing to take all kinds of suggestions for for that related to maximum Jeff and his team are also launching something called NB domain which is creating a new form of domain system for the watching like we have a you know internet URL the main system he's leading a initiative to create a new domain system for the watching world so I think that's really fascinating to watch as well Steve will", "start_timestamp": "00:59:42", "end_timestamp": "01:00:16", "start_second": 3582, "end_second": 3616, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=3582s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "be a better alternative for all op return or it's the final solution will there be a better alternative certainly not the final solution there's there's plenty of other solutions out there right now one thing that I'm kind of hoping people will start to explore in the near future is embedding data in in spendable outputs for example some people are familiar with some basic scripting the typical paid a filmic key hash type of script all you need to do is put an opportunity end of that script and then you can put more data on the end of it", "start_timestamp": "01:00:16", "end_timestamp": "01:00:53", "start_second": 3616, "end_second": 3653, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=3616s", 
"title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "or before the script you can push data and then drop it again off the stack straight away is another way to embed data so that solves one problem that a lot of people I think have bitten there's been a lot of debate hot topic on Twitter lately what happens if minors don't keep all of that well as one solution is put it in a spendable output and they have have no option they'll probably charge you a higher fee for it that but it shows that there's many choices and many ways to do things in Bitcoin what I can say is up faults I've", "start_timestamp": "01:00:53", "end_timestamp": "01:01:28", "start_second": 3653, "end_second": 3688, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=3653s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "returned point never go away it's possible now we're not going to be changing the protocol again so there's nothing to stop you from using it that way if that's how you want to what are your favorite BSB projects jan what a tough question Rhian it's um it's like asking someone a parent to choose among their children what are our favorites there are so many great ones out there so I I personally can't name what obviously I think things in each or Davis pilot project to use the blockchain for opioid pharmaceutical", "start_timestamp": "01:01:28", "end_timestamp": "01:02:02", "start_second": 3688, "end_second": 3722, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=3688s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": 
"https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "prescription management and creating a bigger mission the global electronic health record so that consumers patients can be consumers of the health care data and monetize their own healthcare data I think that is fantastic because it demonstrates really the much bigger control and vision of what a global blockchain can be beyond just a payment system that's just one of my many many favorites and the other answer is anything Steve shadows does Steve you have any favorite you and I mentioned I have any favorites I mean there's a lot", "start_timestamp": "01:02:02", "end_timestamp": "01:02:35", "start_second": 3722, "end_second": 3755, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=3722s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "that I actually think a really impressive eschatos is is one that that's really not my socks off so much so that inspired me to start my own my own weekend project doing something kind of similar bit ping formerly known as Optimus v not a fan of the new name that I understand that I was just asking for a trademark or on the track I think that's a really clever way to to make use of Bitcoin s being you know coder I mean I was on the judging panel for the two hackathons and those two one you want a couple of them", "start_timestamp": "01:02:35", "end_timestamp": "01:03:12", "start_second": 3755, "end_second": 3792, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=3755s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "so so there's people who don't know what bit ping is it's a crowdsource and 
essentially a network intelligence service for monitoring website up and down time as well as other services that uses user mobile devices and devices around the world to provide that information managed on the blockchain yeah do you plan to support PayID if I recall rightly I think this is an identity mechanism that was recently released and available it's meant to be sort of payment mechanism agnostic it addresses BTC", "start_timestamp": "01:03:12", "end_timestamp": "01:03:58", "start_second": 3792, "end_second": 3838, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=3792s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "addresses ACH transfers and a whole bunch of other things do I plan to support it well I don't have any business that personally needs it but do I think that a standard that promotes interoperability between payment systems is a good idea for BSV to support yes yes I do interoperability will help people to onboard onto BSV and if a bunch of user facing applications are implementing this standard then it makes it much easier for those applications to integrate with BSV in the future so yeah on that note I", "start_timestamp": "01:03:58", "end_timestamp": "01:04:35", "start_second": 3838, "end_second": 3875, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=3838s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "take this moment to mention for our audience that one of the initiatives Bitcoin Association has launched and recently announced the initial committee members for it is a technical standards committee for Bitcoin SV we know that technology develops much 
more quickly and grows if there are interoperable standards so Steve is chairing the technical standards committee to evaluate just this exact kind of topic in many areas what standards can be recommended so that developers and businesses who are creating applications", "start_timestamp": "01:04:35", "end_timestamp": "01:05:07", "start_second": 3875, "end_second": 3907, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=3875s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "in the Bitcoin SV world can create things that communicate interact and interoperate with each other we think that's really vital it's one of the things we're doing to lead the initialization of the coin where would be the best place that I could talk to others about coming up with ideas for the hackathon Steve there's a lot of different forums where our application developers hang out of course the hackathon itself has its own Discord channels where you can probably reach out to other devs and I", "start_timestamp": "01:05:07", "end_timestamp": "01:05:42", "start_second": 3907, "end_second": 3942, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=3907s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "think that the platform itself has facilities to help you with finding teams but outside of that there's a few Telegram groups there's the Atlantis Slack that's run by _unwriter which is a common hangout for developers I mean I don't have a huge amount of time to sort of hang around in all of these various places so there may well be others that I don't know about but here is a good place to start 
go to any one of those and ask other people generally people are pretty friendly and willing", "start_timestamp": "01:05:42", "end_timestamp": "01:06:19", "start_second": 3942, "end_second": 3979, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=3942s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "to help people get on board Thomas asks do you always have to compare hash outputs to create a locking script based on OP_PUSH_TX or is it possible to unlock without knowing in advance exactly what the output will look like I spotted this question and I'm still trying to work it out I mean I know that you're talking about OP_PUSH_TX and hashed outputs as a part of the sighash algorithm so this is a very very technical question but I'm not quite sure what the specific use case is that you're thinking of", "start_timestamp": "01:06:19", "end_timestamp": "01:07:04", "start_second": 3979, "end_second": 4024, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=3979s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "OP_PUSH_TX is a technique that's quite close to my heart because, forgive the pun, I've been pushing it for the last few years and it took a while for someone to grab onto it and actually try to implement it so I might take that question on notice so that I can actually give you a sensible answer rather than trying to guess what the question means and getting it wrong oh wow Henry what a question how long Shadders until BSV transactions per second rivals that of MasterCard and Visa Steve give us a", "start_timestamp": "01:07:04", "end_timestamp": 
"01:07:41", "start_second": 4024, "end_second": 4061, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=4024s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "definite time well I mean in terms of capacity we're pretty close the the long-term average number that has been cited for the whole lifetime in Bitcoin is 1700 transactions per second and we know that in in in short bursts of a few hours that the the SV network is already capable of handling that their peak capacity is quite a bit bigger but that's that's about 50,000 that's probably going to take a leap beyond the Bitcoin SV note software and into tera note their Bitcoin is we may may may be able to achieve that sort of unsure but", "start_timestamp": "01:07:41", "end_timestamp": "01:08:19", "start_second": 4061, "end_second": 4099, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=4061s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "I'm personally working on tera known proof of prototyping myself it's one of the few projects that that I dust off the old old keyboard and and get in and code myself but I generally don't have a lot of time for coding because I'm too busy organizing things and then chained that we're making really good progress with that and I'm looking forward to doing some demonstrations of it as soon as as soon as I can I'm gonna get myself into trouble if I name any kind of a date because also get held to it but I would like to be demonstrating it before", "start_timestamp": "01:08:19", "end_timestamp": "01:08:55", "start_second": 4099, "end_second": 4135, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=4099s", "title": "Bitcoin SV: 
The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "before the end of this year and for audience members who don't know what Teranode is it is an enterprise class version of the Bitcoin SV node software that is designed to support as the name suggests terabyte size blocks, you know, a million megabytes plus, for massive massive scale Steve and team are leading an effort to reconstruct the Bitcoin node software from the ground up using a microservices architectural approach to create far more efficiencies well look for more information about that coming in the near future", "start_timestamp": "01:08:55", "end_timestamp": "01:09:31", "start_second": 4135, "end_second": 4171, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=4135s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "when will the courses for Bitcoin engineers start I don't have an exact date but I know we're hoping to launch our partnership with a tech university for a massive open online course sometime in the fall of this year around October hopefully and then our Bitcoin Association online education curriculum we're hoping to get some of that in play in the year do you have a timetable for changing the DAA the difficulty adjustment algorithm Tim Donovan gosh all these probing questions Steve what do you have to say about that I get asked", "start_timestamp": "01:09:31", "end_timestamp": "01:10:16", "start_second": 4171, "end_second": 4216, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=4171s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": 
"https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "this one a lot and honestly scratch my head sometimes wondering why people are so interested in it because it doesn't really impact the day-to-day user experience all that much the one thing that is notable about Bitcoin SV is it actually really doesn't matter if we go for an hour or even two hours without a block because when one eventually gets found everything just gets cleared out in one hit but the answer to that question is it's dependent on transaction volume because the old 2016 block difficulty", "start_timestamp": "01:10:16", "end_timestamp": "01:10:51", "start_second": 4216, "end_second": 4251, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=4216s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "adjustment algorithm creates a vulnerability for a low hash rate chain which we call chain death attack whereby someone comes in with large amounts of hash rate from somewhere else pushes the difficulty way way way up and then goes away and if you're in a situation where it takes 24 hours to find a block well that two-week period is actually measured in blocks so it becomes 2016 times 24 hours and that's not an ideal situation to be in but when you've got large amounts of transaction volume coming in and fee revenue it changes", "start_timestamp": "01:10:51", "end_timestamp": "01:11:26", "start_second": 4251, "end_second": 4286, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=4251s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "that dynamic completely so we've been doing some studies 
internally on what different levels could create what sorts of scenarios in terms of if someone tried to pull off that sort of attack just to determine where the kind of safe level is and I've got some answers on that there are security concerns which is why I'm not just blurting everything out that I know about right now and I think carefully about how to approach this from a public discussion point of view because it does end in public", "start_timestamp": "01:11:26", "end_timestamp": "01:12:00", "start_second": 4286, "end_second": 4320, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=4286s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "discussion but I think the short answer is it's definitely not going to be this year my guess would be it would be probably some time probably not before the middle of next year and maybe you know towards the end of the year but as soon as we actually have enough data to be able to say yeah it's the right time to do it or at least plan it and start setting a date then we're talking about it publicly and getting as much public feedback as possible we are coming to the end of this time box I'll answer a last question or one", "start_timestamp": "01:12:00", "end_timestamp": "01:12:49", "start_second": 4320, "end_second": 4369, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=4320s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "that question then everything which comes after that is something I will forward to you and see that we get answers published so everybody can see that so the last question is what do you see as the next steps 
required for the wider use of micropayments hmm Steve you want to tackle that sure so a lot of the building blocks are either already in place or they're just sort of coming into place and a lot of this comes down to that animation that I showed you near the beginning of my presentation which", "start_timestamp": "01:12:49", "end_timestamp": "01:13:33", "start_second": 4369, "end_second": 4413, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=4369s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "governs the flow of a payment interaction between two parties so there's multiple parts of that there's how do I find the person to connect to them directly which Paymail is one of the potential solutions to that there's how do I get the transaction directly to the miner promptly and find out that it's definitely being accepted the merchant API is a component to that before all of that even happens though there's the negotiation between the merchant that's selling and the customer", "start_timestamp": "01:13:33", "end_timestamp": "01:14:08", "start_second": 4413, "end_second": 4448, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=4413s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "and that's part of BIP 270 so all of these components are coming together and of course they need to be implemented not just by miners or any particular service but all of the wallets as well so they can operate happily together so completing the work on all of those steps I think basically defines what that pathway is how long that will take I'm not sure because it requires work to be 
done by a bunch of people other than me who I can't compel but there seems to be a pretty strong appetite amongst many of the", "start_timestamp": "01:14:08", "end_timestamp": "01:14:40", "start_second": 4448, "end_second": 4480, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=4448s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "wallet applications to get on board with this and a lot of work's already been done so I'm pretty happy with progress that's the technical behind-the-scenes answer I think from a practical industry perspective the answer really is developers like all of you on this program who are listening designing creating conceiving great applications that drive people to want to have some functionality that uses micropayments something like Codugh where developers can make money through an API marketplace so", "start_timestamp": "01:14:40", "end_timestamp": "01:15:15", "start_second": 4480, "end_second": 4515, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=4480s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "StnbUNn92vA", "text": "it's the ingenuity and the creativity of developers creating really powerful applications that have real utility things people want to use whether they even know it runs on Bitcoin or not that leads to real usage so we encourage people to just get building that's what we really believe in in the Bitcoin SV world building a blockchain and a digital currency with real value as we build complete useful applications that will make people want to use them and that will drive micropayments and Deitrick cool so thank you Jimmy thank", "start_timestamp": "01:15:15", 
"end_timestamp": "01:15:53", "start_second": 4515, "end_second": 4553, "url": "https://www.youtube.com/watch?v=StnbUNn92vA&t=4515s", "title": "Bitcoin SV: The Massively Scaled Blockchain to Meet Developer Needs\u2014Jimmy Nguyen & Steve Shadders", "thumbnail": "https://i.ytimg.com/vi/StnbUNn92vA/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "good evening everybody my name is Rose I'm with the Academy of Science St. Louis and we are one of the sponsors of tonight's talk along with the St. Louis Zoo and we're very pleased to be partnering with the zoo to bring you tonight's science seminar before we get started this evening I'd like to tell you a little bit about the Academy of Science and who we are we're an independent science organization we're supported entirely through community contributions and we've been around for a very long time since 1856 it's our", "start_timestamp": "00:00:00", "end_timestamp": "00:00:32", "start_second": 0, "end_second": 32, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=0s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "mission to promote the public understanding of science and inspire the next generation at venues throughout the region we do that by connecting science and the community through free and very low-cost public talks seminars and workshops and trips and tours that celebrate science at venues throughout the metropolitan St. 
Louis region and surrounding counties these trips tours and talks feature scientists and engineering professionals of both national and international renown in addition to advancing the public", "start_timestamp": "00:00:32", "end_timestamp": "00:01:02", "start_second": 32, "end_second": 62, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=32s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "understanding of science excuse me as I had said it's our mission as well to inspire the next generation we do that through a number of free and low-cost opportunities that are expressly for teens such as the teen science cafes Youth Leadership Council and Junior Academy of Science the Junior Academy is a pre-professional STEM membership organization for students in grades 6 through 12 that offers hands-on opportunities in science engineering and medicine teens can attend unique behind-the-scenes explorations of", "start_timestamp": "00:01:02", "end_timestamp": "00:01:34", "start_second": 62, "end_second": 94, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=62s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "leading engineering technology and science labs of our region you can access university libraries participate in field opportunities and challenging and engaging science competitions for a full range of academic levels and Junior Academy members make real-world connections meeting top STEM professionals so if you know a student or students or you are a student in grades 6 through 12 with a love for science memberships in the Junior Academy make great holiday gifts you can find more information on the Academy and", "start_timestamp": "00:01:34", 
"end_timestamp": "00:02:04", "start_second": 94, "end_second": 124, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=94s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "all of our community wide events by visiting our website at academy of science stl dot org you may also visit us on Facebook or Twitter or before you leave tonight pick up some of the literature that is on the table just outside the auditorium I do want to mention a couple of upcoming academy events you might have an interest in attending on Tuesday evening December 9 at the Missouri History Museum retired University of Missouri-Columbia associate professor of fisheries and wildlife sciences and the Nature Conservancy's Great Rivers", "start_timestamp": "00:02:04", "end_timestamp": "00:02:34", "start_second": 124, "end_second": 154, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=124s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "partnership science advisor Dr. David Galat talks about an ecological history of the Missouri River and 21st century challenges in the Big Muddy this event is free and open to the public you do not need to register to attend and then on January 28th on the Washington University Medical School campus and as part of our teen science cafe series radiology and biomedical engineering professor Dr. 
Susan Lackey talks about nanotechnology and the science of the small teen science cafes are open to all students in grades 6 through 12 they are", "start_timestamp": "00:02:34", "end_timestamp": "00:03:06", "start_second": 154, "end_second": 186, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=154s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "interactive they are free and there is food but you do need to register to attend and you can do so by logging onto our website again at academy of science stl org you can find even more science opportunities talks and tours on our website or listed on the event fliers and Academy literature that is available for you to take with you before you leave this evening if you'd like to receive a notification of upcoming academy public lectures and events there are some enews sign-up sheets that will make their way", "start_timestamp": "00:03:06", "end_timestamp": "00:03:33", "start_second": 186, "end_second": 213, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=186s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "around the audience if you are a student and you need to verify your attendance at tonight's talk there will be some verification cards that are available following tonight's Q&A also out at the table outside the auditorium please turn off cell phone ringers or any other electronic devices that might make noise during the program with all that said I'd like to introduce this evening's speaker Dr. Lihong Wang Dr. 
Wang earned his BS and MS in optics from Huazhong University of Science and Technology in Wuhan China and his PhD in", "start_timestamp": "00:03:33", "end_timestamp": "00:04:07", "start_second": 213, "end_second": 247, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=213s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "electrical engineering from Rice University he is a fellow of the Academy of Science St. Louis and a recipient of the Academy's 2014 James B. Eads outstanding St. Louis scientist award recognized for his seminal work in photoacoustic tomography and biophotonics he is currently the Gene K. Beare Distinguished Professor of biomedical engineering at Washington University in St. Louis and his book Biomedical Optics Principles and Imaging was one of the first textbooks in the field garnering the 2010 Joseph W Goodman Book Writing Award Dr.", "start_timestamp": "00:04:07", "end_timestamp": "00:04:41", "start_second": 247, "end_second": 281, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=247s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "Wang is also the co-author of a book on polarization and editor of the first book on photoacoustic tomography his laboratory invented or discovered get ready functional photoacoustic tomography 3d photoacoustic microscopy the photoacoustic Doppler effect photoacoustic reporter gene imaging focused scanning microwave induced thermoacoustic tomography universal photoacoustic reconstruction algorithm frequency swept ultrasound modulated optical tomography time reversed ultrasonically encoded optical focusing", "start_timestamp": "00:04:41", "end_timestamp": "00:05:15", "start_second": 281, 
"end_second": 315, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=281s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "sonoluminescence tomography Mueller matrix OCT optical coherence computed tomography and oblique incidence reflectometry his Monte Carlo model of photon transport in scattering media is used worldwide he has published over 380 peer-reviewed articles and delivered 382 keynote plenary or invited talks he is prolific and well-known across the globe and we are very pleased to have him here with us tonight he's here with us to talk about photoacoustics and other optical breakthroughs in biomedical imaging on behalf of the", "start_timestamp": "00:05:15", "end_timestamp": "00:05:52", "start_second": 315, "end_second": 352, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=315s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "Academy of Science St. Louis and the St. Louis Zoo won't you please join me in welcoming Dr. 
Lihong Wang well thank you Rose for the very kind introduction thank you all for coming here tonight I'll be talking about several technologies we've been working on in our lab our goal is to image biological tissue using optical contrast starting from the motivations and challenges in our field in general I'll be talking about two major incarnations of photoacoustic tomography namely photoacoustic computed tomography and photoacoustic microscopy then I'll talk", "start_timestamp": "00:05:52", "end_timestamp": "00:06:49", "start_second": 352, "end_second": 409, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=352s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "about some of the newer technologies we've been working on time reversal and compressed ultra-fast photography the very first question is why do we bother with light for imaging purposes you know when you walk into a hospital you go to the radiology department you'll see a lot of imaging modalities already MRI ultrasound and x-ray what not now first of all it's very safe to use light because we're receiving light or optical photons all the time every day so we're dealing with what's called non-ionizing radiation the radiation", "start_timestamp": "00:06:49", "end_timestamp": "00:07:30", "start_second": 409, "end_second": 450, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=409s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "involves photons of very tiny energy you're talking about electron volts as a unit for photon energy so when you're talking about a couple of eV's then the photons are not energetic enough to ionize molecules very safe but if we use x-ray photons that are so energetic 
thousands of eV's they'll knock off electrons out of molecules to ionize tissue and cause problems such as DNA damage and so the ionizing radiation part of the EM or electromagnetic spectrum causes the problems right so we're working in this region which is very safe more", "start_timestamp": "00:07:30", "end_timestamp": "00:08:15", "start_second": 450, "end_second": 495, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=450s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "importantly from a very fundamental perspective which is the physics that tells us light occupies a very special region of the EM spectrum this is the only part of the EM spectrum that allows us to probe molecules directly so we have access to the molecular information we know the importance of molecules in biomedicine right so we have to use light and because light can probe molecules we can apply this to biomolecules all the biomolecules you can think of right think of the four major classes of biomolecules some are endogenous or", "start_timestamp": "00:08:15", "end_timestamp": "00:09:00", "start_second": 495, "end_second": 540, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=495s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "
x-ray may not be able to tell if a piss piece of tissue is alive or dead right so we want to image not", "start_timestamp": "00:09:00", "end_timestamp": "00:09:45", "start_second": 540, "end_second": 585, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=540s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "just the structure of the tissue but also the physiological function and photo acoustics or optical imaging in general can provide that type of information in vivo metabolic imaging so this is someone like PET imaging positron emission tomography you know we're looking at metabolic parameters right so I enumerated a couple of examples such as the metabolic rate of oxygen glucose consumption rate we all know that these are the two major mechanisms of metabolism your body right in vivo molecular imaging so we're able", "start_timestamp": "00:09:45", "end_timestamp": "00:10:30", "start_second": 585, "end_second": 630, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=585s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "to image all sorts of biomarkers which can potentially be hallmarks of cancer for example and we can image reporter genes that allows us to track gene expressions we can even provide label-free in vivo histological imaging as we know standard histology is invasive you know when you go to the hospital if physicians who we need to do a histology on you that means they have to exercise tissue out of you all right so that's invasive then they'll go through a series of procedures and prepared a tissue before they can examine the", "start_timestamp": "00:10:30", "end_timestamp": "00:11:11", "start_second": 630, "end_second": 671, "url": 
"https://www.youtube.com/watch?v=NyoezBq14vE&t=630s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "tissue slices under microscope so the whole procedure is ex-vivo invasive and takes time can we move this technology and transform this to an in vivo situation where we can look at a tumor margin right as we know that a lot of cancer patients die because the cancer cannot be removed 100% so there's small residual of cancer cells and that are left around right so then the cancer will grow back can we use our technology to demarcate and basically find the boundaries of the tumor and remove the cancer 100% so that's going to save lives there are", "start_timestamp": "00:11:11", "end_timestamp": "00:11:55", "start_second": 671, "end_second": 715, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=671s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "many many reasons why optics is so powerful but we face challenges one of the challenges is called diffraction right in high school or middle school we were taught that you can focus a light beam through a lens to a geometric point all right if you say that in a high school you get a full mark right but if you say that in college the professor may tell you you're wrong right so because light is a wave anyways cannot be focused to a point a geometric point right the ultimate size of the focal point depends on the", "start_timestamp": "00:11:55", "end_timestamp": "00:12:36", "start_second": 715, "end_second": 756, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=715s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": 
"https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "wavelength right in this graph if you do your best try to focus as sharply as possible make this angle alpha 90 degrees your focus eyes will become wavelengths over - right that's how small you can focus this is called the diffraction limit so waves water fracked this year three of our colleagues working in our field were awarded the Nobel Prize in Chemistry actually for breaking through this limit so this limit has been come they can generate resolution smaller than the wavelength which is amazing achievement now we face even", "start_timestamp": "00:12:36", "end_timestamp": "00:13:22", "start_second": 756, "end_second": 802, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=756s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "more challenges in penetration right our intuition tells us light will not penetrate through tissue because we can't even see through our own palm right so when we close our eyes our eyelids is going to block all the light we can't even see the outside but you will actually see light right this is why some people when they sleep they have to wear blindfolds you have to wear blindfolds our eyelids are not dark enough they're not blocking enough light right so photons you know basically it's a quantum term for light photons will", "start_timestamp": "00:13:22", "end_timestamp": "00:13:59", "start_second": 802, "end_second": 839, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=802s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "actually go through the eyelids and reach the retina and this is why even when you close your eyes you will 
see some light where's the challenge if we want to penetrate penetrate tissue and get images 350 years ago microscopy was invented right that really revolutionized medicine or any previous before that point there was no technology that allows us to see cells with the Y field optical microscopy we can see cells now we're at micron resolution however you have to cut the tissue into very thin slices like ten", "start_timestamp": "00:13:59", "end_timestamp": "00:14:42", "start_second": 839, "end_second": 882, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=839s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "microns in thickness and this is still the standard technique for histology or standard pathology analysis so what I call the penetration limit in this case is the operation right so the way from distortion in order to form a very good image you have to have a very regular wavefront you have you have a controlled wavefront so in tissue because of the distribution of the refractive index in other words the speed of light will travel this light will travel at different speeds of light depending on which part of the", "start_timestamp": "00:14:42", "end_timestamp": "00:15:23", "start_second": 882, "end_second": 923, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=882s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "tissue lion is traveling through and that gives you a problem that's gonna cause distortion and preventing us from forming a good image right so that problem was overcome about maybe 20 years ago using modern optical microscopy you might have heard of a confocal microscopy two-photon microscopy or even optical coherence tomography these 
are really modern wonders of optical imaging technologies those technologies will reject multiple scattered photons they'll retain the on scattered or singly back scattered photons for imaging and there was", "start_timestamp": "00:15:23", "end_timestamp": "00:16:05", "start_second": 923, "end_second": 965, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=923s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "sharpen up the images they overcome this operation limit in penetrate from the original limit of say a hundred microns to about a millimeter by the way 100 microns is roughly the diameter of our hair that's how small it is a millimeter you can picture if you have a little ruler you can picture a millimeter right so a millimeter doesn't sound very big but it's a huge advancement from 0.1 millimeters so 100 microns is 1/10 of a millimeter so that's a factor of 10 enhancement in terms of penetration right though those technologies have", "start_timestamp": "00:16:05", "end_timestamp": "00:16:48", "start_second": 965, "end_second": 1008, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=965s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "impacted biomedicine enormously right not just for biology but they've also been used in medicine if you go to your ophthalmologist one day they may actually use Oct article coners tomography to examine the retina for you right because without Oct you can only see the surface of the retina you can't go beyond the retina there might be something going wrong sometimes it's behind the surface of the retina OCT is able to see through the entire resume layer very very powerful technique however this set of modern 
optical microscopy techniques faces the next challenge, something I call the diffusion limit: they cannot go beyond one millimeter in depth in biological tissue. How do we overcome that problem? We use photoacoustic microscopy, and in general photoacoustic tomography, where we actually use multiply scattered photons. I have a little cartoon here: conventional microscopy uses straight-propagating photons, and modern optical microscopy tolerates a little bit of scattering because it can reject multiply scattered light. But to go deeper, only multiply scattered light is left, so you cannot afford to reject it, otherwise you get no signal. So we have to use those tortuous paths of light propagation, and somehow we have to get spatial information. That's the huge challenge, because when photons wander around through tortuous paths you lose spatial information, and that means you're not going to get a sharp image. We somehow have to form a sharp image. Our solution
is to use photoacoustics, and as a result we advanced the penetration not just by one order of magnitude but by more than one, almost two orders of magnitude. Now we're able to penetrate multiple millimeters, even multiple centimeters, so we're talking about not only skin-level penetration but also organ-level penetration; I'll show you some more examples later. Like anything else, we face the next challenge: we cannot penetrate beyond, right now, seven centimeters. How do we attack the next challenge? This limit is called the dissipation limit, because beyond this depth even the multiply scattered photons are spread around, so they're not very intense, even though there are a lot of photons at that kind of depth. So the next natural question is: can we gather those photons and make them intense? There's an upcoming technique called wavefront engineering with internal guide stars, and there's hope that we can potentially break through this dissipation limit and reach the next level. This could be extremely exciting if we reach there,
because now we're talking about tens of centimeters of penetration using safe light; now we're talking about potentially whole-body human imaging using non-ionizing radiation with rich contrast information. So very, very exciting; I'll touch upon this topic later on, along with our ability to reverse time. Now of course the ultimate limit is the absorption limit: if you take all the scattering centers out of the body, all you have left is absorption, and that's going to limit your light penetration. If you use the right light wavelength, the right light color, that penetration is extremely long, like a meter or so. All right, so let's take a step back and ask ourselves the question: what does it take to get a good image? We all know how a chest x-ray works. It works so well because x-rays do not scatter much in tissue; you get what's called ballistic light, ballistic photons, in the x-ray regime, and that casts a shadow directly for you, so you can see the bones and certain structures. If you mimic x-ray projection by using light, once you have a certain tissue thickness you're not going to see a shadow at all, because light follows a highly tortuous photon path before reaching the observer.
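The contrast between the x-ray shadow and the missing optical shadow can be made concrete with a simple Beer-Lambert estimate of the ballistic (unscattered) fraction. This is an illustrative sketch, not from the talk; the scattering mean free path and attenuation length below are assumed, order-of-magnitude values for soft tissue.

```python
import math

def ballistic_fraction(depth_mm: float, attenuation_length_mm: float) -> float:
    """Beer-Lambert estimate of the unscattered (ballistic) fraction."""
    return math.exp(-depth_mm / attenuation_length_mm)

# Illustrative, assumed numbers (order of magnitude only):
# optical scattering mean free path in soft tissue ~0.1 mm,
# effective x-ray attenuation length in soft tissue ~50 mm.
chest_mm = 30.0
optical = ballistic_fraction(chest_mm, 0.1)   # effectively zero
xray = ballistic_fraction(chest_mm, 50.0)     # ~0.55: enough to cast a shadow
print(f"optical ballistic fraction: {optical:.3g}")
print(f"x-ray ballistic fraction:   {xray:.3g}")
```

Through a few centimeters of tissue essentially no optical photon survives unscattered, while most x-ray photons do, which is why a chest x-ray casts a crisp shadow and visible light does not.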
So on this side you won't see a shadow. If we surgically cut open the tissue, that's invasive of course, but the right-hand side is now very well defined, because it's all exposed to air and the photon path is well defined, allowing us to form a very good image despite the tortuous photon paths on the left side. So the point I want to make is that you only need one-sided clarity; you don't need both illumination clarity and detection clarity. But this is so invasive it's not very useful. A better idea is called optical clearing: we can introduce certain chemicals into the tissue that turn the tissue transparent, and that achieves the equivalent clarity, giving us a very sharp image. Unfortunately this process is toxic; you may have turned the tissue transparent, but you've killed it, so it's not very useful for in vivo imaging. What we are doing instead is converting light, photons, into ultrasound signals, acoustic signals, through the photoacoustic effect. Acoustic scattering is orders of magnitude, like a thousand times, weaker than optical scattering. Equivalently, on the detection side we've reached that clarity without invasiveness or toxicity, so on the
detection side we're now dealing with transparency, and that allows us to form a sharper image. That's the basic idea behind this technology. I'll give you more demonstrations later on to make sure we understand the concept. Very much like x-ray CT, there's also photoacoustic CT; we have different geometries, and I'm going to talk about the circular geometry first. Photoacoustics as a physical phenomenon has been around for over 100 years; in fact Alexander Graham Bell first reported photoacoustics. He actually invented the concept of the photophone, and very interestingly, this paper was published only a few years after the first telephone was built. He had the idea of building a photophone, using light to communicate: encode the voice or music onto a light beam, propagate the light beam in space, then convert the light back into sound so you can hear it. He had no access to a laser at the time; he was way ahead of his time, because the laser was not invented until 1960, so he had to use sunlight to demonstrate the principle. Now, this idea never quite took off, for reasons you can imagine: for it to work you have to have line-of-sight
communication. But nowadays we have lasers, so this idea may actually take off again; someone may revive it for secure communication. We have the opposite problem from 100 years ago: the telephone let you talk between neighborhoods without worrying about line-of-sight transmission, but now we worry about security, so between towers, for example, you have directed communication and you don't have to worry about losing your message. Of course in those days there was no concept of tomography; that was not even in the lexicon. It's a totally modern concept, and we're using this very old physics with a very new imaging concept to form a new imaging modality. A very simple analogy I can think of for photoacoustic CT is: how can we pinpoint a single point sound source, like a thunderbolt? When we see the lightning, we reset our stopwatch; that's our time zero. When we hear the thunder, we record a time delay t1. Multiply
the time delay by the speed of sound and we define a radius, a first spherical shell on which the lightning took place. If we have three such measurements centered at different locations, we have three spherical shells, and the intersection of the three shells pinpoints the thunderbolt. So photoacoustic CT, in essence, is as straightforward as that: it's triangulation, something we learned in middle school or high school; you can triangulate geometrically and figure out the source. Except that in a real situation we don't have a point source: we're going to use light to generate a volumetric, 3D, complex source, and we don't know the exact source distribution, because the source distribution is the image we're after, so we have to use more than three detectors. But this gives you the basic concept at the simplest level. For photoacoustic CT, the first thing we do is expand the laser beam, very different from the common use of lasers where you want very high intensity, to drill a piece of metal for example. So when we talk about lasers you might say, well, that would be dangerous; but the laser beam here, while we want high collimation, is not very high power.
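The thunderbolt analogy above can be sketched in code. This is a minimal 2D version under idealized assumptions (a true point source, exact delays; all names and numbers are illustrative, not from the talk): each delay times the speed of sound gives a radius, and subtracting the circle equations pairwise leaves a small linear system.

```python
import math

def trilaterate_2d(detectors, delays, c=343.0):
    """Locate a point source from 3 detector positions and arrival delays (s).

    Subtracting the circle equations |x - p_i|^2 = (c * t_i)^2 pairwise
    yields a 2x2 linear system, solved here by Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = detectors
    r1, r2, r3 = (c * t for t in delays)
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# A source at (100, 50) m heard by three detectors (speed of sound in air):
src = (100.0, 50.0)
dets = [(0.0, 0.0), (300.0, 0.0), (0.0, 300.0)]
delays = [math.dist(src, d) / 343.0 for d in dets]
print(trilaterate_2d(dets, delays))  # recovers approximately (100.0, 50.0)
```

A real scanner replaces the point source with an unknown volumetric source and the three detectors with hundreds, but the time-of-flight geometry is the same.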
So the first thing we have to guarantee is safety. What we do is broaden the laser beam and make sure the light intensity is within the safety limit; there's an institute called ANSI that sets standards for the safe use of lasers, and if you stay within the safety limit you're very, very safe. Light will be scattered around, so if you want to image deep you have to tolerate light scattering; we actually allow photons to scatter, so you can penetrate multiple millimeters, even multiple centimeters, in biological tissue. When light is absorbed, it generates some heating. We use very short laser pulses, nanosecond pulses; one nanosecond is one billionth of a second, so this heating is very, very rapid. And we don't need a lot of heating: millidegrees, one one-thousandth of a degree, would be adequate; that already gives you a detectable signal, and if you generate hundreds of millidegrees you have a very good signal to work with, which allows you to form a bright image. This transient heating generates ultrasonic emission; it's going to generate an acoustic wave through the photoacoustic effect. This is the Bell effect.
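The millidegree-heating claim can be checked with a back-of-envelope calculation relating absorbed energy density to temperature and pressure rise. This sketch uses assumed, typical-order values (a Grueneisen parameter around 0.2, blood-like absorption of 1 per cm, a 10 mJ/cm^2 fluence); none of these specific numbers come from the talk.

```python
# Back-of-envelope photoacoustic signal generation (all values assumed):
GRUENEISEN = 0.2          # dimensionless (typical soft tissue, assumed)
MU_A = 100.0              # optical absorption coefficient, 1/m (= 1/cm)
FLUENCE = 100.0           # laser fluence, J/m^2 (= 10 mJ/cm^2)
RHO = 1000.0              # tissue density, kg/m^3
C_HEAT = 4000.0           # specific heat, J/(kg K)

absorbed = MU_A * FLUENCE                # absorbed energy density, J/m^3
delta_T = absorbed / (RHO * C_HEAT)      # temperature rise, K
p0 = GRUENEISEN * absorbed               # initial pressure rise, Pa

print(f"temperature rise: {delta_T * 1e3:.2f} mK")   # millidegree heating
print(f"initial pressure: {p0 / 1e3:.1f} kPa")       # easily detectable
```

With these assumed numbers the pulse heats the absorber by only a few millikelvin, yet launches a kilopascal-scale pressure wave, which is well within the reach of ultrasound detectors.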
Because of the acoustic transparency, we can detect all the acoustic signals outside and form a very sharp image, while the contrast comes from optical absorption; so we're combining optical contrast with ultrasonic resolution. One common question is: why don't we use ultrasound tomography, why do you have to use photoacoustics? With ultrasound tomography, as you know, you fire an ultrasound pulse into tissue and then listen to the echoes; medical ultrasound imaging originated from sonar, and after the Second World War scientists and physicians started to borrow that technique and adapt it for human imaging. What ultrasound imaging does not provide is the optical contrast. We want the optical mechanism as the contrast, so we have access to molecular-level information. For example, using light, and I'm going to show you this later, we can see the color difference between the two forms of blood, oxygenated and deoxygenated, red versus blue: our veins look blue, the arteries look red. Ultrasound will not be able to tell the difference; that's just one example. This is the first set of functional photoacoustic images; it's also the first set of in vivo images acquired using photoacoustic CT. This was reported
in 2003 by our lab, and we can actually see brain activation: if we stimulate one side of the whiskers of a small animal, the contralateral side of the brain is hemodynamically activated, so the blood flow changes, and we can detect that change to show the brain activation on one side. Those images were acquired noninvasively, and this work really started the growth of our field: if you look after 2003, our field doubled in size roughly every three years, and after 2010 the conference on this topic became the largest at Photonics West, which is a twenty-thousand-attendee gathering. So this has surpassed a lot of the competing technologies, or peer technologies. A natural question is: why is this technology so exciting? Well, this is probably the only technology that allows us to image from organelles all the way to organs in vivo with the same contrast; we use optical absorption as the contrast mechanism. I'm plotting different implementations of this technology here; we actually have far more than can be plotted, but this shows that it is a highly scalable technology: we can image at a very microscopic level, but we can also scale
to the macroscopic level. This type of multiscale imaging can be very, very important, because in current medical or biomedical practice, for cells and below we use optical microscopy, so we're acquiring optical contrast, but for tissues and above we switch to non-optical modalities: MRI, ultrasound, x-ray CT, what have you. So we're talking about two different contrast mechanisms, preventing us from correlating images across all the length scales. As we know, to understand a problem we have to have a multiscale understanding; you have to correlate information across the length scales. Photoacoustic tomography can potentially allow us to do so by enabling multiscale biological research and enabling translation of microscopic discoveries to macroscopic clinical practice, or at least accelerating the pace of such translation. For the 2003 work we actually used a single-element transducer, and it took like 20 minutes to get a 2D image; now we have ultrasound arrays with 512 elements, so we can get a 2D image within 2 seconds. In fact, now in our lab we're
building a system that allows us to get an image within one millisecond, so the technology is really getting faster and faster in terms of data acquisition. We can image the whole body of a small animal without injecting any contrast agents; we can see the internal organs with well-defined boundaries, detect tumors, detect functions. Pharmaceutical companies are very interested in this type of technology, because the alternative for seeing this type of contrast is x-ray CT, and the radiation dose is a big problem: to monitor the same animal during development of a new drug, you have to monitor it at multiple time points over a time course of a couple of months at least, so the radiation dose can kill the animal. And it doesn't give you enough information either, because x-rays sometimes do not provide enough information; you need the functional information to test the efficacy of the drug. Photoacoustic tomography is simultaneously analogous to MRI and PET. This is one example: on this side we can see the hemodynamic contrast, the concentration of hemoglobin; we detect the
brain, again in response to electric stimulation of the paws of the animal on one side, and you can see the contralateral side of the brain activate. On this side we're showing why this is analogous to PET imaging: we're using a glucose analog that mimics glucose, so where there's glucose uptake you see stronger signals, very much like PET imaging. So this can be potentially very powerful in the future as well, because photoacoustics can serve as a backbone to connect with other standard modalities like MRI and PET. We're also trying to push brain imaging to the ultimate level: human brain imaging. This is an extremely challenging problem, because the human skull is a lot thicker than an animal skull, and that causes problems because the skull has a different speed of sound than soft tissue. Here is a photograph, along with an x-ray CT image, of an ex vivo adult skull; we put a canine brain inside this skull, and then imaged it using photoacoustic tomography. Actually, some of the colleagues predicted this would not be possible because the skull is so thick, but we got very, very encouraging data: an image that shows certain structures which match the structures in the photograph very well. The next big step is to push this to
in vivo imaging of the human brain. Unlike x-ray CT, which always works in transmission mode because x-rays don't scatter much, photoacoustic waves propagate essentially in all directions, which means we're very flexible in terms of actual implementation. I just talked about the circular geometry for detection, but we can also do it using a linear geometry. This is a handheld ultrasound probe that you might have seen in the hospital; we have hundreds of ultrasound detectors along this probe, and we use optical fiber bundles to deliver the light. With a single laser shot we can illuminate the volume below this linear probe, which allows us to form a two-dimensional image; all the data is acquired within, say, 100 microseconds, so you avoid motion artifacts when you acquire this type of image. And you can see this is actually a standard ultrasound machine; we worked in cooperation with Philips to modify this clinical ultrasound system for concurrent photoacoustic imaging, yielding dual contrast: photoacoustic contrast and ultrasonic contrast. This system works around this point, allowing you to penetrate multiple centimeters and get hundreds-of-microns resolution. This is one example where we can see a tiny absorber buried at a depth of five
centimeters; that was not possible using standard optical imaging at this type of resolution. In fact, this is a form of reporter-gene imaging, a form of molecular imaging. Three colleagues received the Nobel Prize in 2008 for their discovery of fluorescent proteins, which you might have heard of; it was a huge discovery that allows us to follow gene expression, but it's all fluorescence based, and with standard optical imaging you cannot penetrate deep, so all of the applications were limited to cells or to animals at a very shallow depth. Now we're talking about multiple centimeters of penetration for this type of imaging, so this is going to extend the capability of fluorescent proteins and their derivatives. This technology is being tested at Washington University in human studies, for human applications. This is one example where a breast tumor was imaged using standard ultrasound imaging and photoacoustic imaging: we can see the tumor using a very safe light dose. The laser intensity is one half of the safety limit, and the safety limit is less than one tenth of the damage threshold, so this is
an extremely safe light dose. We're also targeting something that might have a more immediate application, which is breast cancer staging. You might have heard that standard breast cancer staging is invasive: you have to surgically remove the first draining node, the sentinel lymph node. What we want to do is inject an organic dye near the tumor; it flows toward the sentinel lymph node, then we use photoacoustics to detect the dye in the lymph node and use a needle, guided by the photoacoustic technology,
to take cells out of the sentinel lymph node to test whether there's any presence of cancer cells. Once this is proven, we can convert the surgical procedure into a needle biopsy procedure. These are some of the initial test results in humans: you can see the sentinel lymph node, and we can see the needle here as well. Another clinical problem is monitoring the brain's oxygen consumption during surgery. You want to do it in the OR, but no noninvasive technique allows us to do that well; the current technique is invasive, where you insert sensors into the carotid artery and jugular vein to detect the oxygen content. So can we do this noninvasively? We can use photoacoustic tomography to quantify the oxygen saturation. We use SO2 to represent the oxygen saturation level at both the carotid artery and the jugular vein, and you'll see that they have different values; the difference in SO2 tells you how much oxygen has been extracted by the brain, so it's an indicator of how well the brain is consuming oxygen. If this difference is too small, that means the patient is not consuming enough oxygen, and you should do something about it; otherwise the patient might wake up from the surgery with brain damage. We've tested seven healthy volunteers, and the measured SO2 is very similar to the expected range, so this is very good news for us as an initial test; the next natural step is to move this into the OR to work on real patients. Let me move on to the microscopy domain. With photoacoustic CT, you have to use math to form an image: you have
your algorithm to form an image somewhat like x-ray CT image reconstruction but it's actually more sophisticated because it deals with an extra dimension so here we're gonna use an acoustic lens very much like optical lenses like I'm wearing a pair of lenses and everybody has a pair of lenses even if you don't wear glasses right our eyeballs actually have lenses inside as well these are biologically made so here", "start_timestamp": "00:43:01", "end_timestamp": "00:43:40", "start_second": 2581, "end_second": 2620, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=2581s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "we're gonna use the acoustic version of lenses to form an image directly let's just assume there's a target you know a piece of tissue we want to image we fire a light pulse to illuminate the tissue to generate photoacoustic waves and then we use this focused ultrasound transducer to receive the signals and you're gonna get a time trace like this so this is your time axis you get a voltage like this you'll see a spike corresponding to where the target is so if this target is deeper then this spike is gonna appear later if you have", "start_timestamp": "00:43:40", "end_timestamp": "00:44:18", "start_second": 2620, "end_second": 2658, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=2620s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "multiple absorbers or targets you'll see multiple spikes because we know the speed of sound in tissue we can convert this time of arrival into depth so this time trace is essentially a one dimensional image we call that an A-scan right or 1d image if we scan the system across the tissue we got a 2d image you would
call that a B-scan right now if you raster scan on a tissue surface you get a 3d image so that's the basic concept and this is the detail now maybe I should skip this this photograph shows the first 3d", "start_timestamp": "00:44:18", "end_timestamp": "00:45:01", "start_second": 2658, "end_second": 2701, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=2658s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "photoacoustic microscope which we published in 2005 so this is used to implement the idea I just described right so we focus light into tissue right to penetrate deep beyond like a millimeter which is the limit of how well we can focus right beyond a millimeter you can't really focus so well and so we use what's called a dark field we use a donut beam on the surface to minimize the surface interference and then the ultrasound transducer is confocally located you just want to maximize your signal strength with a single laser shot you", "start_timestamp": "00:45:01", "end_timestamp": "00:45:46", "start_second": 2701, "end_second": 2746, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=2701s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "get a 1d image in the depth direction so this head is raster scanned XY scan in its water tray to get a 3d image so this technology operates around this point it penetrates three millimeters we can scale to a few more millimeters if we want at the expense of resolution here you can get tens of microns resolution so the resolution is getting better this is an image of our own skin in this area of the palm there you can see blood vessels without injecting any contrast agents if you want to get the same type of
images using x-ray you have to inject", "start_timestamp": "00:45:46", "end_timestamp": "00:46:27", "start_second": 2746, "end_second": 2787, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=2746s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "iodine based contrast or some other heavy metal based contrast even then you'll miss some of the smaller vessels because x-ray does not give you enough contrast for vessel imaging so we all know the importance of blood vessels this is a depth resolved or B-scan image showing you some of the standard skin structures now we're working with our dermatology department to apply this technology for melanoma imaging we've also miniaturized the probe so we can use this technology in the GI tract all right so you have to make the device", "start_timestamp": "00:46:27", "end_timestamp": "00:47:11", "start_second": 2787, "end_second": 2831, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=2787s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "tiny in order to be inserted into the standard endoscope so unlike standard endoscopy like colonoscopy for example which only detects the surface of the lumen or the colon right so here we want to image beyond the surface the standard colonoscopy or upper GI endoscopy will miss anything beyond the surface we want to see deeper right seven millimeters is sufficiently deep in terms of the GI wall right so that allows us to see the deeper structures by providing both photoacoustic and ultrasonic contrast mechanisms so this", "start_timestamp": "00:47:11", "end_timestamp": "00:47:58", "start_second": 2831, "end_second": 2878, "url": 
"https://www.youtube.com/watch?v=NyoezBq14vE&t=2831s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "technology has been licensed to a large company for human translational commercialization I'll skip some of the details on the engineering side and this is by the way this is the first photo Costa endoscope we can also implement a photo acoustic microscopy at even finer resolution so if we sacrifice the penetration now we're back within one millimeter penetration these we do want to cover the very top layer of the of the thickness so you can focus light more tightly that allows you to get finer resolution and you do have to", "start_timestamp": "00:47:58", "end_timestamp": "00:48:41", "start_second": 2878, "end_second": 2921, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=2878s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "somehow collect the ultrasound wave so we have to engineer this transducer or beam combine or what we call light some combiner that allows us to combine the light optical axis and acoustic axis make them coaxial right so with a single laser shot you get an image 1d image and then you can raster scan to get a 3d image so this technology works around this point right so your penetration is about one point two millimeters but you're getting a single-digit micron resolution so this is one example where you can monitor the same animal over time so", "start_timestamp": "00:48:41", "end_timestamp": "00:49:22", "start_second": 2921, "end_second": 2962, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=2921s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": 
"https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "this is angiogenesis meaning new growth of blood vessels right so we can monitor this process because angiogenesis is a hallmark of cancer cancer cannot grow without growing more blood vessels so it's a very important process of cancer growth and of course this is just the one side of the story the flip side of the story is we can potentially we're actually doing that we're using the same technology to monitor drug the targets angiogenesis so we can use this as a therapeutic technology now therapeutic you use the anti-angiogenic drug to", "start_timestamp": "00:49:22", "end_timestamp": "00:50:04", "start_second": 2962, "end_second": 3004, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=2962s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "treat into genesis and we can monitor the efficacy of the drug this is another example we can see essentially every single blood vessel in the skin including the smallest capillaries you can see here these are capillary pads you know the lines are single capillaries that are actually single of ourselves so we're looking at single cell lab already using this technology without injecting any contrast agents this is all what we coin dodged in this contrast we can also detect the color you know because our C and D arcy", "start_timestamp": "00:50:04", "end_timestamp": "00:50:44", "start_second": 3004, "end_second": 3044, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=3004s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "hemoglobin molecules have different colors right so we can quantify the concentrations of both forms 
from which we can compute the oxygen saturation of hemoglobin right this is a very important parameter and we can detect arteries and veins by looking at the colors and this is actually coming out of our finger cuticle so we can image human capillary loops and watch the color of the blood vessels and that tells you where oxygen is released the most it turned out the tip of the capillary loop seems to release most of", "start_timestamp": "00:50:44", "end_timestamp": "00:51:24", "start_second": 3044, "end_second": 3084, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=3044s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "the oxygen right so this is very interesting physiology and one day we can potentially use this to monitor some of the diseases involved at the human capillary level even some of the diabetic applications we're trying to push this photoacoustic oximetry to the ultimate level right this is the device working at 1 Hertz just for demonstration purposes this is 20 Hertz and we actually work at 200 Hertz at this rate we can see single red blood cells traveling right so you can resolve single red blood cells in real time in", "start_timestamp": "00:51:24", "end_timestamp": "00:52:08", "start_second": 3084, "end_second": 3128, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=3084s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "vivo and watch how red blood cells bifurcate at this junction we can also look at the color of each red blood cell and look at how oxygen is released from red blood cells as we know red blood cells are oxygen carriers so this is the ultimate level of oximetry it allows us to study some of the fundamental biology
related to oxygen delivery and of course this is very much relevant to cancer metabolism now this is very recent the data is unpublished as we know in cancer the primary tumor will not normally kill patients it is the metastasis that kills", "start_timestamp": "00:52:08", "end_timestamp": "00:52:57", "start_second": 3128, "end_second": 3177, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=3128s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "the patients so when the primary tumor is somehow transported to a distant site it grows a new tumor and grows everywhere and that kills the patient so CTC or circulating tumor cells are very important because this is one mechanism where cancer cells spread now we're able to monitor them you'll see these flashes of white these are circulating tumor cells if we can identify them in the bloodstream right you can do multiple things to stop the metastasis right you could well you can imagine that potentially you can", "start_timestamp": "00:52:57", "end_timestamp": "00:53:43", "start_second": 3177, "end_second": 3223, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=3177s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "simply zap these circulating tumor cells using a higher energy laser pulse that would be one approach there are certain other pharmaceutical approaches you can think of I talked about histology right so how can we make standard histology non-invasive in vivo and we can actually detect the cell nuclei directly by using photoacoustic microscopy we get an image that looks almost like standard H&E staining histology so this is the standard histology this is the same piece of tissue that we acquired
using our native", "start_timestamp": "00:53:43", "end_timestamp": "00:54:24", "start_second": 3223, "end_second": 3264, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=3223s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "contrast we use the optical contrast so we can potentially move this into the o.r and find out brain tumor margin for example right so that can potentially improve patient survival we also try to beat that wavelength limit for resolution so you can see here this is the stander microscopy at a very fine resolution already but we improve that further to a resolution of 19 an au meters so this is sometimes called super resolution imaging we break through this wave lens however to limit for resolution now we're detecting mitochondrion we can", "start_timestamp": "00:54:24", "end_timestamp": "00:55:05", "start_second": 3264, "end_second": 3305, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=3264s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "see some of the internal structures of a single metal counterion so now we're talking about this regime of resolution so so far I've demonstrated the scalability of photoacoustic tomography has applications in animals cells all the way to humans but we're also working on time reversal because eventually wanna beat the penetration limit the photo acoustic tomography go even beyond what's available right now so time reversal is possibly one solution this is a very very interesting concept you know the we are actually", "start_timestamp": "00:55:05", "end_timestamp": "00:55:48", "start_second": 3305, "end_second": 3348, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=3305s", "title": 
"Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "inspired by astronomy the astronomers started something very similar because they want to observe the stars clearly but our atmosphere is going to distort the wave coming onto the earth so that allows actually pours the images what they do is the fire a laser beam to generate some sparks the sparks in the sky will serve as a guide star there is its artificial Chi star it allows us to correct the wavefront distortion as a result you can form a sharp image like this so it's very striking difference all right can we do the same thing in", "start_timestamp": "00:55:48", "end_timestamp": "00:56:31", "start_second": 3348, "end_second": 3391, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=3348s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "biological tissue right so we don't have first of all we don't have a guys there how do we solve that problem in biological tissue well let me illustrate the time reversal concept using this cartoon let's say we have a tiger this there's a bottle of liquid right that's gonna cause way front distortion so if you have plane wave way from very nice way from on this side as a propagates through this bottle it becomes this wavy way from right this wave wave wave away from if it's reflected by this standard mirror like the mirror you use every", "start_timestamp": "00:56:31", "end_timestamp": "00:57:13", "start_second": 3391, "end_second": 3433, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=3391s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": 
"NyoezBq14vE", "text": "morning it's gonna cause basically return its way from it'll stay wavy and when you go through this bottle again it'll become even more wavy alright so the conventional mirror is not gonna give you anything that's reasonable it's not gonna give you a sharp image in fact you know by going through the bottle twice you get twice distorted you get an image that looks like a distorted tiger but if you use this special mirror something called the face conjugating mirror that's going to return the way frown in assert in the same direction", "start_timestamp": "00:57:13", "end_timestamp": "00:57:50", "start_second": 3433, "end_second": 3470, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=3433s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "right so you notice here this way from curvatures are complementary to the from here right but this phase conjugate mirror is gonna return away from a in the same direction this is almost like a time reversal right the concept is like a time reversal you record some scene right if the person walks forward you play it backward the person is gonna walk backward so we want a wave we want an optical waves to walk backwards right so that's why the phase can't you mirror does right when you go through the bottle the wavefront distortion is gonna", "start_timestamp": "00:57:50", "end_timestamp": "00:58:28", "start_second": 3470, "end_second": 3508, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=3470s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "be canceled through this bottle you get a very nice way from now you can see a very sharp image so that's the power of time reversal how do we implement 
the same concept in biological tissue we need the guide star first of all to begin with which is really hard to do in tissue in the early days our field actually embedded some molecules into tissue that provide a guide star the trouble is that's invasive you have to poke the tissue to inject some dye or fluorophores and it's also not very flexible because once you have that", "start_timestamp": "00:58:28", "end_timestamp": "00:59:04", "start_second": 3508, "end_second": 3544, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=3508s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "molecule in the location that's fixed you can only focus light to that position so what we do here is to focus ultrasound to a given point and that ultrasound is going to tag light passing through that part of the tissue we detect the tagged photons and time reverse the tagged photons back and they will come back to the ultrasound focal point so this technology is called time reversed ultrasonically encoded or TRUE optical focusing another technology is to use photoacoustics as a guide star right so this is called optical speckle", "start_timestamp": "00:59:04", "end_timestamp": "00:59:43", "start_second": 3544, "end_second": 3583, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=3544s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "right if you have a camera embedded in a piece of tissue this is what you would see in general yeah you got bright spots everywhere there are actually a lot of photons down there even when it's deep but they're all spread all over the place so using this optical focus we can actually collect a lot of the bright spots and concentrate them
onto this ultrasound focal spot if we were to use a nonlinear higher-order effect we can actually coalesce all of them into a single spot so this can be extremely powerful right", "start_timestamp": "00:59:43", "end_timestamp": "01:00:17", "start_second": 3583, "end_second": 3617, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=3583s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "so here we boost the signal by 6000 times it's a huge enhancement one more possibility of finding a guide star is to use motion right so when there's blood flow for example the blood flow can be used to serve as a guide star that's moving so we have this cartoon if you have a plane behind a cloud for example can we actually track that plane right so using the motion of the object as a guide star itself all right let me spend a couple more slides talk about one last technology we're working on in fact this paper appeared today and actually this", "start_timestamp": "01:00:17", "end_timestamp": "01:01:08", "start_second": 3617, "end_second": 3668, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=3617s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "our work made the cover of Nature as we know this is the best scientific journal so we developed the technology that allows us to literally detect the fastest phenomenon possible allowed by physics as we know Einstein's relativity theory says nothing can travel beyond the speed of light so we're detecting the light pulse itself and the light pulses are traveling in space at the speed of light we're capturing the light pulses as the light propagates in space gets reflected by this mirror and the light ray bends as", "start_timestamp": 
"01:01:08", "end_timestamp": "01:01:52", "start_second": 3668, "end_second": 3712, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=3668s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "a crosses interface of two media you can see the angles will change right or you've had two pulses in two media and we can watch them a race and see which one will fly will travel faster and here you know what I saw a green pulse traveling here it'll excite some fluorescence and this red spot is actually a fluorescent light so we're watching this fluorescent light to decay and with a single laser shot we observed this phenomenon even the force and decay is on the nanosecond scale because we have picosecond resolution so that", "start_timestamp": "01:01:52", "end_timestamp": "01:02:38", "start_second": 3712, "end_second": 3758, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=3712s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "nanosecond looks like eternity for us so this can be used for very high speed microscopy or you can potentially use this in combination with the Hubble telescope for example to look at very large phenomenon right like supernovae for example so this has very broad applications potentially and this is the technology we just developed I don't know how much detail we need to get into this maybe I should skip at this late hour you can ask me questions later if you're curious about this so I've covered a lot of different technologies", "start_timestamp": "01:02:38", "end_timestamp": "01:03:23", "start_second": 3758, "end_second": 3803, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=3758s", "title": "Reversing Time, Photoacoustics and Other 
Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "here started with the motivation and challenges in your field I talked about photoacoustic CT photo QC microscopy in two forms time reversal and what we call cup compressed ultra-fast photography Washington University requires me to discuss my financial financial interest with two companies which have commercialized the photo because it photoacoustic tomography and we're funded by NIH through various projects thank you very much I'll be happy to entertain any questions if there's any oh it's hard for me to see", "start_timestamp": "01:03:23", "end_timestamp": "01:04:59", "start_second": 3803, "end_second": 3899, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=3803s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "so tissue imaging is the main goal for our research but you can potentially use it for some other applications even non biomedical right you can imagine robotic vision for underwater right so photo if there's turbidity what we call when there's a muddy water for example right you can't really see very far underwater so using photo acoustics so you can overcome the turbidity you fire some light pulses if your goal is to see a couple meters beyond you right so potentially like and propagate that for our generate acoustic waves then you can", "start_timestamp": "01:04:59", "end_timestamp": "01:05:38", "start_second": 3899, "end_second": 3938, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=3899s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "pick them pick that up and form an image does 
that answer your question okay and there was yeah back there please so it hurt you a light-induced sound so what happens is light is first absorbed so you generate a transient heating a very short temperature rice right so on the order of nano second scale because we use nanosecond laser pulses and so that temperature rises as you can imagine is gonna cause thermo elastic expansion and that pushes tissue even though this is very my new every milli degree gives you eight millibars or 800", "start_timestamp": "01:05:38", "end_timestamp": "01:06:32", "start_second": 3938, "end_second": 3992, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=3938s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "Pascal's of pressure rice which is already detectable so if you generate hundreds of million degrees temperature rise then you get a very bright signal to detect and that allows us to form a image so this is called a photo acoustic effect so basically it's a photo thermal and thermal acoustic in the end that we just call that photo acoustic affect the standard detectors has to be coupled acoustically to the tissue you can direct it touch it just like when you perform ultrasound imaging in the hospital they typically apply some gel", "start_timestamp": "01:06:32", "end_timestamp": "01:07:12", "start_second": 3992, "end_second": 4032, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=3992s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "on the skin to to ensure acoustic coupling water coupling is one way gel coupling is more convenient for a lot of applications however our field is also developing non-contact sensing of acoustic waves through optical interferometry 
so if you shine light onto the tissue surface if there's vibration on the surface that displacement can be picked up through optical interferometry and that can be translated into an acoustic wave as well so that's a potential non-contact version for now most of the implementations are", "start_timestamp": "01:07:12", "end_timestamp": "01:07:49", "start_second": 4032, "end_second": 4069, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=4032s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "based on the contact version it's more traditional and it's better developed there's yeah please mm-hm so right now for breast cancer imaging x-ray mammography is still the gold standard so we're still sticking with that one there are two types of directions that our field has taken one is to supplement x-ray mammography because x-ray mammography as sensitive as it is they have trouble with young women with radiographically dense breasts where they really fail miserably the other flaw with x-ray mammography is the very
screening to begin with so can we use photoacoustic tomography to screen for breast cancer without using ionizing radiation at all", "start_timestamp": "01:08:59", "end_timestamp": "01:09:43", "start_second": 4139, "end_second": 4183, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=4139s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "just start from from the functional contrast provided by photo acoustics the reason I said is going to take longer it's because even though breast cancer is a huge problem the rate of incidence is still relatively low you know you have to test like a thousand patients to get enough patients with breast cancer so you can establish the statistics so it's a more difficult problem any more questions yeah please all right images right yeah so it's a very good question your temp is still talking about photoacoustic tomography", "start_timestamp": "01:09:43", "end_timestamp": "01:10:43", "start_second": 4183, "end_second": 4243, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=4183s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "right so there are two different ways of getting an image with the microscopy technique with a single laser shot you get a 1d image which is depth resolved so you eliminate along the vertical axis for example then you generally the essentially a volumetric acoustic source but then we use ultrasound transducer to pick up the signals so the acoustic time of rival is gonna tell you the depth information you get a 1d image with a single laser shot then you have to use scan linearly to get a 2d image then you have the if you want to get a 3d image", "start_timestamp": "01:10:43", "end_timestamp": 
"01:11:22", "start_second": 4243, "end_second": 4282, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=4243s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "then you have the raster scan that's one way of getting a 3d image another way is to use an array of transducers so for example like the ring array I showed right so basically we have the ring of 512 elements surrounding the head of the animal or the body of the animal or wherever we want an image like the breast then we use a broad field light illumination we don't focus light here at all we just let photons basically bathe the tissue you generate the 3d volumetric source then your ring is going to detect signals from a slice", "start_timestamp": "01:11:22", "end_timestamp": "01:12:02", "start_second": 4282, "end_second": 4322, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=4282s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "because we acoustically focus in the elevational direction the height direction and then we detect signal from this slice then we use image reconstruction to form a 2d image and if you want to get a third dimension of course you do the scan now this is the way with the 1d array now if budget is not a problem then we can potentially get a 2d array I mean in the ideal world that you imagine we could surround the breast or the head with a lot of transducers in that case with a single laser shot you should be able to get a 3d image but
Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "right now the technology allows us to get a ring array or 1d array at a reasonable price but a two-dimensional array is you know quite expensive in fact the ultrasound world has been working on that for quite some time and they're talking about one point five D in other words one dimensional density is high the second dimensional density is much lower because you simply cannot have that many ultrasound transducers and still maintain the cost yeah please mm-hmm so I was trained in laser physics so I got my PhD from Rice and my PhD thesis", "start_timestamp": "01:12:38", "end_timestamp": "01:13:45", "start_second": 4358, "end_second": 4425, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=4358s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "was actually on a chemistry project after I got my PhD I decided to switch to something more applied so I worked for MD Anderson Cancer Center there was a laser lab I think that was a very good fit for my career goal so for applications in the OR I think there are several potential applications so one of them is the cancer demarcation as I mentioned can we have a complete removal of the tumor so you minimize the recurrence rate another application as I mentioned is the brain oxygen consumption monitoring we want to do that now you basically so", "start_timestamp": "01:13:45", "end_timestamp": "01:14:32", "start_second": 4425, "end_second": 4472, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=4425s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "this is what's so good
about Wash U right so even though most of my lab space is on the hilltop or Danforth campus I have a lab in the medical school as well we have a lot of collaborators you know so physicians Wash U physicians are very eager to work with engineers they want to work with us they knock on my door come to my office want to work with engineers to apply our technology to their problems so we're targeting multiple problems in the GI tract you know breast cancer staging melanoma you name it so we want to see the real", "start_timestamp": "01:14:32", "end_timestamp": "01:15:17", "start_second": 4472, "end_second": 4517, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=4472s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "NyoezBq14vE", "text": "impact on the human side photoacoustic tomography has been commercialized by several companies now I personally work with two of them so for preclinical imaging it has been you know established so there are products available they are being sold for human imaging it's not there yet so I'm not allowed to mention the name of the company which has licensed our IP for human imaging but so we're very happy that this is moving forward so it's going to take some time eventually I'd like to see this as a daily used tool", "start_timestamp": "01:15:17", "end_timestamp": "01:15:58", "start_second": 4517, "end_second": 4558, "url": "https://www.youtube.com/watch?v=NyoezBq14vE&t=4517s", "title": "Reversing Time, Photoacoustics and Other Optical Breakthroughs in Biomedical Imaging", "thumbnail": "https://i.ytimg.com/vi/NyoezBq14vE/maxresdefault.jpg"} {"video_id": "JgvyzIkgxF0", "text": "from the amazing results on vintage Atari games DeepMind's victory with AlphaGo stunning breakthroughs in robotic arm manipulation and even beating professional players at 1v1 dota the field of reinforcement
learning has literally exploded in recent years ever since the impressive breakthrough on the imagenet classification challenge in 2012 the successes of supervised deep learning have continued to pile up and people from many different backgrounds have started using deep neural nets to solve a wide range of new tasks", "start_timestamp": "00:00:00", "end_timestamp": "00:00:29", "start_second": 0, "end_second": 29, "url": "https://www.youtube.com/watch?v=JgvyzIkgxF0&t=0s", "title": "An introduction to Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/JgvyzIkgxF0/maxresdefault.jpg"} {"video_id": "JgvyzIkgxF0", "text": "including how to learn intelligent behavior in complex dynamic environments so in this episode I will give a general introduction into the field of reinforcement learning as well as an overview of the most challenging problems that we're facing today if you're looking for a solid introduction into the field of deep reinforcement learning then this episode is exactly what you're looking for my name is Xander and welcome to Arxiv Insights [Laughter] [Music] [Music] so in 2017 Pieter Abbeel gave a very inspiring demo in front of a large", "start_timestamp": "00:00:29", "end_timestamp": "00:01:07", "start_second": 29, "end_second": 67, "url": "https://www.youtube.com/watch?v=JgvyzIkgxF0&t=29s", "title": "An introduction to Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/JgvyzIkgxF0/maxresdefault.jpg"} {"video_id": "JgvyzIkgxF0", "text": "audience of some of the brightest minds in AI and machine learning so he showed this video where a robot is cleaning a living room bringing somebody a bottle of beer and basically doing a whole range of mundane tasks that robots in sci-fi movies can do without question and then at the end of the video Pieter revealed that the robot's actions were actually entirely remote-controlled by a human operator and the takeaway from this demo I think is a very important one it basically says that the
robots we've been building for decades now are", "start_timestamp": "00:01:07", "end_timestamp": "00:01:36", "start_second": 67, "end_second": 96, "url": "https://www.youtube.com/watch?v=JgvyzIkgxF0&t=67s", "title": "An introduction to Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/JgvyzIkgxF0/maxresdefault.jpg"} {"video_id": "JgvyzIkgxF0", "text": "physically perfectly capable of doing a wide range of useful tasks but the problem is that we can't embed them with the needed intelligence to do those things so basically creating useful state-of-the-art robotics is a software challenge and not a hardware problem and so it turns out that having a robot learn how to do something very simple like picking up a bottle of beer can be a very challenging task and so in this video I want to introduce you guys to the whole subfield in machine learning that's called reinforcement learning", "start_timestamp": "00:01:36", "end_timestamp": "00:02:07", "start_second": 96, "end_second": 127, "url": "https://www.youtube.com/watch?v=JgvyzIkgxF0&t=96s", "title": "An introduction to Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/JgvyzIkgxF0/maxresdefault.jpg"} {"video_id": "JgvyzIkgxF0", "text": "which I think is one of the most promising directions to actually get to very intelligent robotic behavior so in the most common machine learning applications people use what we call supervised learning and this means that you give an input to your neural network model but you know the output that your model should produce and therefore you can compute gradients using something like the back propagation algorithm to train that network to produce your outputs so imagine you want to train a neural network to play the game of pong what", "start_timestamp": "00:02:07", "end_timestamp": "00:02:35", "start_second": 127, "end_second": 155, "url": "https://www.youtube.com/watch?v=JgvyzIkgxF0&t=127s", "title": "An introduction to Reinforcement Learning", "thumbnail":
"https://i.ytimg.com/vi/JgvyzIkgxF0/maxresdefault.jpg"} {"video_id": "JgvyzIkgxF0", "text": "you would do in a supervised setting is you would have a good human gamer play the game of pong for a couple of hours and you would create a data set where you log all of the frames that that human is seeing on the screen as well as the actions that he takes in response to those frames so whether he is pushing the up arrow or the down arrow and we can then feed those input frames through a very simple neural network that at the output can produce two simple actions it's either going to select the up action or the down action and by simply", "start_timestamp": "00:02:35", "end_timestamp": "00:03:03", "start_second": 155, "end_second": 183, "url": "https://www.youtube.com/watch?v=JgvyzIkgxF0&t=155s", "title": "An introduction to Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/JgvyzIkgxF0/maxresdefault.jpg"} {"video_id": "JgvyzIkgxF0", "text": "training on the data set of the human gameplay using something like back propagation we can actually train that neural network to replicate the actions of the human gamer but there are two significant downsides to this approach so on the one hand if you want to do supervised learning you have to create a data set to train on which is not always a very easy thing to do and on the other hand if you train your neural network model to simply imitate the actions of the human player well then by definition your agent can never be better at", "start_timestamp": "00:03:03", "end_timestamp": "00:03:31", "start_second": 183, "end_second": 211, "url": "https://www.youtube.com/watch?v=JgvyzIkgxF0&t=183s", "title": "An introduction to Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/JgvyzIkgxF0/maxresdefault.jpg"} {"video_id": "JgvyzIkgxF0", "text": "playing the game of pong than that human gamer for example if you want to train a neural net to be better at playing the game of Go than the best human then by definition we
can't use supervised learning so is there a way to have an agent learn to play a game entirely by itself well fortunately there is and this is called reinforcement learning so the framework in reinforcement learning is actually surprisingly similar to the normal framework in supervised learning so we still have an input frame we run it through some neural network model and", "start_timestamp": "00:03:31", "end_timestamp": "00:04:01", "start_second": 211, "end_second": 241, "url": "https://www.youtube.com/watch?v=JgvyzIkgxF0&t=211s", "title": "An introduction to Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/JgvyzIkgxF0/maxresdefault.jpg"} {"video_id": "JgvyzIkgxF0", "text": "the network produces an output action either up or down but the only difference here is that now we don't actually know the target label so we don't know in any situation whether we should have gone up or down because we don't have a data set to train on and in reinforcement learning the network that transforms input frames to output actions is called the policy network now one of the simplest ways to train a policy network is a method called policy gradients so the approach in policy gradients is that you", "start_timestamp": "00:04:01", "end_timestamp": "00:04:32", "start_second": 241, "end_second": 272, "url": "https://www.youtube.com/watch?v=JgvyzIkgxF0&t=241s", "title": "An introduction to Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/JgvyzIkgxF0/maxresdefault.jpg"} {"video_id": "JgvyzIkgxF0", "text": "start out with a completely random network you feed that network a frame from the game engine it produces a random output action you know either up or down you send that action back to the game engine and the game engine produces the next frame and this is how the loop continues and the network in this case it could be a fully connected network but you can obviously apply convolutions there as well and now in reality the output of your network is going to
consist of two numbers the probability of going up and the probability of going", "start_timestamp": "00:04:32", "end_timestamp": "00:04:58", "start_second": 272, "end_second": 298, "url": "https://www.youtube.com/watch?v=JgvyzIkgxF0&t=272s", "title": "An introduction to Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/JgvyzIkgxF0/maxresdefault.jpg"} {"video_id": "JgvyzIkgxF0", "text": "down and what you will do while training is actually sample from the distribution so that you're not always going to repeat the same exact actions and this will allow your agent to sort of explore the environment a bit randomly and hopefully discover better rewards and better behavior now importantly because we want to enable our agent to learn entirely by itself the only feedback that we're gonna give it is the scoreboard in the game so whenever our agent manages to score a goal it will receive a reward of +1 and if the", "start_timestamp": "00:04:58", "end_timestamp": "00:05:27", "start_second": 298, "end_second": 327, "url": "https://www.youtube.com/watch?v=JgvyzIkgxF0&t=298s", "title": "An introduction to Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/JgvyzIkgxF0/maxresdefault.jpg"} {"video_id": "JgvyzIkgxF0", "text": "opponent scored a goal then our agent will receive a penalty of minus 1 and the entire goal of the agent is to optimize its policy to receive as much reward as possible so in order to train our policy network the first thing we're gonna do is collect a bunch of experience so you're just gonna run a whole bunch of those game frames through your network select random actions feed them back into the engine and just create a whole bunch of random pong games and now obviously since our agent hasn't learned anything useful yet it's", "start_timestamp": "00:05:27", "end_timestamp": "00:05:55", "start_second": 327, "end_second": 355, "url": "https://www.youtube.com/watch?v=JgvyzIkgxF0&t=327s", "title": "An introduction to Reinforcement 
Learning", "thumbnail": "https://i.ytimg.com/vi/JgvyzIkgxF0/maxresdefault.jpg"} {"video_id": "JgvyzIkgxF0", "text": "gonna lose most of those games but the thing is that sometimes our agent might get lucky sometimes it's going to randomly select a whole sequence of actions that actually lead to scoring a goal and in this case our agent is going to receive a reward and a key thing to understand is that for every episode regardless of whether we got a positive or a negative reward we can already compute the gradients that would make the actions that our agent has chosen more likely in the future and this is very crucial and so what policy", "start_timestamp": "00:05:55", "end_timestamp": "00:06:26", "start_second": 355, "end_second": 386, "url": "https://www.youtube.com/watch?v=JgvyzIkgxF0&t=355s", "title": "An introduction to Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/JgvyzIkgxF0/maxresdefault.jpg"} {"video_id": "JgvyzIkgxF0", "text": "gradients are going to do is that for every episode where we've got a positive reward we're going to use the normal gradients to increase the probability of those actions in the future but whenever we got a negative reward we're gonna apply the same gradient but we're gonna multiply it with minus one and this minus sign will make sure that in the future all the actions that we took in a very bad episode are going to be less likely in the future and so the result is that while training our policy network the actions that lead to", "start_timestamp": "00:06:26", "end_timestamp": "00:06:57", "start_second": 386, "end_second": 417, "url": "https://www.youtube.com/watch?v=JgvyzIkgxF0&t=386s", "title": "An introduction to Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/JgvyzIkgxF0/maxresdefault.jpg"} {"video_id": "JgvyzIkgxF0", "text": "negative rewards are slowly going to be filtered out and the actions that lead to positive rewards are going to become more and more likely so in a sense our agent is learning
how to play the game of pong now I know this was a very quick introduction to reinforcement learning so if you want to read up a bit and spend a little bit more time in thinking about the details I really recommend to read Andrej Karpathy's blog post Pong from Pixels it does a phenomenal job at explaining all the details all right so we can use", "start_timestamp": "00:06:57", "end_timestamp": "00:07:26", "start_second": 417, "end_second": 446, "url": "https://www.youtube.com/watch?v=JgvyzIkgxF0&t=417s", "title": "An introduction to Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/JgvyzIkgxF0/maxresdefault.jpg"} {"video_id": "JgvyzIkgxF0", "text": "policy gradients to train a neural network to play the game of pong that's amazing right well yes it is but as always there are a few very significant downsides to using this method let's go back to pong one more time so imagine that your agent has been training for a while and it's actually doing a pretty decent job at playing the game of pong it's bouncing the ball back and forth but then at the end of the episode it makes a mistake it lets the ball through and it gets a negative penalty so the problem with policy gradients is that", "start_timestamp": "00:07:26", "end_timestamp": "00:07:56", "start_second": 446, "end_second": 476, "url": "https://www.youtube.com/watch?v=JgvyzIkgxF0&t=446s", "title": "An introduction to Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/JgvyzIkgxF0/maxresdefault.jpg"} {"video_id": "JgvyzIkgxF0", "text": "our policy gradient is going to assume that since we lost that episode all of the actions that we took there must be bad actions and it's going to reduce the likelihood of taking those actions in the future but remember that actually for most of that episode we were doing really well so we don't really want to decrease the likelihood of those actions and in reinforcement learning this is called the credit assignment problem it's where if you get a reward
at the end of your episode well what are the exact actions that led to that specific", "start_timestamp": "00:07:56", "end_timestamp": "00:08:26", "start_second": 476, "end_second": 506, "url": "https://www.youtube.com/watch?v=JgvyzIkgxF0&t=476s", "title": "An introduction to Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/JgvyzIkgxF0/maxresdefault.jpg"} {"video_id": "JgvyzIkgxF0", "text": "reward and this problem is entirely related to the fact that we have what we call a sparse reward setting so instead of getting a reward for every single action we only get a reward after an entire episode and our agent needs to figure out what part of its action sequence was causing the reward that it eventually gets so in the case of pong for example our agent should learn that it's only the actions right before it hits the ball that are truly important everything else once the ball is flying off it doesn't really matter", "start_timestamp": "00:08:26", "end_timestamp": "00:08:55", "start_second": 506, "end_second": 535, "url": "https://www.youtube.com/watch?v=JgvyzIkgxF0&t=506s", "title": "An introduction to Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/JgvyzIkgxF0/maxresdefault.jpg"} {"video_id": "JgvyzIkgxF0", "text": "for the eventual reward and so the result of this sparse reward setting is that reinforcement learning algorithms are typically very sample inefficient which means that you have to give them a ton of training time before they can learn some useful behavior and I've made a previous video to compare the sample efficiency of reinforcement learning algorithms with human learning that goes much deeper into why this is the case and now it turns out that in some extreme cases the sparse reward setting actually fails completely so a
Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/JgvyzIkgxF0/maxresdefault.jpg"} {"video_id": "JgvyzIkgxF0", "text": "famous example is the game Montezuma's Revenge where the goal of the agent is to navigate a bunch of ladders jump over a skull grab a key and then actually navigate to the door in order to get to the next level and the problem here is that by taking random actions your agent is never gonna see a single reward because you know the sequence of actions that it needs to take to get that reward is just too complicated it's never gonna get there with random actions and so your policy gradient is never gonna see a single positive reward so it has no", "start_timestamp": "00:09:22", "end_timestamp": "00:09:52", "start_second": 562, "end_second": 592, "url": "https://www.youtube.com/watch?v=JgvyzIkgxF0&t=562s", "title": "An introduction to Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/JgvyzIkgxF0/maxresdefault.jpg"} {"video_id": "JgvyzIkgxF0", "text": "idea what to do and the same applies to robotic control where for example you would like to train a robotic arm to pick up an object and stack it onto something else well the typical robot has about seven joints that it can move so it's a relatively high action space and if you only give it a positive reward when it has actually successfully stacked a block well by doing random exploration it's never gonna get to see any of that reward and I think it's important to compare this with the traditional supervised deep learning successes that
input frame you have a target label and this lets you do very efficient gradient descent with something like back propagation whereas in a reinforcement learning setting you're having to deal with this very big problem of a sparse reward setting and this is why you know computer vision is showing some very impressive results while something as simple as stacking one block onto another seems very difficult even for", "start_timestamp": "00:10:22", "end_timestamp": "00:10:50", "start_second": 622, "end_second": 650, "url": "https://www.youtube.com/watch?v=JgvyzIkgxF0&t=622s", "title": "An introduction to Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/JgvyzIkgxF0/maxresdefault.jpg"} {"video_id": "JgvyzIkgxF0", "text": "state-of-the-art methods [Music] and so the traditional approach to solve this issue of sparse rewards has been the use of reward shaping so reward shaping is the process of manually designing a reward function that needs to guide your policy to some desired behavior so in the case of montezuma's revenge for example you could give your agent a reward every single time it manages to avoid the skull or reach the key and these extra rewards will guide your policy to some desired behavior and while this obviously makes it easier for", "start_timestamp": "00:10:50", "end_timestamp": "00:11:27", "start_second": 650, "end_second": 687, "url": "https://www.youtube.com/watch?v=JgvyzIkgxF0&t=650s", "title": "An introduction to Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/JgvyzIkgxF0/maxresdefault.jpg"} {"video_id": "JgvyzIkgxF0", "text": "your policy to converge to desired behavior there are some significant downsides to reward shaping so firstly reward shaping is a custom process that needs to be redone for every new environment where you want to train a policy so if you're looking at the benchmark of Atari for example well you would have to craft a new reward function for every single one of those games that's just not scalable the
second problem is that reward shaping suffers from what we call the alignment problem so it turns out that reward shaping is actually", "start_timestamp": "00:11:27", "end_timestamp": "00:11:55", "start_second": 687, "end_second": 715, "url": "https://www.youtube.com/watch?v=JgvyzIkgxF0&t=687s", "title": "An introduction to Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/JgvyzIkgxF0/maxresdefault.jpg"} {"video_id": "JgvyzIkgxF0", "text": "surprisingly difficult in a lot of cases when you shape your reward function your agent will find some very surprising way to make sure that it's getting a lot of reward but not at all doing what you wanted it to do and in a sense the policy is just overfitting to that specific reward function that you designed while not generalizing to the intended behavior that you had in mind and there's a lot of funny cases where reward shaping goes terribly wrong so here for example the agent was trained to do jumping and the reward function", "start_timestamp": "00:11:55", "end_timestamp": "00:12:24", "start_second": 715, "end_second": 744, "url": "https://www.youtube.com/watch?v=JgvyzIkgxF0&t=715s", "title": "An introduction to Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/JgvyzIkgxF0/maxresdefault.jpg"} {"video_id": "JgvyzIkgxF0", "text": "was the distance from its feet to the ground and what this agent has learned is to simply grow a very tall body and do some kind of a backflip to make sure that its feet are very far from the ground to give you one final idea of how hard it can be to do reward shaping I mean look at this shaped reward function for a robotic control task I don't even want to know how long the people from this paper spent on designing this specific reward function to get the behavior that they wanted and finally in some cases like alphago for example by
"https://www.youtube.com/watch?v=JgvyzIkgxF0&t=744s", "title": "An introduction to Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/JgvyzIkgxF0/maxresdefault.jpg"} {"video_id": "JgvyzIkgxF0", "text": "definition you don't want to do any reward shaping because this will constrain your policy to the behavior of humans which is not exactly optimal in every situation so the situation that we're in right now is that we know that it's really hard to train in a sparse reward setting but at the same time it's also very tricky to shape a reward function and we don't always want to do that and to end this video I would like to note that a lot of media stories picture reinforcement learning as some kind of a magical AI sauce that lets the agent", "start_timestamp": "00:12:54", "end_timestamp": "00:13:23", "start_second": 774, "end_second": 803, "url": "https://www.youtube.com/watch?v=JgvyzIkgxF0&t=774s", "title": "An introduction to Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/JgvyzIkgxF0/maxresdefault.jpg"} {"video_id": "JgvyzIkgxF0", "text": "learn by itself or improve upon its previous version but the reality is that most of these breakthroughs are actually the work of some of the brightest minds alive today and there's a lot of very hard engineering going on behind the scenes so I think that one of the biggest challenges in navigating our digital landscape is discerning truth from fiction in this ocean of clickbait that is powered by the advertisement industry and I think the Atlas robot from Boston Dynamics is a very clear example of what I mean so I think if you", "start_timestamp": "00:13:23", "end_timestamp": "00:13:52", "start_second": 803, "end_second": 832, "url": "https://www.youtube.com/watch?v=JgvyzIkgxF0&t=803s", "title": "An introduction to Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/JgvyzIkgxF0/maxresdefault.jpg"} {"video_id": "JgvyzIkgxF0", "text": "go out on the streets and you ask a thousand people what the most
advanced robots today are well they would probably point to Atlas from Boston Dynamics because everybody has seen the video where it does a backflip but the reality is that if you think about what Boston Dynamics is actually doing well it's very likely that there's not a lot of deep learning going on there if you look at their previous papers and their research track record well they're doing a lot of very advanced robotics don't get me wrong but", "start_timestamp": "00:13:52", "end_timestamp": "00:14:18", "start_second": 832, "end_second": 858, "url": "https://www.youtube.com/watch?v=JgvyzIkgxF0&t=832s", "title": "An introduction to Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/JgvyzIkgxF0/maxresdefault.jpg"} {"video_id": "JgvyzIkgxF0", "text": "there's not a lot of self-driven behavior there's not a lot of intelligent decision-making going on in those robots so don't get me wrong Boston Dynamics is a very impressive robotics company but the media images they've created might be a little bit confusing to a lot of people that don't know what's going on behind the scenes but nonetheless if you look at the progress of research that is going on I think we should not be negligent of the potential risks that these technologies can bring so I think it's very good that", "start_timestamp": "00:14:18", "end_timestamp": "00:14:45", "start_second": 858, "end_second": 885, "url": "https://www.youtube.com/watch?v=JgvyzIkgxF0&t=858s", "title": "An introduction to Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/JgvyzIkgxF0/maxresdefault.jpg"} {"video_id": "JgvyzIkgxF0", "text": "a lot more people are getting involved in the whole AI safety research because this is going to become very fundamental threats like autonomous weapons and mass surveillance are to be taken very seriously and so the only hope we have is that international law is going to be somewhat able to keep up with the rapid progress we see in technology but on the other
hand I also feel like the media is focusing way too much on the negative side of these technologies simply because people fear what they don't understand and well fear", "start_timestamp": "00:14:45", "end_timestamp": "00:15:13", "start_second": 885, "end_second": 913, "url": "https://www.youtube.com/watch?v=JgvyzIkgxF0&t=885s", "title": "An introduction to Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/JgvyzIkgxF0/maxresdefault.jpg"} {"video_id": "JgvyzIkgxF0", "text": "sells more advertisement than utopias so I personally believe that most if not all technological progress is beneficial in the long run as long as we can make sure that there are no monopolies that can maintain or enforce their power with the malignant use of AI well anyway enough politics for one video so this video was an introduction into deep reinforcement learning and an overview of the most challenging problems that we're facing in the field in the next video I will dive into some of the most recent approaches that try to tackle these", "start_timestamp": "00:15:13", "end_timestamp": "00:15:42", "start_second": 913, "end_second": 942, "url": "https://www.youtube.com/watch?v=JgvyzIkgxF0&t=913s", "title": "An introduction to Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/JgvyzIkgxF0/maxresdefault.jpg"} {"video_id": "JgvyzIkgxF0", "text": "problems of sample efficiency and the sparse reward setting so specifically I will cover a few technical papers dealing with approaches like auxiliary reward settings intrinsic curiosity hindsight experience replay and so on I've also seen that a few people have chosen to support me on patreon for which I would just like to say thank you very much I mean it really means a big deal to me I'm doing these videos completely in my spare time and knowing that there's people out there that appreciate this content really feels", "start_timestamp": "00:15:42", "end_timestamp": "00:16:10", "start_second": 942, "end_second": 970, "url":
"https://www.youtube.com/watch?v=JgvyzIkgxF0&t=942s", "title": "An introduction to Reinforcement Learning", "thumbnail": "https://i.ytimg.com/vi/JgvyzIkgxF0/maxresdefault.jpg"} {"video_id": "Nj2YSLPn6OY", "text": "let me introduce you to Mrs. Mani she came to the emergency room she's fifty-two years old she came to the emergency room with a foot sore doctors investigated her foot sore and she ended up staying there in the hospital for 22 days here's what happened when she came to the emergency room for a foot sore they inspected her they saw no real reason for medical concern but they wanted to monitor in case her foot sore was infected so they put her in the general ward on day three she starts developing symptoms of what looks like", "start_timestamp": "00:00:00", "end_timestamp": "00:00:48", "start_second": 0, "end_second": 48, "url": "https://www.youtube.com/watch?v=Nj2YSLPn6OY&t=0s", "title": "Better Medicine Through Machine Learning | Suchi Saria | TEDxBoston", "thumbnail": "https://i.ytimg.com/vi/Nj2YSLPn6OY/maxresdefault.jpg"} {"video_id": "Nj2YSLPn6OY", "text": "mild pneumonia they give her the usual treatment of antibiotics and all's good but then her condition starts to worsen on day six she develops what's called tachycardia that means in medical speak her heart rhythm has accelerated dramatically she then has trouble breathing on day seven she experiences septic shock that means her body is in crisis incidentally mortality in shock is one in two now it's only at this point that the doctors get really concerned and they transfer her to the intensive care unit ICUs are the units where the most", "start_timestamp": "00:00:48", "end_timestamp": "00:01:34", "start_second": 48, "end_second": 94, "url": "https://www.youtube.com/watch?v=Nj2YSLPn6OY&t=48s", "title": "Better Medicine Through Machine Learning | Suchi Saria | TEDxBoston", "thumbnail": "https://i.ytimg.com/vi/Nj2YSLPn6OY/maxresdefault.jpg"} {"video_id": "Nj2YSLPn6OY", "text":
"critically ill patients get cared for here they give her every possible treatment to stabilize her but her condition only worsens first her kidneys start to fail then her lungs fail and on day 22 she dies Mrs. Mani did receive the right set of treatments the problem is she received them only too late what Mrs. Mani experienced was an infection that turned into sepsis let me tell you a little bit about what sepsis is sepsis occurs when infection releases chemicals in your blood to tackle the infection so your body releases", "start_timestamp": "00:01:34", "end_timestamp": "00:02:17", "start_second": 94, "end_second": 137, "url": "https://www.youtube.com/watch?v=Nj2YSLPn6OY&t=94s", "title": "Better Medicine Through Machine Learning | Suchi Saria | TEDxBoston", "thumbnail": "https://i.ytimg.com/vi/Nj2YSLPn6OY/maxresdefault.jpg"} {"video_id": "Nj2YSLPn6OY", "text": "chemicals to fight the infection now these chemicals can trigger a negative inflammatory response and when this inflammation triggers this negative inflammatory response what it can then do is cause a cascade of changes leading your organs to fail leading to death sepsis is the 11th leading cause of death more than breast cancer and prostate cancer combined turns out sepsis is preventable if treated early okay so then what's the catch doctors find it very hard to recognize sepsis in fact a Harvard study shows", "start_timestamp": "00:02:17", "end_timestamp": "00:03:03", "start_second": 137, "end_second": 183, "url": "https://www.youtube.com/watch?v=Nj2YSLPn6OY&t=137s", "title": "Better Medicine Through Machine Learning | Suchi Saria | TEDxBoston", "thumbnail": "https://i.ytimg.com/vi/Nj2YSLPn6OY/maxresdefault.jpg"} {"video_id": "Nj2YSLPn6OY", "text": "with 93 leading academic experts that when they were given several cases of patients with and without sepsis they couldn't agree two years ago my nephew he was admitted to the best Hospital in India and he died of sepsis my family was
devastated I'm a machine learning expert and what I do is study ways in which we can use large messy datasets to enable intelligent decision-making so the natural question for me was could machine learning help could machine learning have helped Mrs. Mani and my nephew so this led to a massive effort", "start_timestamp": "00:03:03", "end_timestamp": "00:03:43", "start_second": 183, "end_second": 223, "url": "https://www.youtube.com/watch?v=Nj2YSLPn6OY&t=183s", "title": "Better Medicine Through Machine Learning | Suchi Saria | TEDxBoston", "thumbnail": "https://i.ytimg.com/vi/Nj2YSLPn6OY/maxresdefault.jpg"} {"video_id": "Nj2YSLPn6OY", "text": "with my colleagues at Hopkins to design what we call the targeted real-time early warning system or TREWS based on machine learning I'll give you a sneak peek into what TREWS is and how we're using it to tackle sepsis let me take a step back and tell you a little bit about what machine learning is and what's AI artificial intelligence is a field of study where we design ways to teach computers how to learn okay just like you teach your kids machine learning is one way of doing this by designing code or programs that teach computers stuff", "start_timestamp": "00:03:43", "end_timestamp": "00:04:22", "start_second": 223, "end_second": 262, "url": "https://www.youtube.com/watch?v=Nj2YSLPn6OY&t=223s", "title": "Better Medicine Through Machine Learning | Suchi Saria | TEDxBoston", "thumbnail": "https://i.ytimg.com/vi/Nj2YSLPn6OY/maxresdefault.jpg"} {"video_id": "Nj2YSLPn6OY", "text": "over time by interacting with the environment or watching okay so I'm going to show you a video of some robots learning how to walk I find it funny how it shudders so you're probably now thinking this is hopeless well so the question is how can we teach robots or machines how to walk intuitively you can think of it as designing a game the goal of the game is for the computer or the robot to learn how to walk for as long as possible without falling ok
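The "game" framing in this talk — score every move, let the learner keep whatever raises its total score — can be sketched as a toy random-search loop in Python. This is an illustration only, not the controller behind the walking videos; the score function and the (height, step) move representation are invented for the example:

```python
import random

def score_move(torso_height, forward_step):
    """Toy score: reward staying upright (height near 1.0) and moving forward."""
    upright_bonus = 1.0 - abs(1.0 - torso_height)  # best when height == 1.0
    return upright_bonus + forward_step

def run_episode(policy):
    """Total score of a fixed sequence of (height, step) moves."""
    return sum(score_move(h, s) for h, s in policy)

# Strategy 1 from the talk: guess a move sequence, keep what scores well.
random.seed(0)
best_policy = [(random.uniform(0.5, 1.5), random.uniform(-0.1, 0.1))
               for _ in range(20)]
best_score = run_episode(best_policy)
for _ in range(200):
    # perturb the current best sequence of moves
    candidate = [(h + random.gauss(0, 0.05), s + random.gauss(0, 0.02))
                 for h, s in best_policy]
    if run_episode(candidate) > best_score:  # positive feedback: build on it
        best_policy = candidate
        best_score = run_episode(candidate)
print(round(best_score, 2))
```

The second strategy the talk mentions — watching data from similar past robots — would replace the random perturbation with moves replayed from a dataset, but the keep-what-scores-well loop stays the same.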
so to do this first we have to write down the goal in", "start_timestamp": "00:04:22", "end_timestamp": "00:05:20", "start_second": 262, "end_second": 320, "url": "https://www.youtube.com/watch?v=Nj2YSLPn6OY&t=262s", "title": "Better Medicine Through Machine Learning | Suchi Saria | TEDxBoston", "thumbnail": "https://i.ytimg.com/vi/Nj2YSLPn6OY/maxresdefault.jpg"} {"video_id": "Nj2YSLPn6OY", "text": "a language the computer understands for this we'll use math okay so now you're wondering well how do we write the goal of walking without falling as long as possible in math well that's often hard for different tasks but you can think of it as writing down a formula and what this formula does is it scores so in the case of walking it'll score every move the robot makes if the move it makes helps the robot walk it gets a high score if the move the robot makes leaves the robot unstable it gets a low score and now the robot's goal", "start_timestamp": "00:05:20", "end_timestamp": "00:05:59", "start_second": 320, "end_second": 359, "url": "https://www.youtube.com/watch?v=Nj2YSLPn6OY&t=320s", "title": "Better Medicine Through Machine Learning | Suchi Saria | TEDxBoston", "thumbnail": "https://i.ytimg.com/vi/Nj2YSLPn6OY/maxresdefault.jpg"} {"video_id": "Nj2YSLPn6OY", "text": "is to experiment with the sequence of moves in order to be able to maximize its score so how does it know which moves to try right well there are two strategies for doing it first it learns by interacting with the environment okay so here the robot will just make a guess it makes a move if the move gets a high score that's positive feedback and the robot builds on it okay the second strategy is by watching other robots in other words the robot finds data from past robots that are similar to this robot it watches what", "start_timestamp": "00:05:59", "end_timestamp": "00:06:39", "start_second": 359, "end_second": 399, "url":
"https://www.youtube.com/watch?v=Nj2YSLPn6OY&t=359s", "title": "Better Medicine Through Machine Learning | Suchi Saria | TEDxBoston", "thumbnail": "https://i.ytimg.com/vi/Nj2YSLPn6OY/maxresdefault.jpg"} {"video_id": "Nj2YSLPn6OY", "text": "moves that robot did when it was in very similar positions and now it emulates or replicates those moves okay so those are the two strategies so I'm going to show you a video of a robot learning how to walk using the strategy I just described okay so in the beginning it's going to look hopeless but I promise you it gets better and just to be clear this is not so this is the skeleton of the robot and so this is not a human animator going there and just moving or animating this video this is really the robot the algorithm", "start_timestamp": "00:06:39", "end_timestamp": "00:07:14", "start_second": 399, "end_second": 434, "url": "https://www.youtube.com/watch?v=Nj2YSLPn6OY&t=399s", "title": "Better Medicine Through Machine Learning | Suchi Saria | TEDxBoston", "thumbnail": "https://i.ytimg.com/vi/Nj2YSLPn6OY/maxresdefault.jpg"} {"video_id": "Nj2YSLPn6OY", "text": "choosing which moves to make by moving the joints of the skeleton that you're seeing and you can see it's already getting better now suddenly the robot is able to walk and run for a lot longer than it was doing right so essentially the basic principle is as follows you figure out a game that the computer can play you write it down using a language it understands and then we train it to optimise the score right this is how we teach cars how to drive computers how to play the game of go and Alexa to understand say your preference of", "start_timestamp": "00:07:14", "end_timestamp": "00:07:52", "start_second": 434, "end_second": 472, "url": "https://www.youtube.com/watch?v=Nj2YSLPn6OY&t=434s", "title": "Better Medicine Through Machine Learning | Suchi Saria | TEDxBoston", "thumbnail": "https://i.ytimg.com/vi/Nj2YSLPn6OY/maxresdefault.jpg"} {"video_id": "Nj2YSLPn6OY", "text":
"coconut water so let's go back to our case the problem of sepsis so the goal here is to identify sepsis as quickly as possible right and for this TREWS learns by watching in other words using data from past patients this avoids the need for TREWS to have to experiment on new patients right so to do that what are the pieces TREWS needs to do so one big change that has happened in medicine that's interesting to note is in the past five years the introduction of electronic health records in EHRs every single measurement", "start_timestamp": "00:07:52", "end_timestamp": "00:08:31", "start_second": 472, "end_second": 511, "url": "https://www.youtube.com/watch?v=Nj2YSLPn6OY&t=472s", "title": "Better Medicine Through Machine Learning | Suchi Saria | TEDxBoston", "thumbnail": "https://i.ytimg.com/vi/Nj2YSLPn6OY/maxresdefault.jpg"} {"video_id": "Nj2YSLPn6OY", "text": "every single lab test that is ever done when you walk into the clinic or you're in the hospital gets collected TREWS analyzes this data from thousands of patients to identify subtle signs and symptoms that appear more often in patients with sepsis than those without okay but that's not all what TREWS also needs to do is to figure out how to think about every signal in the context of every other signal let me give you an example let's look at the example of creatinine so creatinine is a waste molecule okay and your kidneys filter it out okay", "start_timestamp": "00:08:31", "end_timestamp": "00:09:10", "start_second": 511, "end_second": 550, "url": "https://www.youtube.com/watch?v=Nj2YSLPn6OY&t=511s", "title": "Better Medicine Through Machine Learning | Suchi Saria | TEDxBoston", "thumbnail": "https://i.ytimg.com/vi/Nj2YSLPn6OY/maxresdefault.jpg"} {"video_id": "Nj2YSLPn6OY", "text": "but here's the catch so when your body is septic it affects your kidneys it deteriorates your kidneys ability to filter out creatinine so creatinine level rises but there are many other things that can affect your kidneys
ability to filter out creatinine for example if you have chronic kidney disease you're very likely to have high creatinine levels so now what TREWS has to do is to figure out is your creatinine high because of sepsis or because of chronic kidney disease or the numerous other factors that lead to high", "start_timestamp": "00:09:10", "end_timestamp": "00:09:45", "start_second": 550, "end_second": 585, "url": "https://www.youtube.com/watch?v=Nj2YSLPn6OY&t=550s", "title": "Better Medicine Through Machine Learning | Suchi Saria | TEDxBoston", "thumbnail": "https://i.ytimg.com/vi/Nj2YSLPn6OY/maxresdefault.jpg"} {"video_id": "Nj2YSLPn6OY", "text": "creatinine levels but that's not enough it needs to do this for every single signal that exists in the electronic health record and TREWS thinks about every signal in the context of every other signal to identify signs and symptoms that occur more often in patients with sepsis than those without let's return to Mrs. Mani research by Kumar and colleagues has shown that for every hour treatment is delayed mortality goes up by seven to eight percent so timing is critical we went and took Mrs. Mani's data and we ran", "start_timestamp": "00:09:45", "end_timestamp": "00:10:24", "start_second": 585, "end_second": 624, "url": "https://www.youtube.com/watch?v=Nj2YSLPn6OY&t=585s", "title": "Better Medicine Through Machine Learning | Suchi Saria | TEDxBoston", "thumbnail": "https://i.ytimg.com/vi/Nj2YSLPn6OY/maxresdefault.jpg"} {"video_id": "Nj2YSLPn6OY", "text": "TREWS on it and here's what we found TREWS would have detected Mrs.
Mani's sepsis 12 hours before the doctors did as my clinical colleagues would say that is the difference between life and death last year we showed using data from 16,000 patients that TREWS would have detected sepsis in most patients on average more than 24 hours prior to the shock onset and that's not all in two-thirds of these patients their sepsis was detected prior to any organ dysfunction whatsoever and to put this result in context that's 60%", "start_timestamp": "00:10:24", "end_timestamp": "00:11:11", "start_second": 624, "end_second": 671, "url": "https://www.youtube.com/watch?v=Nj2YSLPn6OY&t=624s", "title": "Better Medicine Through Machine Learning | Suchi Saria | TEDxBoston", "thumbnail": "https://i.ytimg.com/vi/Nj2YSLPn6OY/maxresdefault.jpg"} {"video_id": "Nj2YSLPn6OY", "text": "increase in performance over state of the art so what TREWS is really doing is giving doctors a much longer window to come in and intervene in order to prevent organ dysfunction and mortality this year we independently validated TREWS in data from Howard County General Hospital in Maryland and now we're working to do real-time integration in order to make something like TREWS available to every doctor at Hopkins I'm also really excited because after we've published our papers several other health systems are now already", "start_timestamp": "00:11:11", "end_timestamp": "00:11:46", "start_second": 671, "end_second": 706, "url": "https://www.youtube.com/watch?v=Nj2YSLPn6OY&t=671s", "title": "Better Medicine Through Machine Learning | Suchi Saria | TEDxBoston", "thumbnail": "https://i.ytimg.com/vi/Nj2YSLPn6OY/maxresdefault.jpg"} {"video_id": "Nj2YSLPn6OY", "text": "implementing the published version of TREWS in order to be able to develop it in their own environment so I'm going to highlight like a few perhaps three salient characteristics that I think make a strategy like TREWS very powerful ok first TREWS runs 24/7 what it does is it gives doctors a second pair
of reliable eyes right two it's hard to scale up doctors it's easier I think much easier to scale up computers and what TREWS is really doing is allowing us to get expertise from the best doctors everywhere here's the third one", "start_timestamp": "00:11:46", "end_timestamp": "00:12:32", "start_second": 706, "end_second": 752, "url": "https://www.youtube.com/watch?v=Nj2YSLPn6OY&t=706s", "title": "Better Medicine Through Machine Learning | Suchi Saria | TEDxBoston", "thumbnail": "https://i.ytimg.com/vi/Nj2YSLPn6OY/maxresdefault.jpg"} {"video_id": "Nj2YSLPn6OY", "text": "which I think is very interesting in many cases like we see in sepsis we might not need new measurements the signs and symptoms were already in your data and what TREWS is really doing is discovering these signs and symptoms to learn something that we couldn't see by eye finally there's been a lot of buzz about big data and I want to make a little subtle point about a technical problem that I think TREWS is solving that is very interesting TREWS would be able to learn much faster if it had a lot of data on you or it could get", "start_timestamp": "00:12:32", "end_timestamp": "00:13:12", "start_second": 752, "end_second": 792, "url": "https://www.youtube.com/watch?v=Nj2YSLPn6OY&t=752s", "title": "Better Medicine Through Machine Learning | Suchi Saria | TEDxBoston", "thumbnail": "https://i.ytimg.com/vi/Nj2YSLPn6OY/maxresdefault.jpg"} {"video_id": "Nj2YSLPn6OY", "text": "more data by experimenting on you but we don't want that right so what TREWS really has to do is leverage your limited data to figure out what's right for you right so in other words what TREWS really has to solve is a challenging small data problem in other words it has limited data on you and has to figure out what is the right treatment for you and for that it leverages vast amounts of data from other patients and figures out what information to borrow in order to make these assessments reliably and",
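The "small data" problem described in this talk — limited data on you, vast data on other patients — can be pictured with a toy shrinkage estimate that blends an individual's few readings with a population average. This is only an illustration of the borrowing idea, not TREWS's actual model; the numbers and the `prior_strength` parameter are made up for the example:

```python
def shrunk_estimate(personal_values, population_mean, prior_strength=5.0):
    """Blend a patient's own readings with the population average.

    With little personal data the estimate leans on the population;
    as personal data accumulates, it leans on the individual.
    """
    n = len(personal_values)
    personal_mean = sum(personal_values) / n if n else population_mean
    weight = n / (n + prior_strength)  # grows toward 1 as n grows
    return weight * personal_mean + (1 - weight) * population_mean

population_creatinine = 1.0            # hypothetical population average (mg/dL)
few_readings = [1.8, 2.0]              # hypothetical patient with little data
many_readings = [1.8, 2.0] * 10        # same pattern, much more data

print(shrunk_estimate(few_readings, population_creatinine))
print(shrunk_estimate(many_readings, population_creatinine))
```

With two readings the estimate stays close to the population mean; with twenty it moves most of the way toward the patient's own average — the "figure out what information to borrow" behavior in miniature.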
"start_timestamp": "00:13:12", "end_timestamp": "00:13:49", "start_second": 792, "end_second": 829, "url": "https://www.youtube.com/watch?v=Nj2YSLPn6OY&t=792s", "title": "Better Medicine Through Machine Learning | Suchi Saria | TEDxBoston", "thumbnail": "https://i.ytimg.com/vi/Nj2YSLPn6OY/maxresdefault.jpg"} {"video_id": "Nj2YSLPn6OY", "text": "precisely so I also want to tell you a little bit about how the strategy is not unique to sepsis so very broadly if you think about it in many diseases essentially where you have a profile of symptoms and the response to treatments varies a great deal across individuals you can use a strategy like TREWS in order to target treatment so you're wondering like for example if you consider cancer diabetes multiple sclerosis Parkinson's lupus so there are many such diseases to which a strategy like TREWS is amenable in fact in our", "start_timestamp": "00:13:49", "end_timestamp": "00:14:28", "start_second": 829, "end_second": 868, "url": "https://www.youtube.com/watch?v=Nj2YSLPn6OY&t=829s", "title": "Better Medicine Through Machine Learning | Suchi Saria | TEDxBoston", "thumbnail": "https://i.ytimg.com/vi/Nj2YSLPn6OY/maxresdefault.jpg"} {"video_id": "Nj2YSLPn6OY", "text": "own lab with experts in rheumatic diseases or immune diseases in particular we're looking at how in scleroderma for instance we can use strategies similar to TREWS to avoid giving strong immunosuppressants to patients who don't need them other colleagues this is William Pelham Susan Murphy and their team they're studying kids with ADHD and looking at how using similar data-driven strategies they can identify when kids can benefit from behavioral therapy and we can avoid the need for giving them psychostimulants", "start_timestamp": "00:14:28", "end_timestamp": "00:15:05", "start_second": 868, "end_second": 905, "url": "https://www.youtube.com/watch?v=Nj2YSLPn6OY&t=868s", "title": "Better Medicine Through Machine Learning | Suchi Saria | TEDxBoston",
"thumbnail": "https://i.ytimg.com/vi/Nj2YSLPn6OY/maxresdefault.jpg"} {"video_id": "Nj2YSLPn6OY", "text": "altogether so the strategy is very very powerful so I was speaking about sepsis so let's go back to sepsis again so I said it was sepsis Awareness Month and the CDC has declared sepsis to be a medical emergency rightfully so remember 750,000 people annually are affected by sepsis a patient's family recently asked me what will it take to bring this to a hospital near us I think that can be done in fact it can even be done within a year but we don't want to stop there we want it to be possible to bring a strategy", "start_timestamp": "00:15:05", "end_timestamp": "00:15:43", "start_second": 905, "end_second": 943, "url": "https://www.youtube.com/watch?v=Nj2YSLPn6OY&t=905s", "title": "Better Medicine Through Machine Learning | Suchi Saria | TEDxBoston", "thumbnail": "https://i.ytimg.com/vi/Nj2YSLPn6OY/maxresdefault.jpg"} {"video_id": "Nj2YSLPn6OY", "text": "like TREWS to hospitals everywhere and so the question is to do that what will it take right so I think there are three key things we need your help for one we need super smart engineers to be working in healthcare we need your help in building and scaling up such technologies don't go to wall street healthcare needs you right we need policymakers to create incentives to open up electronic medical records as an expert at a leading health institution it's taken me more than a year because the EMR is so closed in order to be able to figure out how to", "start_timestamp": "00:15:43", "end_timestamp": "00:16:29", "start_second": 943, "end_second": 989, "url": "https://www.youtube.com/watch?v=Nj2YSLPn6OY&t=943s", "title": "Better Medicine Through Machine Learning | Suchi Saria | TEDxBoston", "thumbnail": "https://i.ytimg.com/vi/Nj2YSLPn6OY/maxresdefault.jpg"} {"video_id": "Nj2YSLPn6OY", "text": "implement TREWS against the EMR it really should be easier than this three we need a healthcare system that's based on
quality our current healthcare system is incentivized to optimize volume rather than quality right now you can choose which restaurants to go to based on the quality of food should you be able to choose the hospitals you go to based on quality of care part of the problem is that quality data at the moment is not very visible to consumers and we really need to make a bigger effort to make this quality data visible so", "start_timestamp": "00:16:29", "end_timestamp": "00:17:05", "start_second": 989, "end_second": 1025, "url": "https://www.youtube.com/watch?v=Nj2YSLPn6OY&t=989s", "title": "Better Medicine Through Machine Learning | Suchi Saria | TEDxBoston", "thumbnail": "https://i.ytimg.com/vi/Nj2YSLPn6OY/maxresdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "all right welcome everyone welcome to lecture 13 of CS 287 last week on Thursday there was no live lecture but we actually did cover the material so I did the recording at home and posted a video on Piazza so you can watch lecture 12 on video there will be no live version of that lecture but this way we can keep up with the course and not lose a lecture slot today lecture 13 we'll look at Kalman smoothers maximum a-posteriori estimation maximum likelihood and expectation maximization all right so this is our menu for today", "start_timestamp": "00:00:00", "end_timestamp": "00:00:44", "start_second": 0, "end_second": 44, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=0s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "let's start with smoothing what's the main idea behind smoothing let's go back to filtering which we've been doing in filtering what you have is you try to find a distribution over variable XT after you have observed some sensor measurements Z 0 through ZT and ideally over time you keep track of this when you go to the next time you again
find a distribution for XT plus 1 given all observations up to time T plus 1 and repeat now if you look at this it's very asymmetric you only use information from the past and often that's indeed the", "start_timestamp": "00:00:44", "end_timestamp": "00:01:20", "start_second": 44, "end_second": 80, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=44s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "best you can do because right now you need to know the state of the robot as well as possible so the best you can do is filtering in that situation but what if you're post-processing your data you already collected your data I want to know where was my robot back at time T we should also use the information that happened after and so that's called smoothing in smoothing you use all the sensor observations before and after to come to a conclusion about the distribution over state of your robot or your environment now in the figures", "start_timestamp": "00:01:20", "end_timestamp": "00:01:50", "start_second": 80, "end_second": 110, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=80s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "it's true here I ignored the actions and in principle it could be action everywhere for every time slice but it does not change the math in any way what does the action do the only thing the action does if we're doing estimation is that the conditional distribution of XT plus 1 given X T becomes the conditional of XT plus 1 given XT and the action but once the action is fixed because you have the whole sequence it's just indexing into a conditional distribution and putting it there indexed by the action and so let's assume that already happened we
"start_timestamp": "00:01:50", "end_timestamp": "00:02:22", "start_second": 110, "end_second": 142, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=110s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "already have a conditional distribution over XT given XT minus 1 and maybe it's indexed by the action maybe there was no actions it doesn't really matter the math is the same all right so now to think about smoothing and compare it to filtering let's work out the basics on the board so we'll do this by example rather than having a you know just very general like any kind of horizon problem we'll pick a very specific horizon just horizon 2 so we'll be interested in this [Music] we'll be interested in the probability", "start_timestamp": "00:02:22", "end_timestamp": "00:03:09", "start_second": 142, "end_second": 189, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=142s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "distribution over state at time two so that's our example in general it would be any time T but we're going to do it by example for time two given observations at time zero one and two now one thing we know is that this is proportional to the joint distribution over x2 and the observations Z 0 Z 1 Z 2 again what does this sign mean the proportional sign what it means is that we're looking at a distribution over x2 and it means that we can evaluate this quantity here for all values of x2 and once we have all the numbers for every possible value", "start_timestamp": "00:03:09", "end_timestamp": "00:03:58", "start_second": 189, "end_second": 238, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=189s", "title": "Lecture 13 Kalman Smoother,
MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "of x2 we can just sum those together divide by that sum and that will normalize it make this sum to 1 and get the actual conditional now the model we have is this HMM-like model so how do we write out this thing over here it's equal to the sum over variables we have ignored because we have a chain in how the probability distribution is set up and we left out x0 and x1 but as you write out the joint distribution they'll appear in it so we'll sum out over x0 and x1 and then we can write out the full chain rule for this HMM", "start_timestamp": "00:03:58", "end_timestamp": "00:04:38", "start_second": 238, "end_second": 278, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=238s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "there is z2 given x2 before that there was x2 given x1 then right before that Z 1 given x1 and then before that there was x1 given x0 and before that there was z 0 given x0 and then the distribution for X 0 and the graph corresponding to this is x0 x1 x2 and then observed Z 0 Z 1 Z 2 so we just wrote out the full joint over all six variables here but then we only care about these four so we sum out over X 0 and X 1 to get the distribution over just the 4 we care about now what do we do in", "start_timestamp": "00:04:38", "end_timestamp": "00:05:40", "start_second": 278, "end_second": 340, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=278s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "filtering well we can start looking at some
reorganization here X 0 where does it appear X 0 only appears at the end here so as we look at this summation we can actually move it we can say the X 0 summation can happen in the back here so sum over X 0 P X 1 given X 0 P Z 0 given X 0 P X 0 because as far as X 0 is concerned everything up front is a constant and bringing a constant up front is fine to do that's just saying multiplying every term with a constant or first summing it all together and then", "start_timestamp": "00:05:40", "end_timestamp": "00:06:22", "start_second": 340, "end_second": 382, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=340s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "multiplied with the constant is the same thing then how about X 1 X 1 does not appear in here so we can bring this up front so we can bring P Z 2 given X 2 up front and then we have the summation over X 1 and then this one does have X 1 in it and so that's this one and then this one after we sum out over X 0 we'll have X 1 and Z 0 in it Z 0 is a constant but X 1 will be in it so we have to keep it in the back behind this summation now let's give these some meaning and see how we would actually run the filtering algorithm", "start_timestamp": "00:06:22", "end_timestamp": "00:06:59", "start_second": 382, "end_second": 419, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=382s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "we'd say okay this thing here is actually the joint well let's write X first the joint between X 0 and Z 0 then we multiply with X 1 given X 0 and we sum out over X 0 we've seen this last week this will give us the joint between X 1 and Z 0 then we
multiply with Z 1 given X 1 so multiply with this conditional at this point we have P Z 1 comma X 1 comma Z 0 then we multiply with X 2 given X 1 so here we'd have P X 2 comma Z 1 comma X 1 comma Z 0 then we sum out over X 1 so it disappears after here we have P X 2", "start_timestamp": "00:06:59", "end_timestamp": "00:08:00", "start_second": 419, "end_second": 480, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=419s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "comma Z 2 comma sorry Z 1 comma Z 0 and then here we multiply Z 2 given X 2 so at this point we have P Z 2 comma X 2 comma Z 1 comma Z 0 which is indeed what we have over here and so what we see is that we can recursively compute the joint distribution between the latest state variable X 2 and all past observations and the calculation we do is pretty much the same every time we just multiply in an observation then we multiply in the dynamics model and sum over the past variable then we multiply in the next observation", "start_timestamp": "00:08:00", "end_timestamp": "00:08:48", "start_second": 480, "end_second": 528, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=480s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "multiply in the dynamics sum out over the past state variable and repeat so we have a general recursive approach to finding these things and let's see we're going to put this let me put it over here we have P XT plus 1 comma Z 0 through ZT is equal to sum over X T the dynamics model XT plus 1 given X T times P X T comma Z 0 through ZT and then to bring in that last observation ZT plus 1 which is ultimately what we want here for that update now we have Z 0 through Z T and ZT
plus is going to be multiplying this thing", "start_timestamp": "00:08:48", "end_timestamp": "00:09:45", "start_second": 528, "end_second": 585, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=528s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "with the condition of observation given everything else so P Z T plus 1 given X T plus 1 times what we have here P X T plus 1 comma Z 0 through CT now these are bit equations are very easy to run and that's all you need to do to run filtering that's a quick reminder when we go from the joint so here we have the joint when XT plus 1 is e 0 through Z T and here we have a baton ZT plus 1 in general we need to multiply with the conditional of ZT plus 1 given all the variables already present here not just this one variable need to multiply with", "start_timestamp": "00:09:45", "end_timestamp": "00:10:32", "start_second": 585, "end_second": 632, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=585s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "the new variable condition all variables you already have but in this model we have a conditional independence we don't need a condition on the past observations if we have the current state and so that's why we just have XT plus 1 here and not XT plus 1 and all observations to condition on because once we know XT plus 1 those past observations don't need to be conditioned on anymore they're independent of Z T plus 1 and that's shown in the graph structure here and we're looking at we're getting z2 given everything in the past is the same thing", "start_timestamp": "00:10:32", "end_timestamp": "00:11:04", "start_second": 632, "end_second": 664, "url": 
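The recursive update just described (multiply in the dynamics model, sum out the past state, multiply in the next observation) can be sketched in code; this is a minimal illustration on a hypothetical 2-state discrete model, with made-up numbers rather than anything from the lecture.

```python
import numpy as np

# made-up prior P(x0), dynamics P(x_{t+1} | x_t), observation model P(z_t | x_t)
P0 = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],    # rows index x_t, columns index x_{t+1}
              [0.2, 0.8]])
B = np.array([[0.9, 0.1],    # rows index x_t, columns index the observed symbol
              [0.3, 0.7]])

def forward_joints(z_seq):
    """Return P(x_t, z_0..z_t) for every t, via the recursion in the text."""
    a = P0 * B[:, z_seq[0]]               # P(x0, z0) = P(z0 | x0) P(x0)
    joints = [a]
    for z in z_seq[1:]:
        # multiply in the dynamics, sum out the past state, multiply in z
        a = B[:, z] * (A.T @ a)
        joints.append(a)
    return joints

print(forward_joints([0, 1, 1])[-1])      # P(x2, z0=0, z1=1, z2=1)
```

Normalizing the last vector gives the filtered posterior P(x_t given z_0..z_t); the unnormalized sum is the evidence probability P(z_0..z_t).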
"https://www.youtube.com/watch?v=qHLLMg0Teg4&t=632s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "as z2 just given x2 and that's exactly what's happening here why we only condition on the XT plus 1 so that's filtering and we cover them but we covered it now in a different way that last I'm slightly a different way so we can now match up smoothing with what we saw here in filtering I'm gonna have to use a slightly smaller font for this example actually what a mm-hmm how much smaller brand is in your memories I'll use the entire width of the board as needed to cover smoothing so you've got this and we'll use the full width", "start_timestamp": "00:11:04", "end_timestamp": "00:12:03", "start_second": 664, "end_second": 723, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=664s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "so smoothing now we care about let me try out here the context we will have x2 again and after the Y should be x3 and x4 and there will be observations Z for Z 3 Z 2 and I should all be things before it also x1 x0 and observation z1 c0 now we're just it in the distribution for x2 given all observations this is something you can do after the fact I have the fact analysis what happened to my system now to have all the information available so the quantity we're after is P x2 given Z 0 Z 1 Z 2 Z 3 Z 4 now again we know this is", "start_timestamp": "00:12:03", "end_timestamp": "00:13:10", "start_second": 723, "end_second": 790, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=723s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": 
"https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "proportional to the joint between all these variables x2 comma Z 0 Z 1 Z 2 Z 3 Z 4 now we don't directly have this kind of joint available what we have available here is something of the form that involves all of these variables so all the conditionals multiplied together in that graph over there is what we have available and then some variables we're not going to care about X 0 X 1 X 3 and X 4 we don't care about we will write out the full joint over all 10 variables and then sum out the four variables we don't care about so what's the fool", "start_timestamp": "00:13:10", "end_timestamp": "00:13:54", "start_second": 790, "end_second": 834, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=790s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "joint the fool joint would be Z 4 given X 4 that's happening all the way at the end then X for given X 3 then Z 3 given X 3 x3 given X to Z 2 given X 2 and it will continue on this other board X 2 given X 1 the C 1 given X 1 X 1 given X 0 z 0 given X 0 and then the prior P X 0 this is the full joint now for this full joint we know we don't care about X 0 X 1 X 3 and X 4 so we're going to sum them out to get rid of them X 0 X 1 X 3 X 4 we sum out over to get this quantity over here alright now we're going to play the same trick it's during when we", "start_timestamp": "00:13:54", "end_timestamp": "00:15:10", "start_second": 834, "end_second": 910, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=834s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "sum out over these variables is their way to rearrange this into smaller 
calculations we're going to first multiply everything together and then finally get to some out but we can do smaller bite-sized calculations where we sum some variables out make them disappear and that way not have this exponential blow-up as the thing becomes bigger if you do it naively this way this summation that the variables are binary let's say then there will be two to the horizon number of terms in this summation but by bringing them in in the", "start_timestamp": "00:15:10", "end_timestamp": "00:15:38", "start_second": 910, "end_second": 938, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=910s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "right spots you reduces and you get a linear competition in the length of the network rather than an exponential calculation so can we do the same thing here well let's see how about X 0 actually we can play the same trick as before X 0 instead of summing over it over here where does it appear only in the back here so let's just insert that summation over here how about X 1 all of these are constants as far as X 1 is concerned so we can put the summation over X 1 over here and get rid of it over here then as we think of this thing here so we", "start_timestamp": "00:15:38", "end_timestamp": "00:16:25", "start_second": 938, "end_second": 985, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=938s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "have this calculation happening as a first calculation and then after that we can do this calculation which we know from filtering this is giving us the quantities we saw in filtering because everything happening here so we did in filtering we know what those 
quantities are going to be then how about these here what can we reorganize here to make the summations kind of move in well it's going to be similar to what we have happening here x0 is the furthest out in the chain and so it gets on the most inner side similarly", "start_timestamp": "00:16:25", "end_timestamp": "00:17:04", "start_second": 985, "end_second": 1024, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=985s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "it will be true for x4 that's the furthest away so we're actually going to be able to first look at X 4 and say we're just going to sum over X 4 let's forget about X 3 for now we'll need to squeeze it in there but we'll sum over X 4 so we're left with a summation over X 4 and we can actually just do this part there's no X 4 anywhere else the result of that will be something that does involve X 3 because as you sum out X 4 the Z's are constants but there's still an X 3 in here so we need to put on the outside still a summation over X", "start_timestamp": "00:17:04", "end_timestamp": "00:17:40", "start_second": 1024, "end_second": 1060, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1024s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "3 it's still in there and multiply in every appearance of X 3 this one this one and so we have this thing over here now let's look at the quantities we have after we group things this way so let's start again over here what do we get if you look at this quantity over here we looked at it in filtering this is computing P x1 comma Z 0 comma Z 1 a simple recursive calculation then we look at this one over here this quantity here we have not seen it in", "start_timestamp": "00:17:40", "end_timestamp": "00:18:36", "start_second": 1060, "end_second": 1116, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1060s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "filtering but we can interpret it what is Z 4 given X 4 times X 4 given X 3 this is really", "start_timestamp": "00:18:36", "end_timestamp": "00:19:44", "start_second": 1116, "end_second": 1184, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1116s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "then Z 4 comma X 4 given X 3 it's what we have over here then we sum over X 4 so we sum out X 4 and what we end up with here is P Z 4 given X 3 with X 4 summed out all right now as we keep processing this and this we will in the future call a backward message b of X 3 X 3 is the variable then once we multiply in Z 3 given X 3 what do we get so multiplying this one Z 3 given X 3 would give us P Z 3 comma Z 4 given X 3 if we go up to here then we multiply in X 3 given X 2 which will make it", "start_timestamp": "00:18:36", "end_timestamp": "00:19:44", "start_second": 1116, "end_second": 1184, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1116s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "conditioned on X 2 and brings X 3 to the front P X 3 comma Z 3 comma Z 4 given X 2 then we sum out X 3 and we're left with P z3 z4 given x2 so from one side of the chain we get the probability of the evidence that comes after x2 given x2 that's living here over here we have the joint between X 1 Z 0 Z 1 but actually we can multiply this thing and sum out over X 1 so we'll get P x2 comma Z 0 Z 1 so you have the joint of x2 with the past evidence then we bring in the current evidence at time 2 and we know the conditional distribution for the later", "start_timestamp": "00:19:44", "end_timestamp": "00:20:44", "start_second": 1184,
"end_second": 1244, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1184s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "evidence given x2 so if you multiply all three of these together we get exactly this quantity over here which is the joint between x2 and all the evidence and so the thing to observe here is that evidence that comes from the past is just a standard forward filter being run that's shown over here evidence coming from the future is some kind of backward filter running that does these updates here that work from the back of the chain back to x2 and gives you a conditional of all future evidence given x2 and of course you need the evidence", "start_timestamp": "00:20:44", "end_timestamp": "00:21:20", "start_second": 1244, "end_second": 1280, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1244s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "at the actual time 2 to also incorporate it to get the full evidence alright so in terms of the math we did here ultimately all it is is writing out the full joint distribution and moving around the summations and discovering the structure of how we can do calculations from the front and from the back to bring things in to the time we're at in a way that is not exponential in the number of variables we're considering it's linear every calculation is simple we do one simple calculation per time slice to work our way to time 2 all", "start_timestamp": "00:21:20", "end_timestamp": "00:21:57", "start_second": 1280, "end_second": 1317, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1280s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail":
"https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "right any questions about this then let me project in typeset lay tag the equations we just derived by example on the board but we're going to look at see how these on the slides this is what we did we did this the whole smoothing thing this is the full filter equations at the bottom going to magnify them there's a backward and a forward we can combine them to get the local so here's the full thing we can run a filter forward and we'll call those things a messages in some sense indexed by time so very simple set of update equations", "start_timestamp": "00:21:57", "end_timestamp": "00:22:51", "start_second": 1317, "end_second": 1371, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1317s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "incorporate the dynamics model and the next observation and repeat then backward pass does something very similar but works its way from the back you initialize with just uniform because you have nothing really to go out there's no prior at the end you have just nothing to start from and then you start bringing in evidencing is bringing the dynamics that got you there so the dynamics into the time step you were working from but otherwise it looks exactly the same and once you have those you can combine them to get the distribution for the", "start_timestamp": "00:22:51", "end_timestamp": "00:23:25", "start_second": 1371, "end_second": 1405, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1371s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "variable XT jointly with all evidence at all times now one thing that might come up in 
practice is that as you run this the way it's shown here even though mathematically is the simplest way you might run into numerical problems because actually compute a joint over more and more variables the actual probability value will keep going down and down and you might get under flow where you get numbers that are below numbers you can represent in floating point so in practice even though the math is kind of simple and cleanest when", "start_timestamp": "00:23:25", "end_timestamp": "00:23:58", "start_second": 1405, "end_second": 1438, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1405s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "working with the joint often you would renormalize as you work along so just say okay I have maybe currently my this thing is really a joint with all past evidence but I can also just renormalize it and forget about it being a joint and just say hey I'm just going to renormalize and know that it's now a conditional instead of a joint what do you lose you lose that probability you don't know anymore the probability of all the evidence you just have a conditional now for x2 or whatever time slice it is given everything else if you do care", "start_timestamp": "00:23:58", "end_timestamp": "00:24:29", "start_second": 1438, "end_second": 1469, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1438s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "about the actual value because you want to say was this a likely or unlikely run that I just saw happen then you can keep track of these in log space you can just keep track of the log probabilities instead of the actual probabilities and that way avoid the underflow alright 
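The renormalize-as-you-go idea can be sketched as follows: normalize the filter vector at each step and accumulate the log of the normalizers, so the evidence probability is kept in log space instead of underflowing. The 2-state model below is made up for illustration.

```python
import numpy as np

A = np.array([[0.7, 0.3], [0.2, 0.8]])   # made-up dynamics P(x_{t+1} | x_t)
B = np.array([[0.9, 0.1], [0.3, 0.7]])   # made-up observation model P(z | x)
P0 = np.array([0.6, 0.4])
z = np.random.default_rng(0).integers(0, 2, size=20000)

a = P0 * B[:, z[0]]
log_evidence = np.log(a.sum())           # log P(z0)
a = a / a.sum()                          # keep a conditional, not a joint
for obs in z[1:]:
    a = B[:, obs] * (A.T @ a)
    s = a.sum()                          # P(z_t | z_0..z_{t-1})
    log_evidence += np.log(s)            # accumulate the joint in log space
    a = a / s                            # renormalize each step
print(log_evidence)                      # finite, though the raw joint underflows
```

Running the same recursion on the unnormalized joint for 20000 steps would produce a number far below the smallest representable double, which is exactly the failure mode described above.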
so the last thing to do if you do it this way is just a normalization but again as I said for numerical reasons often you'll be doing the normalization as you work along to make sure things stay in the range that your floating point computation is happy", "start_timestamp": "00:24:29", "end_timestamp": "00:25:04", "start_second": 1469, "end_second": 1504, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1469s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "with now we can do other things with the same ideas we can use them to find pairwise posteriors for example the posterior between XT and XT plus 1 jointly with all the evidence from what we derived on the board it should be clear what's going to happen here XT and XT plus 1 sitting next to each other we're going to work our way from the front and the back towards them and we're gonna stop right before each one of them coming from each side and then multiply in the middle the conditional XT plus 1 given XT and the evidences for that time slice", "start_timestamp": "00:25:04", "end_timestamp": "00:25:34", "start_second": 1504, "end_second": 1534, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1504s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "in fact one way you could mathematically think of it is to just make XT and XT plus 1 one variable as if it's one variable we ignore that it's really two variables then exactly the same calculation can be done and then when you unpack this one variable you'll see that you'll have to introduce an XT plus 1 given XT into it because you're unpacking the details otherwise it's just the same thing to compute these forward and
backward messages and then when you are hitting the middle point you essentially just", "start_timestamp": "00:25:34", "end_timestamp": "00:26:01", "start_second": 1534, "end_second": 1561, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1534s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "have the backward coming into XT plus 1 the forward coming into XT you're multiplying in the conditional and the one observation conditional that you hadn't incorporated yet now you might wonder why we would care about the pairwise posterior it shall become clear later in this lecture for now you might say well why would I ever care but we'll clarify that alright so these slides are just the same as what we did on the board now you might take this to the next level it's a little harder to do you can do it as an exercise can I find the joint between XT", "start_timestamp": "00:26:01", "end_timestamp": "00:26:39", "start_second": 1561, "end_second": 1599, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1561s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "and XT plus K which is not a neighboring variable with all the evidence you can imagine that there will definitely again be messages coming from left and right but you'll have to do a little bit of thinking about what you do with the stuff in the middle you still have to sum over those variables in a way but not lose XT as you work your way to XT plus K because otherwise you don't recover the joint you need to keep XT around somehow so you'll essentially do something like we did but rather than summing out over XT you just keep it around and you skip the summing", "start_timestamp": "00:26:39", "end_timestamp": "00:27:09", "start_second": 1599,
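The pairwise posterior just described can be sketched the same way: the forward message into XT, the backward message into XT plus 1, the dynamics conditional, and the one observation conditional not yet incorporated. Here is a check on a hypothetical 4-step 2-state chain, for the pair (x1, x2); the model numbers are made up.

```python
import itertools
import numpy as np

P0 = np.array([0.5, 0.5])                     # made-up 2-state model, chain x0..x3
A = np.array([[0.8, 0.2], [0.3, 0.7]])        # P(x_{t+1} | x_t)
B = np.array([[0.9, 0.1], [0.2, 0.8]])        # P(z_t | x_t)
z = [0, 1, 0, 1]

fwd = B[:, z[1]] * (A.T @ (P0 * B[:, z[0]]))  # forward into x1: P(x1, z0, z1)
bwd = A @ B[:, z[3]]                          # backward into x2: P(z3 | x2)
# stop on each side, then multiply in the dynamics and the one remaining observation
pair = fwd[:, None] * A * B[:, z[2]][None, :] * bwd[None, :]

brute = np.zeros((2, 2))                      # brute-force P(x1, x2, z0..z3)
for xs in itertools.product(range(2), repeat=4):
    p = P0[xs[0]]
    for t in range(4):
        p *= B[xs[t], z[t]]
        if t < 3:
            p *= A[xs[t], xs[t + 1]]
    brute[xs[1], xs[2]] += p
print(np.allclose(pair, brute))               # the two agree
```

Normalizing `pair` gives the pairwise posterior P(x1, x2 given z0..z3), the quantity whose use is promised later in the lecture.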
"end_second": 1629, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1599s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "out over X T and keep working your way forward all the way till XT plus K and the XT will just still be in the air because she never some doubt over it what is the common smoother the common smoother is exactly what we just covered applied to the situation where these probability distributions are conditional gaussians where conditional of XT plus 1 given X T is a linear Gaussian and a conditional of ZT given XT is a linear Gaussian and then their concrete they're not just like these abstract distributions but in that", "start_timestamp": "00:27:09", "end_timestamp": "00:27:44", "start_second": 1629, "end_second": 1664, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1629s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "specific case you get the common smoother so you find that the math for common smoother is very similar to come on filter which agnostic covered will be very similar equations happening that are really just matrix updates you don't need to do any explicit integration any kind of weird integrals that are hard to compute no these closed forms you just manipulate matrices and you'll find updates for your covariance matrix and for your mean and this case will come from both sides and then I'll come together and give you the smooth", "start_timestamp": "00:27:44", "end_timestamp": "00:28:14", "start_second": 1664, "end_second": 1694, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1664s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": 
"https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "estimate based on evidence from both sides so well you can do it as an exercise see if you can work through that if you want to check if you really understood the derivations that we're done in the previous lecture for the common filter you could see if you can find the derivations for the backward pass the forward pass will stay the same the backward pass will be the new thing can you find what that looks like and if you can do that then means really understood how this works now we can also look at the results the imagine", "start_timestamp": "00:28:14", "end_timestamp": "00:28:46", "start_second": 1694, "end_second": 1726, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1694s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "we run a Kalman filter or a common smoother how well does it work so a natural comparison would be something along the lines of let's say I have some dynamical system and I don't get to observe the state directly but I get to see some observations so I would run a Kalman filter but since I'm running an experiment that could say well okay let me actually give myself access to this state see what it is and see how precise my filter is how well does it track this state and that might be but for debugging and just understanding how", "start_timestamp": "00:28:46", "end_timestamp": "00:29:14", "start_second": 1726, "end_second": 1754, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1726s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "well a Kalman filter could work but then you could also sending for the smoother you could say oh let me also 
run the smoother and what would you hope for you'd hope for the smoothed estimate to have a mean that is closer to the real state than the filter's it doesn't always have to be closer but in expectation it should be closer because it brings in more information by bringing in more information it should be able to do better where might this be most pronounced at time 0 because at time 0 the filter will have no", "start_timestamp": "00:29:14", "end_timestamp": "00:29:45", "start_second": 1754, "end_second": 1785, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1754s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "information yet but the smoother will have incorporated everything from the future to estimate the state at time 0 it will not be pronounced at all at the very end because at the very end of your time sequence the smoother and the filter use the exact same information and they should have the same estimate otherwise something funny is going on I mean maybe some numerical things going on but overall they should have the same estimate because they use the exact same information to get the estimate at the last time slice in between you can think", "start_timestamp": "00:29:45", "end_timestamp": "00:30:13", "start_second": 1785, "end_second": 1813, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1785s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "of it as the smoother having roughly twice as much information it's not necessarily exact it depends on the exact conditional probabilities observed and it can depend on a lot of things but in general you can think of it as having twice as much information especially roughly in the middle and so you'd
expect the variance to be about half meaning that the average squared deviation from the real state for the smoother should be about a half in terms of variance compared to", "start_timestamp": "00:30:13", "end_timestamp": "00:30:39", "start_second": 1813, "end_second": 1839, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1813s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "the filter well let's take a look here's some MATLAB code I wrote a while back and ran this experiment and so what we have here is a plot we just did 20 time steps we see in solid line the state there are two state variables one state variable shown in blue one variable shown in green a two-dimensional state space and we see the green variable starts at the top there blue starts at the bottom here then we can look at the smoother in dotted and the filter in dashed we look at the estimates for example early on", "start_timestamp": "00:30:39", "end_timestamp": "00:31:13", "start_second": 1839, "end_second": 1873, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1839s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "here we see that well the filter really has no clue when it's just starting out and it's not really close to the state but the smoother is very close because it has seen all the future to understand what the state might be now then at the very end we see that they're very close together because that's just the way it is you might say why is it never perfectly on the state why does it not at the end know it perfectly maybe you've seen some claims that a Kalman filter will converge to the correct state
that's only true if there's no", "start_timestamp": "00:31:13", "end_timestamp": "00:31:42", "start_second": 1873, "end_second": 1902, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1873s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "noise in the system if there's no noise then over time it'll nail the state but because there is noise in this simulation noise on the observations noise in the dynamics you can never perfectly know the state because you never get access to it all you get is noisy measurements but we will see that over time the Kalman filter will converge to a kind of fixed variance a fixed expected squared error around the state you'll get that kind of convergence but we won't converge to the actual state per se any questions about", "start_timestamp": "00:31:42", "end_timestamp": "00:32:12", "start_second": 1902, "end_second": 1932, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1902s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "this yes Suroosh actually when you use the mic so you talked about the normalization at the very end is that trivial in every instance or are there certain instances in which you can't do it analytically or it's computationally intractable yeah so I would say there's an even more general question as we look at these update equations for filter and smoother are they tractable in general and in general they are not and we'll actually see approximations later this lecture where it's not tractable because they're", "start_timestamp": "00:32:12", "end_timestamp": "00:32:58", "start_second": 1932, "end_second": 1978, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1932s", "title": "Lecture 13
Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "integrals and the integrals you need to do it numerically and in high dimensions you can't do it precisely because you need to populate the high dimensional space to get a reasonable approximation your integral not gonna be able to do it or in a discrete space if your state space is very large imagine I don't know imagine a state variable has is a vector so X is a vector let's say x0 is a vector and each entering that x0 vector can take on I don't know 100 values and maybe 100 entries now you have 100 to 200 possible values for your", "start_timestamp": "00:32:58", "end_timestamp": "00:33:31", "start_second": 1978, "end_second": 2011, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1978s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "state and you can't enumerate that in this summation because 100 to 100 will be far too much to work with and so things we'll see later this lecture is how to deal with this when I would say the equivalent of iterative lqr when it's a nonlinear system but maybe locally it's close to linear and then locally can approximate with linear gaussians and that's the extended column under we already covered that that's extended Kalman filter I said a common file that you covered last lecture and we'll cover next lecture is particle", "start_timestamp": "00:33:31", "end_timestamp": "00:34:01", "start_second": 2011, "end_second": 2041, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=2011s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "filters which 
will essentially do sample based approximations to this entire calculation so they'll say I can't cover everything let me just run a particle filter which is much like the sampling based approaches to value iteration that we saw it's the counterpart you just sample a bunch of states look at the value iteration update particle filters are the equivalent you sample a bunch of possible states you don't know which one's correct you propagate them all re-weight them based on whether the evidence is compatible with them or not and", "start_timestamp": "00:34:01", "end_timestamp": "00:34:27", "start_second": 2041, "end_second": 2067, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=2041s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "that way you get an approximate estimate of the distribution and so yes absolutely in general these filter calculations are not possible to do exactly but in special cases discrete small number of values a state can take on yes very feasible and linear Gaussian distributions for the next state given current state and observation given current state again we can do it in closed form those are the only ones that are tractable the other ones you'll do approximations yes let me use this what's an example of a smoother being useful in that you", "start_timestamp": "00:34:27", "end_timestamp": "00:35:07", "start_second": 2067, "end_second": 2107, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=2067s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "want to know the posterior you're given future evidence okay yeah so a question is about the smoother being useful I'm gonna defer the answer to that till the second half of the lecture because we're
kind of building up to where we're going to use it and so let's see if it's still that question after lecture but it's a very valid question just a little bit of patience so what we've covered so far is filtering and smoothing which returns a distribution for the marginal what is the distribution for the state at time T", "start_timestamp": "00:35:07", "end_timestamp": "00:35:37", "start_second": 2107, "end_second": 2137, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=2107s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "given all observations or given all past observations but sometimes we care about something a little different so top is filtering middle is smoothing bottom you see in red all the states are marked we want to know what is the most likely joint across all of them now typically a joint distribution over many variables is not easy to represent so typically what you would do instead of trying to find the full joint over all of them given all the evidence you'd say let's find the single most likely state combination over all times so what is", "start_timestamp": "00:35:37", "end_timestamp": "00:36:10", "start_second": 2137, "end_second": 2170, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=2137s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "the single most likely path in state space that was followed based on the observations I had that's maximum a-posteriori estimation it's about finding the max instead of the distribution now we won't work through the math on the board for this one but it is a good exercise to try it on your own and the results are on the slides but effectively what you'll see happen in these slides is that instead of looking
at a summation over the variables we look at a max over the variables and a max will interact with this whole set of", "start_timestamp": "00:36:10", "end_timestamp": "00:36:42", "start_second": 2170, "end_second": 2202, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=2170s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "equations essentially the same way as a summation would and we'll play the exact same trick we'll see which factors have a dependence on the variable we're maxing over bring those out and group the rest together and so we'll recursively be able to calculate the max while running along the sequence so there'll be a max that starts at X 0 with observation Z 0 what is the X 0 that's most likely given the observation so far but actually we'll do a little more than that if all we do is find an x0 that's most likely based on the", "start_timestamp": "00:36:42", "end_timestamp": "00:37:15", "start_second": 2202, "end_second": 2235, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=2202s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "observation it's not necessarily compatible with everything that's following so instead of just computing the most likely x0 we'll say for every x0 we're going to calculate how likely it is given the evidence we've seen so far from there we'll then say once we have that we can combine that with the model for x1 given x0 and observation for z1 given x1 to find how likely each x1 is if we match it with the best x0 for that x1 so essentially saying for each x1 how likely is it if I get to match it with the best the most compatible x0 that's", "start_timestamp": "00:37:15", "end_timestamp": "00:37:51",
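As an aside from the transcript: the sum-to-max recursion with argmax backpointers described here can be sketched in a few lines. The two-state transition and observation tables below are made-up toy numbers, and the names `viterbi`, `p0`, `T`, `O` are hypothetical, not anything from the lecture itself.

```python
import numpy as np

# Toy HMM (hypothetical numbers): 2 hidden states, 2 observation symbols.
p0 = np.array([0.6, 0.4])                 # prior over x0
T = np.array([[0.7, 0.3], [0.2, 0.8]])    # T[i, j] = P(x_{t+1}=j | x_t=i)
O = np.array([[0.9, 0.1], [0.3, 0.7]])    # O[i, z] = P(z | x=i)

def viterbi(obs):
    """Most likely state sequence: the filtering recursion with sum -> max."""
    m = p0 * O[:, obs[0]]                 # m_0(x0) = P(x0) P(z0 | x0)
    backptrs = []
    for z in obs[1:]:
        scores = m[:, None] * T           # scores[i, j]: come from i, go to j
        backptrs.append(scores.argmax(axis=0))  # best predecessor per state
        m = scores.max(axis=0) * O[:, z]  # max instead of sum, then observe
    # Walk the argmax pointers back from the best final state.
    path = [int(m.argmax())]
    for bp in reversed(backptrs):
        path.append(int(bp[path[-1]]))
    return path[::-1]

print(viterbi([0, 0, 1, 1]))  # → [0, 0, 1, 1]
```

This is exactly the "keep the arg max around, then follow the pointers back from capital T" procedure the lecture describes for the tabular case.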
"start_second": 2235, "end_second": 2271, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=2235s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "what lives in m1 of x1 then do the same thing we'll find how likely is each x2 assuming I get to match it with the best possible choice for x0 and x1 and so it's exactly the same thing instead of saying what's the probability for some x2 value summed over x0 and x1 we're just saying if we got to pick the best x0 and x1 so we're replacing that sum with a max otherwise the same thing is happening same then for x3 and so forth now generally this would be the update equation just as simple as the ones we saw for filtering but now the summation", "start_timestamp": "00:37:51", "end_timestamp": "00:38:30", "start_second": 2271, "end_second": 2310, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=2271s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "becomes a max that's the only difference because we're not saying what's the probability combined over all possible values of the other variables it's if I got to choose the best choice of value for the other variables now one thing that happens when you run this at the end of the day what you have is for the last variable X H capital H at the very end you'll say for each value it can take on how likely is it assuming the other ones take on the best matching values but that's all you have so you actually have to keep some pointers around", "start_timestamp": "00:38:30", "end_timestamp": "00:39:03", "start_second": 2310, "end_second": 2343, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=2310s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced
Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "whenever you do this max here you have to keep track of for each value of XT which value of XT minus 1 is the one that was chosen as the max so you can work your way back along the chain along those pointers to find the full sequence so details are shown here but essentially very simple it's just like the filtering operations except that now for all X T we have to store the arg max to remember the most compatible value from the previous time slice so when at the very end we're done at capital T we", "start_timestamp": "00:39:03", "end_timestamp": "00:39:42", "start_second": 2343, "end_second": 2382, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=2343s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "can see okay for all values of X capital T which one is the most likely if it gets to be completed in its optimal way you pick that one follow from that value to what the previous value should be previous value all the way back to the front so very efficient algorithm and you can do this for example in a tabular case you can do it in general as long as the computation is tractable as long as you can do that maximization sometimes a maximization is easier to do than an integral so sometimes this thing is more tractable", "start_timestamp": "00:39:42", "end_timestamp": "00:40:15", "start_second": 2382, "end_second": 2415, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=2382s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "to run than doing the actual filtering because with maxing you can run gradient
descent and maybe at least find a local maximum whereas if you need to do an integration you kind of have to sum over everything in the space and that can be less tractable very often now one special case is the Kalman filter or the linear Gaussian setting so summations become integrals sure you can't enumerate over everything but we can find solutions efficiently because we have multivariate gaussians everywhere the crazy thing", "start_timestamp": "00:40:15", "end_timestamp": "00:40:52", "start_second": 2415, "end_second": 2452, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=2415s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "is in some sense that for the Kalman filter if you think about it if you run your Kalman filter you find the mean everywhere that sequence of means is actually also the most likely sequence so there is no difference in a Kalman filter between the maximum a posteriori and the means that you find at every step from these from the smoother not the filter from the smoother because you want the most likely full sequence so they account for everything why is that well think about it what if you do an exact calculation do an exact", "start_timestamp": "00:40:52", "end_timestamp": "00:41:27", "start_second": 2452, "end_second": 2487, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=2452s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "calculation forget about any kind of algorithms you say I'm gonna compute the full joint over all X's given the evidence that's gonna be a Gaussian the Gaussian for all X's given the evidence well if we have a Gaussian for all X's given the evidence what's the thing
that's most likely it's the means all the means and if you wonder what's the most likely for this single time slice given all the evidence it's also the mean for that single time slice so it's a very special case where the means and the full", "start_timestamp": "00:41:27", "end_timestamp": "00:41:59", "start_second": 2487, "end_second": 2519, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=2487s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "correlated maximum a-posteriori are actually the same it's because the Gaussian is essentially a very simple distribution compared to most distributions and it happens to simplify that way an alternative you can do in situations like this often is to in this case in particular you can solve an optimization problem because you're essentially trying to find the set of variables that maximizes the objective namely the log probability the sum of the log probabilities of the evidences that's just a convex optimization problem you can also", "start_timestamp": "00:41:59", "end_timestamp": "00:42:30", "start_second": 2519, "end_second": 2550, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=2519s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "find it that way all right so so far we looked at estimation we are given a model for dynamics and a model for measurements and from that we estimate a distribution over state or the most likely sequence over states in the second half of the lecture we will actually start looking at how we can estimate the parameters in this distribution so far we assumed they're given to us we assume we're given the dynamics model assume we're given the observation model in practice though we're not given them you have to come
up with them by hand might be hard more", "start_timestamp": "00:42:30", "end_timestamp": "00:43:04", "start_second": 2550, "end_second": 2584, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=2550s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "convenient might be to collect data and estimate them and so we'll look at that when we restart in two minutes let me mute it for a moment [Music] alright let's restart so let's look at estimating some parameters Oh wrong one so simple example let's say we have a thumbtack and you want to build a probability distribution when you throw it up and it lands on the table or on the ground will it be pointing up or will it be lying on its side with the kind of needle thing pointing diagonally down well what do you think", "start_timestamp": "00:43:04", "end_timestamp": "00:47:57", "start_second": 2584, "end_second": 2877, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=2584s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "what's the probability of up or down probably you know in principle you could think about it from first principles say well the air flow around this thing what might happen and so forth not going to be easy to come up with a very precise number so how do we get this parameter then to know probability of up versus down well we can run an experiment imagine we do it 10 times and with this as the results we get we see it's up eight times and down twice if that's the outcome of our experiments then we might just say well probability of up is 0.8", "start_timestamp": "00:47:57", "end_timestamp": "00:48:28", "start_second": 2877, "end_second": 2908, "url":
"https://www.youtube.com/watch?v=qHLLMg0Teg4&t=2877s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "and we might just work with that now I might say well too small an experiment need to run this for longer sure that's just somebody did this and so any thoughts what the probability is gonna be zero point eight you think zero point eight now maybe I don't know yet well no on the next slide any other guesses two-thirds it's never hard to know I mean it's very empirical so it turns out total up seventy-seven total down 23 so they tossed up ten of them every time and then looked at how many we're up versus down so yeah", "start_timestamp": "00:48:28", "end_timestamp": "00:49:12", "start_second": 2908, "end_second": 2952, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=2908s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "seventy-seven percent chance according to this experiment that you land at the pointy thing up okay so that might be our best model we can make for this short of somebody collecting even more data and getting a more precise estimate of this thing but then I mean this kind of a somewhat specifically designed scenario it's very hard to just you do some first principles but even when you do have first principles available for the dynamical system you're trying to model often a lot of details you won't know very precisely", "start_timestamp": "00:49:12", "end_timestamp": "00:49:42", "start_second": 2952, "end_second": 2982, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=2952s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} 
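The counting estimate described here (8 up out of 10 gives 0.8, 77 up out of 100 gives 0.77) is a one-liner; `mle_up` is a hypothetical helper name used only for this sketch.

```python
def mle_up(n_up, n_down):
    """Maximum likelihood estimate of P(up) for a Bernoulli outcome:
    just the empirical fraction n_up / (n_up + n_down)."""
    return n_up / (n_up + n_down)

print(mle_up(8, 2))    # small experiment from the lecture → 0.8
print(mle_up(77, 23))  # larger experiment → 0.77
```

The derivation later in the lecture shows why this fraction is exactly the maximizer of the likelihood, not just an intuitive guess.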
{"video_id": "qHLLMg0Teg4", "text": "and so very often you'll still want to run experiments to get a more precise estimate of the dynamics model of the sensor model than you can get from just first principles so let's take a more general look at how this math works out and how we can generalize this to other things [Music] so the first thing is that we said okay 77 up 23 down 77% chance that that seems pretty reasonable but what if your distribution is more complex what are you gonna do I mean maybe there's no way to just do counting so what are you", "start_timestamp": "00:49:42", "end_timestamp": "00:50:29", "start_second": 2982, "end_second": 3029, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=2982s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "going to do then well the general principle ideally we'd find a general principle that always applies and in the case of the thumbtack experiments still simplifies and gives us the same solution we already know so how can we generalize this notion that we were just counting to get our best estimate well there's something called likelihood so imagine we observe eight up two down and let's say the probability of up we call theta then we're going to say you know what's the probability of a sequence that we observe maybe we have up down", "start_timestamp": "00:50:29", "end_timestamp": "00:51:08", "start_second": 3029, "end_second": 3068, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3029s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "down up up up up up until the end what's the probability of that well we can write it out if we say the probability of up is theta then the probability of up down down all up
would be theta times one minus theta times one minus theta times theta and so on with theta appearing eight times and one minus theta two times and so in total we'd have theta to the eight times one minus theta squared as the probability of that particular sequence happening we'll call that the likelihood of what we saw happen if we choose a parameter vector theta then we could say well how should", "start_timestamp": "00:51:08", "end_timestamp": "00:51:50", "start_second": 3068, "end_second": 3110, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3068s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "we choose theta again we chose it by doing counts but we're hoping to find a more general principle that will reduce to the counts in this case but in other cases still be possible to apply so you could say well a more general principle would be to say I want to find the parameter theta that maximizes this score because whichever theta maximizes this score is the theta that makes what I saw happen in the world more likely to happen than any other theta would have made it so it's the best explanation of how the world works at least the part of the", "start_timestamp": "00:51:50", "end_timestamp": "00:52:21", "start_second": 3110, "end_second": 3141, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3110s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "world that I observe so you can say okay well what does this thing look like you can plot this thing it looks something like this theta will live between zero and one of course this is a probability and then this function theta to the eight one minus theta squared what does that look like at 0.5 over here it'll look like this and it turns out the peak
will be at 0.8 and so that's nice because that means that the principle we intuitively thought was pretty good which is just counting corresponds to something more general which is looking at the likelihood of", "start_timestamp": "00:52:21", "end_timestamp": "00:53:15", "start_second": 3141, "end_second": 3195, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3141s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "the outcome of the experiment under the parameter and then finding the parameter that maximizes the likelihood now in general plotting will not really be an option you'll need to somehow find this thing without having to plot it but we've covered optimization already in the class we can look at derivatives gradients and find the optimum of this thing for this very simple objective we can just say the derivative of this thing with respect to theta is equal to well what is it it's something like 8 theta to the power 7 1 minus theta squared", "start_timestamp": "00:53:15", "end_timestamp": "00:53:49", "start_second": 3195, "end_second": 3229, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3195s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "plus theta to the 8 times 1 minus theta well 2 times 1 minus theta and then there's a minus here so I have another negative 1 appearing here ok and then if the function really looks like this I mean the derivative is actually 0 over here and over here so hopefully we can find where this is equal to 0 easily and ideally we find it's at 0.8 so let's set this thing equal to 0 well there's a theta to the 7 here theta up to the 8th over there so we can rewrite this as so I want this equal to 0 but this is",
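A quick numerical cross-check of the claim that the likelihood theta^8 (1 - theta)^2 peaks at 0.8: a coarse grid search (a sketch, not the lecture's derivative-based method) recovers the same answer.

```python
# Likelihood L(theta) = theta^8 * (1 - theta)^2 for 8 ups and 2 downs.
# Its stationary points are theta = 0 and 1 (minima) and theta = 0.8 (the max);
# brute-force evaluation on a grid finds the same peak.
thetas = [i / 1000 for i in range(1001)]
L = [t**8 * (1 - t)**2 for t in thetas]
best = thetas[max(range(len(L)), key=L.__getitem__)]
print(best)  # → 0.8
```

This kind of check only works because the parameter is one-dimensional and bounded; in general you would use the derivative condition worked out in the lecture.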
"start_timestamp": "00:53:49", "end_timestamp": "00:54:35", "start_second": 3229, "end_second": 3275, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3229s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "equal to theta to the seven times I should have plotted it wrong it's like gonna go like this here theta to the seven times one minus theta we can bring up front and then there is left eight one minus theta plus 2 times theta equal to zero so we see that this thing will be equal to zero when theta is equal to 0 theta equal to 1 so those are actually minima rather than maxima they're bad places to be but they have a derivative that's 0 and then the other thing is whenever this thing is equal to 0 which is 8 minus 8 theta plus 2 theta equal to", "start_timestamp": "00:54:35", "end_timestamp": "00:55:27", "start_second": 3275, "end_second": 3327, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3275s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "0 is that working out for us hopefully it's working out oh the minus 1 there's a minus sign lost somewhere minus 2 here minus 2 theta so I have a minus over here and so then we have 8 equals 10 theta so theta equals 0.8 so we've got the three places where the derivative equals 0 0 1 and 0.8 and this is of course the one we want we can verify this by plotting or we can verify by taking the second derivative at that spot and seeing that it's a negative second derivative which gives us that shape now this math was kind of ok we", "start_timestamp": "00:55:27", "end_timestamp": "00:56:13", "start_second": 3327, "end_second": 3373, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3327s", "title": "Lecture 13
Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "can do it but actually in practice people often prefer to do the math slightly differently they'll say ok in general we have a likelihood maybe of the type L theta equals theta to the power n1 how often we saw the first outcome times 1 minus theta to the power n0 how often we saw outcome 0 and we can work through the same kind of math we saw over there but instead we could actually also look at the log of L theta the log likelihood which will be log of theta to the n1 1 minus theta to the n0 which is equal to n1 log theta plus n0 log 1 minus theta why", "start_timestamp": "00:56:13", "end_timestamp": "00:57:01", "start_second": 3373, "end_second": 3421, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3373s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "is it okay to look at the log instead of the original thing when you're trying to maximize or minimize by taking the log at every point on the function you are doing a monotonic transformation so what was the highest point will still be the highest point for that function because the lowest will still be the lowest the ordering stays the same so it's okay to take the log then the derivative becomes simple because we don't have this product of stuff anymore we have a sum of things because the log of the product is the sum of the logs and we", "start_timestamp": "00:57:01", "end_timestamp": "00:57:31", "start_second": 3421, "end_second": 3451, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3421s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "take
derivatives which is the sum of derivatives which is simpler than this thing where if we have many complicated terms multiplied together they'd all like stay together in complicated ways and be a lot more hairy to work with and we can do this thing derivative with respect to theta equal to 0 is what we want let's look at the derivative so n1 times 1 over theta plus n0 times 1 over 1 minus theta and then a negative 1 here that equal to 0 then we're multiplying by theta and 1 minus theta so we have n1 1 minus theta well minus", "start_timestamp": "00:57:31", "end_timestamp": "00:58:11", "start_second": 3451, "end_second": 3491, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3451s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "n0 theta equals 0 so now we need to reorganize this a little bit we end up with n1 minus n1 theta minus n0 theta equals 0 so theta equals n1 over n0 plus n1 which is what we were hoping for because it's the intuitive result we thought should be the right one but we're recovering it in a principled way that does not depend at all on a distribution of this format you can have any kind of distribution and apply the same principle you could say I have a distribution with a complicated functional form very hairy form but I", "start_timestamp": "00:58:11", "end_timestamp": "00:58:53", "start_second": 3491, "end_second": 3533, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3491s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "can still say this is the likelihood score under the form let me find the parameters that make this maximally likely now this plot over here is the reason people like the logs it simplifies the math that you do by hand
and it also simplifies the math you do numerically once you take the log this plot on log scale on this axis right versus the original scale will actually look more like this so it's a nice concave shape with a single optimum there's not this weird curvature happening because this", "start_timestamp": "00:58:53", "end_timestamp": "00:59:31", "start_second": 3533, "end_second": 3571, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3533s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "tends to be difficult to optimize with you don't have that show up it's much nicer behaved and so by taking the log you get something numerically easier to work with just as well as often analytically easier to work with we always talked about convex problems which are problems shaped like this and those are easy to minimize well these are concave problems in this case they're easy to maximize the same thing the same algorithms can be applied guaranteed to find the one maximum that exists for this thing there was a question", "start_timestamp": "00:59:31", "end_timestamp": "01:00:01", "start_second": 3571, "end_second": 3601, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3571s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "there so intuitively yeah so it's always I mean if you take a class in convex optimization you'll see that you know half of the class is dedicated to building intuition of how you eyeball whether something is convex or not and same thing with concave I mean it's the same kind of thing it's hard to say how intuitively you would do that short of working through all those principles and starting to recognize all the patterns
in this case we can just plot it so I made an actual precise plot of what it looks like and we'll see that", "start_timestamp": "01:00:01", "end_timestamp": "01:00:43", "start_second": 3601, "end_second": 3643, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3601s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "it looks beautifully concave but yeah in practice you would look at second derivatives eigenvalues of the Hessian if they're all negative then you have a nice concave shape there's no magic recipe short of all those tricks that they teach in the convex optimization classes so we covered this we covered this we've covered the log is a monotonic transformation we can just work with the log instead of the original thing and then these are the two plots and again I just generated those plots and so we know in this case", "start_timestamp": "01:00:43", "end_timestamp": "01:01:19", "start_second": 3643, "end_second": 3679, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3643s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "it's true because these are the precise plots for that objective but generally it's going to be true that the log will help you the log shape will essentially help you in that maximum likelihood problems often will become better conditioned when you take the log compared to keeping the original and we said here remember convex and concave convex as we've covered before any line between two points on the function should be above the function concave is the other way around and that means you have a unique optimum a unique maximum", "start_timestamp": "01:01:19", "end_timestamp": "01:01:52", "start_second": 3679,
"end_second": 3712, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3679s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "when you have that now effectively you can apply this principle to any kind of distribution we saw just a Bernoulli distribution up or down outcome for the thumbtack well how about a multinomial where it can take on different values one two three and so forth up to capital K well we received some samples x1 through xM we can just see what's the log likelihood of these samples well it's the log of the product of theta 1 to the power how often we have outcome 1 theta 2 to the power how often we have outcome 2 and so forth and", "start_timestamp": "01:01:52", "end_timestamp": "01:02:30", "start_second": 3712, "end_second": 3750, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3712s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "this is what we end up with and then we can do the math and find that again it comes down to counting how about an HMM imagine some samples from an HMM we see both the state X and the observation Z at all times if we have that we could estimate the model the dynamics model and the observation model again by doing counts but more precisely we can look at the math to principally derive this we can look at these are the models we want to estimate let's look at the likelihood of this sequence of state and sensor observation", "start_timestamp": "01:02:30", "end_timestamp": "01:03:04", "start_second": 3750, "end_second": 3784, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3750s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail":
"https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "write out the likelihood under the Joint Distribution then we can run the kind of optimization in this case can be done in closed form and we'll find that indeed we'll get the counts for conditional of State at time T given state at time t minus 1 and the counts for the condition of observation given state doesn't need to be count based distributions or discretize regions here is a continuous distribution exponential distribution and exponential is of the form lambda e to the negative lambda X X can only take", "start_timestamp": "01:03:04", "end_timestamp": "01:03:41", "start_second": 3784, "end_second": 3821, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3784s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "on positive values here and lambda is a choice that determines how quickly this thing decays versus maybe having a heavier tail you get some examples some samples from distribution three point one eight point two one point seven you can just say well what's the probability of each of these samples under this density multiply all of them together or take the log of the product of all of them and then see what maximizes it and in this case lambda is three over 13 and that might not have been as easy to read off by just looking at these numbers", "start_timestamp": "01:03:41", "end_timestamp": "01:04:12", "start_second": 3821, "end_second": 3852, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3821s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "might not have said oh it's 3 over 13 you have to do a little bit more math derive what it is and you find that 
you know the equation comes down to what you see here which is some summation of all the values you've got in the denominator and the number of samples on top so this is the general version your lambda will be in some sense one over the average of the x-values that you received you can do the same thing for other distributions how about a uniform distribution what do you think the outcome will be there we can plug in the", "start_timestamp": "01:04:12", "end_timestamp": "01:04:53", "start_second": 3852, "end_second": 3893, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3852s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "math but actually for uniform the intuition is much simpler for uniform you want to maximize the probability of the things you saw but it has to be uniform so essentially you look at the farthest out samples and that's the last spot where you assign any probability and everything in between has the same probability so the uniform that maximizes it would be the one where your highest sample is b and your lowest sample is a with all the mass between a and b and this uniform has to be equal how about gaussians you can do the same thing the math", "start_timestamp": "01:04:53", "end_timestamp": "01:05:23", "start_second": 3893, "end_second": 3923, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3893s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "is again the same we're not going to work through the details but essentially you say that's my density function when I have a sample or multiple samples I maximize the product of the probabilities of these samples or the sum of the log probabilities of these samples log is convenient here because the Gaussian is an
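The two closed forms just discussed can be sketched directly, using the lecture's exponential samples 3.1, 8.2, 1.7 (the code itself is an illustrative sketch, not from the lecture):

```python
def exponential_mle(xs):
    # lambda_hat = n / sum(x): one over the average of the observed values
    return len(xs) / sum(xs)

def uniform_mle(xs):
    # the tightest interval [a, b] covering all samples maximizes the likelihood
    return min(xs), max(xs)

lam = exponential_mle([3.1, 8.2, 1.7])   # the lecture's 3/13 example
a, b = uniform_mle([3.1, 8.2, 1.7])      # interval endpoints (1.7, 8.2)
```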
exponential in it and the exponential cancels with the log and you work through the math what do you see well the mean of the Gaussian will be the mean of your samples that's the maximum likelihood estimate and the", "start_timestamp": "01:05:23", "end_timestamp": "01:05:50", "start_second": 3923, "end_second": 3950, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3923s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "variance parameter of your Gaussian will actually be the empirical variance on your samples not too surprising but it's formally derived that that is actually the right thing to do to do maximum likelihood estimation for a Gaussian how about a conditional Gaussian this would be where you have a distribution where Y is effectively a linear regression of X but it could be higher dimensional of course y equals a0 plus a1x plus noise that's a linear Gaussian from one D to one D you can get a bunch of samples work through it and", "start_timestamp": "01:05:50", "end_timestamp": "01:06:22", "start_second": 3950, "end_second": 3982, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3950s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "find the maximum likelihood estimate what is it going to be well you have to do some math you'll have a bunch of y's and x's and this will be the probability of their combination and then you'll have to look at okay what maximizes the product of those probabilities you do a bunch of math what comes out well you see that effectively you get a least squares solution that you have to do to find the parameters of this linear Gaussian and you will find that the variance is essentially the empirical variance left when
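The Gaussian result stated here (mean of the samples, empirical variance) can be sketched as follows; the sample values are made up for illustration:

```python
def gaussian_mle(xs):
    # MLE for a 1-D Gaussian: sample mean, and the *empirical* variance
    # (dividing by n rather than n - 1 is what maximizing the likelihood gives)
    n = len(xs)
    mu = sum(xs) / n
    var = sum((x - mu) ** 2 for x in xs) / n
    return mu, var

mu, var = gaussian_mle([2.0, 4.0, 6.0])  # -> (4.0, 8/3)
```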
estimating Y from X based on your", "start_timestamp": "01:06:22", "end_timestamp": "01:06:59", "start_second": 3982, "end_second": 4019, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3982s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "best estimate of that linear fit you can do this for multivariate gaussians again the math is going to be hairier since it's higher dimensional and so forth but it's just a very linear path there's no trickery happening you're just saying this is my density there's my data just plug away at it and out comes some result for these kind of solutions in nice closed form and again for a conditional multivariate Gaussian y equals C times X out will come something that looks like a least squares solution for the C matrix and then for the covariance matrix we'll", "start_timestamp": "01:06:59", "end_timestamp": "01:07:33", "start_second": 4019, "end_second": 4053, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=4019s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "get the empirical covariance on the samples if you actually want to work through this and get this result here are some key matrix identities that are useful otherwise you probably won't get to this result and so these are just kind of things that you at some point might have derived in a previous class or might have never derived it might be a surprise right now but these are true quantities that can come in very handy when doing these multidimensional derivations with gaussians you'll see these tricks will", "start_timestamp": "01:07:33", "end_timestamp": "01:08:03", "start_second": 4053, "end_second": 4083, "url":
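A minimal sketch of the conditional (linear) Gaussian fit just described, assuming the 1-D model y = a0 + a1 x + noise; the data points are invented for illustration:

```python
import numpy as np

def linear_gaussian_mle(x, y):
    # MLE for y = a0 + a1*x + noise: the coefficients come out of least
    # squares, and sigma^2 is the empirical variance of the residuals
    A = np.column_stack([np.ones_like(x), x])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coeffs
    sigma2 = np.mean(resid ** 2)
    return coeffs, sigma2

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.1, 4.9, 7.0])
(a0, a1), s2 = linear_gaussian_mle(x, y)  # roughly intercept 1.03, slope 1.98
```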
"https://www.youtube.com/watch?v=qHLLMg0Teg4&t=4053s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "help you out probably one of the more intriguing ones it's like the gradient of the log of the determinant of a matrix with respect to the entries in that matrix is just the inverse of that matrix why does it matter well remember a multivariate Gaussian has that determinant of the covariant main covariance matrix up front they all need to find derivative respect to the entries in the covariance matrix in a maximizing the likelihood and the covariance matrix is a parameter you try to find the right setting of that whole", "start_timestamp": "01:08:03", "end_timestamp": "01:08:30", "start_second": 4083, "end_second": 4110, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=4083s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "matrix will have to take derivatives of that thing and so turns out nice closed form for this if you don't know about this you might say oh well no closed-form impossible is gonna have to do this numerically but it turns out you can do it as in closed form alright so how about a full we observed linear Gaussian Basin filter setting so you have XT plus 1 equals ax t plus bu t + WT ZT plus 1 equals CX T plus D plus V T there's a standard common filter type setting if everything is observed you can actually apply maximum", "start_timestamp": "01:08:30", "end_timestamp": "01:09:07", "start_second": 4110, "end_second": 4147, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=4110s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": 
"https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "likelihood to find a B C D and the covariance matrix Q for W and the covariance matrix R for V and that we have a model of your system now one thing you might want to be wary of is that sometimes you don't want to just do the maximum likely estimate you might want to pay attention to something else so think about thumbtack example let's say I had five ups would you say that the probability of down is zero probably you would not because you might say well you never know it could be down sometimes and so what that means is that", "start_timestamp": "01:09:07", "end_timestamp": "01:09:49", "start_second": 4147, "end_second": 4189, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=4147s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "you have some prior information that is not present yet or reflected yet in this data data is too small you have knowledge about the world that you've condensed in this notion that actually sometimes it could fall the other way just hasn't happened yet in this experiment so we can do you can introduce a prior explicitly to account for that you can say well my prior is something that some probability on theta some 1 1 minus theta I raise them to the same power here theta times 1 minus theta it's as if as if I've seen one", "start_timestamp": "01:09:49", "end_timestamp": "01:10:19", "start_second": 4189, "end_second": 4219, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=4189s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "time theta come out which is up 1 times 1 minus theta which is which is down I said I assumed that already happened 
ahead of time hasn't happened yet but I know it could happen let's assume it already happened and then multiply it in with everything else those kinds of priors are particularly convenient you can come up with a lot of priors and if you take priors that look as if you already ran an experiment say assume I already ran the experiment and already saw up a few times and down a few times then it will be in the same form factor as the", "start_timestamp": "01:10:19", "end_timestamp": "01:10:47", "start_second": 4219, "end_second": 4247, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=4219s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "likelihood and if your derivation for maximum likelihood came out nice and closed form then this will come out naturally in closed form too because it'll be the same derivation just introduce some fake experiments in the mix but otherwise everything's the same for this kind of Bernoulli experiment this is what it could look like if you have theta to the power alpha minus 1 times 1 minus theta to the power beta minus 1 and then closed form on the right there you'll see that you effectively add pseudo counts alpha minus 1 beta minus 1 pseudo", "start_timestamp": "01:10:47", "end_timestamp": "01:11:16", "start_second": 4247, "end_second": 4276, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=4247s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "counts as if this happened but it hasn't happened for some of these choices by the way like simple ones you might think about like the one here that's alpha equal to 2 beta equal to 2 it's as if each side has already happened once but then in the extreme where alpha and beta are
smaller than one it's as if you have a negative version of it happening it's like you think it's not likely both have already happened it's actually more likely that only one could have happened and your prior comes out the opposite way where", "start_timestamp": "01:11:16", "end_timestamp": "01:11:48", "start_second": 4276, "end_second": 4308, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=4276s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "you see that it puts a lot of weight on either one or zero not a lot of weight in the middle that's possible too you don't have to make your prior uniform it's whatever you think might be likely and so if you think it's always gonna be the same side I don't know which side it's going to be but it's gonna be the same then you have this alpha beta equals 0.5 as a reasonable prior there's also the Dirichlet distribution which generalizes this to multinomial variables but at the high level I mean there's a lot of symbols here it's
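The pseudo-count view of the Beta prior described here can be sketched as follows (a hypothetical helper illustrating the thumbtack example with five ups and zero downs):

```python
def bernoulli_map(ups, downs, alpha, beta):
    # MAP with a Beta(alpha, beta) prior: alpha-1 and beta-1 act as
    # pseudo counts added to the observed up/down counts
    return (ups + alpha - 1) / (ups + downs + alpha + beta - 2)

theta_mle = bernoulli_map(5, 0, 1, 1)  # flat Beta(1,1) prior recovers the MLE: 1.0
theta_map = bernoulli_map(5, 0, 2, 2)  # "each side seen once": 6/7, no longer certain
```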
said well actually I think the prior for the mean", "start_timestamp": "01:12:20", "end_timestamp": "01:12:50", "start_second": 4340, "end_second": 4370, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=4340s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "is that I know it's guaranteed to be positive can never be negative so a Gaussian is not a right fit because the Gaussian even if it puts most mass on positive it'll still have some mass running negative well then it's not going to work out as nicely with your math and you're going to have to make a trade-off you might say you know what it's fine I take a Gaussian far enough positive with a small enough variance that there's very low probability mass in the negative and the math will work out cleanly and that's what I'm going to use or you might say", "start_timestamp": "01:12:50", "end_timestamp": "01:13:13", "start_second": 4370, "end_second": 4393, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=4370s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "no I'm going to use some other kind of prior and now I have to do some numerical optimization to find the maximum likelihood or maximum a posteriori estimate because it's not closed form anymore so typically we'll make a trade-off between convenience and precision of the prior that you are imposing on your problem you can do the same thing for conditional linear gaussians you can have priors there priors over the linear coefficient a but more generally priors over the matrix that goes from X to Y or from XT to XT plus one", "start_timestamp": "01:13:13", "end_timestamp": "01:13:44", "start_second": 4393, "end_second": 4424, "url":
"https://www.youtube.com/watch?v=qHLLMg0Teg4&t=4393s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "that matrix a here are some examples of this worked out so the slides can just work through what it looks like when you have a prior so there are some points shown in blue the true relation is shown in green so that's what we were hoping to recover but the data is noisy so the maximum likely this within a small amount of data shown in red is actually pretty far off from green but if it had a prior that in this case thinks the coefficients are more likely to be small rather than be large it'll kind of regularize that and you'll find the", "start_timestamp": "01:13:44", "end_timestamp": "01:14:18", "start_second": 4424, "end_second": 4458, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=4424s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "black line which is running closer to horizontal compared to the red line now one thing you also want to do and don't want to forget about is cross-validation so whenever you have some data and you just fit to the data it's possible that you're memorizing data over fitting it rather than paying attention to the real pattern we saw this in valuation sample based valuation you don't want to just over fit the few samples and make sure that you fit your neural net in a way that it generalizes to other data so people we do is you split your data into", "start_timestamp": "01:14:18", "end_timestamp": "01:14:55", "start_second": 4458, "end_second": 4495, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=4458s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": 
"https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "train and validation data and then for a range of priors you can compute the maximum posterior e and then C on the validation data which estimate of your maximum a-posteriori parameter gives the best performance on the validation data and that tells you that the prior you use to estimate that was the better prior that's the same thing in standard neural net learning you'd say I put some like coefficient in front of weights square because I want to keep the weights small the coefficient in front of that is a choice it's a hyper", "start_timestamp": "01:14:55", "end_timestamp": "01:15:26", "start_second": 4495, "end_second": 4526, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=4495s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "qHLLMg0Teg4", "text": "parameter it's a prior over your weights effectively a Gaussian prior over your weights that you're putting in same thing would be happening here you put in a prior and then in cross-validation find out which prior yielded the best results now what we covered so far assumed in all of the maximum life here the maximum posterior that we observe all the data we can write out the density or the probability of all that data and there's no unobserved variables what we're going to cover next week Tuesday because I'm Thursday we'll do", "start_timestamp": "01:15:26", "end_timestamp": "01:16:00", "start_second": 4526, "end_second": 4560, "url": "https://www.youtube.com/watch?v=qHLLMg0Teg4&t=4526s", "title": "Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/qHLLMg0Teg4/hqdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "hi there today we'll look at hopfield networks is all you need by researchers from the johannes kepler 
university in linz and the university of oslo so on a high level this paper proposes a new type of hopfield networks that generalizes modern hopfield networks from binary patterns to continuous patterns and then shows that the retrieval update rule of these new hopfield networks is equivalent to the attention mechanism that's used in modern transformers and it's actually a more general formulation of the attention mechanism and therefore", "start_timestamp": "00:00:00", "end_timestamp": "00:00:39", "start_second": 0, "end_second": 39, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=0s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "it can be used to do kind of a variety of things to improve modern deep learning and uh it also has a companion paper where it applies this to some kind of immunology research and achieves state of the art in a task that is specifically suited to this type of attention all right let's dive in together we'll go over what this paper does what it proposes and so on if you like videos like this uh consider subscribing you know sharing it out and i hope you're enjoying this all right also thanks to my discord community", "start_timestamp": "00:00:39", "end_timestamp": "00:01:21", "start_second": 39, "end_second": 81, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=39s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "um for you know very helpfully bringing me up to speed on this paper uh super interesting discussions there if you're not on our discord yet uh i invite you to join it's fun okay so what is a hopfield network a hopfield network is a pretty kind of old style old conceptualization of a neural network so in a hopfield network what your goal would be is you
can conceptualize it as a bit of a neural network so let's say we have five neurons or something like this uh your what your goal would be is to have a neural network where you", "start_timestamp": "00:01:21", "end_timestamp": "00:02:05", "start_second": 81, "end_second": 125, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=81s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "can store so-called patterns and a pattern in this case would be a binary string of size five so for example one zero one zero zero or one one zero one zero and you'd have a list of these patterns and what your goal would be is to store these patterns in the neural network such that and here you know we'll just consider everything to be sort of connected to everything else and um what your goal would be in this is that you can kind of store patterns inside this neural network and you adjust the weights somehow so this was as i said this was", "start_timestamp": "00:02:05", "end_timestamp": "00:02:48", "start_second": 125, "end_second": 168, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=125s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "this was this is kind of an old model um you store you you adapt the weights such that you store these patterns and what does it mean for a pattern to be stored if you have stored a pattern you can you will then be able to retrieve it and you retrieve a pattern in these kind of old style hopfield networks by providing a partial pattern so what you'll say is for example i i want a pattern that starts with one one zero and you give that to the network and there would be a so-called update rule and the update rule is kind", "start_timestamp": "00:02:48", "end_timestamp": "00:03:24", "start_second": 168, 
"end_second": 204, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=168s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "of an internal uh rule so let's just go through this so here this one one zero maybe this is one one zero and then they would kind of send messages around so this update rule would somehow adjust the value of this and this neuron here to what's most compatible with the network weights and if if the network weights have been adjusted correctly this will turn out then at the end of applying this update rule that this is a one and this is a zero and therefore this pattern here is retrieved now had i input uh one zero one at the beginning then", "start_timestamp": "00:03:24", "end_timestamp": "00:04:07", "start_second": 204, "end_second": 247, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=204s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "the outcome would be different hopefully this pattern here would have been retrieved okay so you can see the applications of this like you can have the first three digits as sort of a database key and then the last ones as sort of the value that you store along with it and then you can simply provide the first few you can also provide you don't always have to provide three um so this all depends this is this is sort of an as i said an old conceptualization of neural networks so people were imagining that this is kind of how", "start_timestamp": "00:04:07", "end_timestamp": "00:04:41", "start_second": 247, "end_second": 281, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=247s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "the 
brain works you know fire together wire together and also with research into this it it turns out that you know you might think you know there's there's kind of five neurons so maybe i can store five different patterns you know accurately because if i store too many patterns right if i have many many many many patterns then i can't expect to be able to retrieve all the patterns again because some of them will just be so equal that you know many will start maybe with this and and i won't have a chance to to", "start_timestamp": "00:04:41", "end_timestamp": "00:05:18", "start_second": 281, "end_second": 318, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=281s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "retrieve the one i want or if the update rule will make a mistake so you might think this might be like five because i have five neurons or maybe ten because i have ten connections but it turns out that um in modern hopfield networks with the appropriate update rule you can store exponentially many patterns in these networks exponentially many in the in the dimension of the um in the dimension of the patterns and here i guess that would be the length of the pattern so this is a little bit surprising the kind", "start_timestamp": "00:05:18", "end_timestamp": "00:05:53", "start_second": 318, "end_second": 353, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=318s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "of storage capacity of these networks and we'll this um this paper here generalizes that to continuous uh to continuous states so what do we mean with continuous states i guess i mean continuous patterns so no longer is a pattern a binary string but a pattern now is a string of floating point numbers okay 
like 0.5 1.3 and so on and you know a string of floating or a sequence of floating point numbers is naturally depicted as a vector okay so our patterns are going to be different vectors that we store and um you know in", "start_timestamp": "00:05:53", "end_timestamp": "00:06:35", "start_second": 353, "end_second": 395, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=353s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "high dimensions that the the vectors will be kind of separated well from each other as long as we don't have too many but this paper shows that all these properties for the modern hopfield networks that hold for binary strings still hold if you go to these kind of um continuous to these vector patterns that means you can store exponentially many patterns in the dimensions of the vector which is pretty surprising right because you'd think like you know after you have one vector per dimension that you know after that it might get a bit shaky but", "start_timestamp": "00:06:35", "end_timestamp": "00:07:17", "start_second": 395, "end_second": 437, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=395s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "no you can actually store exponentially many that's pretty surprising and this paper is a lot about how to do that and the fact that that happens and so on so we've talked about update rules for these um kind of hopfield networks and i haven't really specified what that is i've just said that you know i enter a pattern and then the network does something and out comes out comes the whatever the pattern that matches my query so this here is called a query you might already um this is on purpose like the kind of overlap between the attention", 
"start_timestamp": "00:07:17", "end_timestamp": "00:07:55", "start_second": 437, "end_second": 475, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=437s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "mechanism lingo and the hopfield network lingo we're going to conflate the two to kind of make clear where the two overlap if you don't know what an attention mechanism is or aren't familiar with it watch my video on attention is all you need uh once you watch that this video will make a lot more sense all right so what the update rule does specifically and the update rule there isn't only one right there are many different proposals of hopfield networks and they all lead to different properties but what an", "start_timestamp": "00:07:55", "end_timestamp": "00:08:31", "start_second": 475, "end_second": 511, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=475s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "update rule does ultimately is it minimizes what's called an energy so every type of hopfield network is associated with an energy function and the energy function of the modern hopfield network for binary strings is this energy function right here so x is the pattern um this is the kind of state of the hopfield network and so these are the whatever is stored in the network and then the xi here is the query that you enter into the network and then the energy here tells you this quantity you have to minimize this", "start_timestamp": "00:08:31", "end_timestamp": "00:09:15", "start_second": 511, "end_second": 555, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=511s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail":
"https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "quantity in order to retrieve the pattern that you want okay now we are never directly working with the energy as such uh so what you could do is for example use back prop or something to use gradient descent to decrease the energy but usually along with an energy function comes an update function and the update function is what i've talked about here like you do something and then the network does something and then you get the pattern out what the network does is it minimizes its energy function and the update rule", "start_timestamp": "00:09:15", "end_timestamp": "00:09:52", "start_second": 555, "end_second": 592, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=555s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "is made such that the corresponding energy function is minimized so the energy function is more like a theoretical consideration that you say okay here is my energy function of my hopfield network and the there will be a corresponding update rule that minimizes that energy function and if you use that update rule maybe multiple times then the energy function will be minimized and you will have retrieved your pattern or not if if you have too many patterns stored it might also fail right so they they say what the update rules are", "start_timestamp": "00:09:52", "end_timestamp": "00:10:28", "start_second": 592, "end_second": 628, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=592s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "in the um in the text here for the old hopfield networks but we're not really interested in the old ones we're interested in the ones that this paper cares about namely where the 
patterns that you store in the hopfield network are these vectors over our vector patterns and the query is also a vector pattern so you want to store all of these patterns into the hopfield network so i'm going to draw it like this here i'm going to store it into the hopfield network and then after that you want to come up with a query", "start_timestamp": "00:10:28", "end_timestamp": "00:11:02", "start_second": 628, "end_second": 662, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=628s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "and the query is like this and in the case of the binary strings we had something like well i sort of know half of my binary string now in the vector hop field network it's more like well i sort of kind of know the direction that my vector should point in okay and you will re what you want to retrieve is the vector that has kind of a large inner product okay so if i enter this query into my hopfield network what i hope is that this vector here is retrieved now you see it's not exactly the same vector like they do point if i translate that", "start_timestamp": "00:11:02", "end_timestamp": "00:11:44", "start_second": 662, "end_second": 704, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=662s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "here by i it's maybe something like this but so they are different but you want to say well i kind of know what i want i kind of want something like this and then the hopfield network would answer with ah i have something like this it's this right here okay so you that the connection to attention mechanism should become pretty pretty obvious right now but you know the um to actually establish this formally is the kind of the point of this paper 
and you know it's pretty cool to see so they formulate this new energy right", "start_timestamp": "00:11:44", "end_timestamp": "00:12:22", "start_second": 704, "end_second": 742, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=704s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "here this is the energy of this new continuous hopfield network um specifically they have to have this term right here because they now have continuous states and continuous queries uh if you minimize the energy it basically means that your query can never you know go to infinity because you have the query right here in the energy function um the update rule is this right here and we'll look at that in a moment but remember the update rule is what you actually implement in code so if i have a query right here i plug it in", "start_timestamp": "00:12:22", "end_timestamp": "00:13:04", "start_second": 742, "end_second": 784, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=742s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "here this is the state of my hopfield network and i apply this rule multiple times and out comes the kind of answer of the hopfield network to my question so i input this and out comes this after i apply the update rule maybe multiple times right and interestingly you can already see that this here if you rewrite a bunch of these quantities so if you rewrite the beta here which is the softmax temperature to be one over square root of d and if you take the query xi here to be the query", "start_timestamp": "00:13:04", "end_timestamp": "00:13:48", "start_second": 784, "end_second": 828, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=784s", "title": "Hopfield 
Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "matrix and if you take the x here to be the key matrix then this is equivalent to the update or sorry the attention mechanism of a modern transformer so that's the point of the paper is that we can look at the transformer attention mechanism as a hopfield network and they have this interesting diagram at the end right here uh so the appendix you know this is typical i guess sepp hochreiter i remember the selu paper had like 60 pages of machine proof appendix this also has like a 70 page appendix", "start_timestamp": "00:13:48", "end_timestamp": "00:14:34", "start_second": 828, "end_second": 874, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=828s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "crazy but at the end of the appendix you'll find this diagram right here now usually in an attention mechanism um you have whatever the input is so you have an input right here so this is attention mechanisms or at least transformers they work on sequences or sets of objects and from these you'll generate three things you'll generate the queries the keys and the values now you can either generate the queries from the same objects which would be self-attention or you can generate the queries from like a different object", "start_timestamp": "00:14:34", "end_timestamp": "00:15:14", "start_second": 874, "end_second": 914, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=874s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "over here uh it doesn't matter too much for our discussions but either you you know have a reference
input or you have you know this kind of same input all the way and then what you do is you use three different heads or you know three different matrices to transform that input into queries keys and values so i often conceptualize this as you have kind of your input set and each of the input sets outputs a key and also each one which would be a vector and also each one outputs a query so i often draw this here", "start_timestamp": "00:15:14", "end_timestamp": "00:16:01", "start_second": 914, "end_second": 961, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=914s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "the same sequence and each one outputs a query and the query sort of the query is kind of a request for information so the key exposes uh sort of what exposes something about the input here so this could be a sentence down here this could be my cat is very pretty and the the the vector the key vector right here could encode something like this is a noun or this is an animal or anything like this right and the query here it could ask for for other things so for example since this is cat this vector right here the query vector", "start_timestamp": "00:16:01", "end_timestamp": "00:16:51", "start_second": 961, "end_second": 1011, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=961s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "is generated from that you know token cat now it could recognize that cat is a noun and it could ask the other nodes to basically say are there any adjectives around here because um you know adjectives because i it itself is a noun it's the object of the sentence right it could ask are there any kind of adjectives that describe the object because that would be naturally a thing to 
ask if you were the noun you would want to know are there any kind of modifiers um for for me so it could output the query and the query here", "start_timestamp": "00:16:51", "end_timestamp": "00:17:32", "start_second": 1011, "end_second": 1052, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=1011s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "could mean you know this direction could mean adjectives and you see here the word pretty is an adjective so it itself would output a key that says by the way i'm an adjective right so if the cat asks then if this node asks for an adjective and this outputs uh the adjective vector then because the inner product between the two things is high this will be routed here so attention mechanisms basically information routing that's how i always describe it but in this paper we look at it more like these here are the patterns that are", "start_timestamp": "00:17:32", "end_timestamp": "00:18:13", "start_second": 1052, "end_second": 1093, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=1052s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "stored in a hopfield network and i by inputting a query and the dot product being the update rule of the hopfield network i retrieve from the hopfield network i retrieve the appropriate pattern that i ask for okay and then you know the values the values are simply a modification of the keys in this form but a lot of people also do keys and values to be the same thing but this routing of information happens here where you multiply the queries and the keys and then you put a soft max over them okay so if you just look from the perspective", "start_timestamp": "00:18:13", "end_timestamp": "00:18:57", "start_second": 1093, "end_second": 1137, 
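The "cat asks for adjectives" picture can be made concrete with a tiny hand-crafted example. The vectors below are my own invention, purely illustrative, not anything from the paper: each token emits a key describing itself, the noun's query asks along the "adjective" direction, and the softmax over inner products routes most of the attention mass to "pretty".

```python
import numpy as np

# Hand-crafted toy: dimension 0 means "adjective-ness", dimension 1 means "noun-ness".
tokens = ["my", "cat", "is", "very", "pretty"]
keys = np.array([
    [0.1, 0.0],   # "my"
    [0.0, 1.0],   # "cat"    -> key says "i am a noun"
    [0.0, 0.1],   # "is"
    [0.2, 0.1],   # "very"
    [1.0, 0.0],   # "pretty" -> key says "i am an adjective"
])
query_cat = np.array([1.0, 0.0])   # "cat" asks: are there any adjectives around?

scores = keys @ query_cat                          # inner products with all keys
weights = np.exp(scores) / np.exp(scores).sum()    # softmax routing distribution

# most of the attention mass from "cat" lands on "pretty"
print(tokens[int(np.argmax(weights))])             # -> pretty
```

In a real transformer the keys and queries are produced by learned matrices rather than written by hand, but the routing mechanics are exactly these two lines: inner products, then a softmax.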
"url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=1093s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "of a single node like this node here this cat node what it would do is it would inner product its own query vector with all of the key vectors right so it would build an inner product with all of these and then it would normalize it it would put it through a softmax which will kind of give it a distribution right so here would give it like uh so this actually matches because well my is also very important for cat this is just an accident i did not plan this uh this here this also well many things match", "start_timestamp": "00:18:57", "end_timestamp": "00:19:34", "start_second": 1137, "end_second": 1174, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=1137s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "but uh in our example we would just say that this last one it's not only higher it's also wider um it matches very well right and so the information routing would route mostly information from this pretty token to the cat token which makes sense in our case right this is the attention mechanism now since we are interpreting this as a hopfield network and the update rule here is the dot product you can actually think of applying this rule multiple times so what happens now if we and this is where this update rule", "start_timestamp": "00:19:34", "end_timestamp": "00:20:20", "start_second": 1174, "end_second": 1220, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=1174s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "comes in what happens if we take this
distribution and we don't aggregate the values like usually we would aggregate the values by this distribution what if we aggregate the keys by this distribution okay what comes out well if we look at this and you know let's just assume that this key right here matches really well but the others also match a little bit what would come out would be a weighted average where a lot of weight is put on this particular key so what will turn out would be something like something that's", "start_timestamp": "00:20:20", "end_timestamp": "00:20:57", "start_second": 1220, "end_second": 1257, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=1220s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "very close to that key you can see um i'm going to draw the old key here in green and i want to draw the old query in blue so you see that it's then whatever comes out is not the query but it's also not that only key that matches right it's kind of a weighted average but with that key dominating okay now since you know in a hopfield network what we would do is we would go again we would put this new thing the red thing instead of the query vector okay so we would use this aggregated keys this weighted average", "start_timestamp": "00:20:57", "end_timestamp": "00:21:41", "start_second": 1257, "end_second": 1301, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=1257s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "as a new query vector for that node right here so duplicate that node over here i'll use that query vector again and do the same thing again okay inner product with all of the query vectors and now since this is already an aggregate of the query vectors what's going to happen of course the distribution that's going to come out 
is going to be weighted even more heavily into the direction so let's make it even wider into the direction of that key that matches okay and you can pretty clearly see if i do that iteratively", "start_timestamp": "00:21:41", "end_timestamp": "00:22:19", "start_second": 1301, "end_second": 1339, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=1301s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "um then that will lead to a situation where everything is like very low except that one key will sort of dominate the distribution and ultra high and ultra wide okay and that's exactly how a hopfield network works right i would input the query which would be sort of what i want right i kind of know what i want okay and then i apply this rule multiple times right and with each time i refine refine refine until i decide on a pattern right the hopfield network is made for pattern retrieval and these here are the patterns that i", "start_timestamp": "00:22:19", "end_timestamp": "00:22:59", "start_second": 1339, "end_second": 1379, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=1339s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "want to retrieve so here the patterns aren't kind of stored in the network beforehand but the patterns are also generated like in an attention layer so the keys are generated by the previous layer or by these matrices but that doesn't matter for the hopfield network update rule so you see here that the attention mechanism can be interpreted as simply making one step of this update rule but you can think of making actually multiple steps and retrieving the particular key so you know deciding on a sort of a hard", "start_timestamp": "00:22:59", "end_timestamp": "00:23:36",
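The iterative sharpening described here — aggregating the keys by the softmax distribution and feeding the result back in as the new query — is easy to check numerically. A minimal sketch, assuming random well-separated patterns (all constants below are my arbitrary choices for the illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, beta = 32, 8, 8.0
X = rng.standard_normal((d, n))
X /= np.linalg.norm(X, axis=0)                # stored patterns as unit columns

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

xi = X[:, 0] + 0.2 * rng.standard_normal(d)   # noisy query near pattern 0
masses = []
for _ in range(5):                            # apply the update rule repeatedly
    p = softmax(beta * X.T @ xi)
    masses.append(float(p[0]))                # softmax weight on the matching pattern
    xi = X @ p                                # aggregate keys -> new query

# the distribution concentrates almost all mass on the retrieved pattern
print([round(m, 3) for m in masses])
```

After a few steps the weight on the best-matching key dominates, which is exactly the "everything very low except one ultra-high key" picture; with several nearby keys the same iteration would instead settle on their weighted average, the meta-stable case discussed below.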
"start_second": 1379, "end_second": 1416, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=1379s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "routing of particular information now that only works if there are no other vectors that are close to that particular key right so if the query is this and you know the way i drew it here you can see that there are many there is this one and this one and this one that matches so technically the way i drew it what would happen most likely is no matter how many times you apply your update rule it would sort of result in kind of the average of the three keys right so because they're all matching and they would all contribute", "start_timestamp": "00:23:36", "end_timestamp": "00:24:19", "start_second": 1416, "end_second": 1459, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=1416s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "to that weighted average of the query in the next step and then that means basically the convergence would be to something in the middle and that's going to be a central point of this paper um in which situation we are so they call the first part retrieving a single pattern and they call the second situation where you have multiple patterns that all match that are not well separated from each other they call this a meta-stable state and it's going to be pretty interesting to look at um transformers like bert language models", "start_timestamp": "00:24:19", "end_timestamp": "00:24:53", "start_second": 1459, "end_second": 1493, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=1459s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id":
"nv6oFDp6rNQ", "text": "and look at where they actually are are they actually operating in this single pattern retrieval mode or are they operating in the meta stable state mode all right so here you can see it in the diagram the only thing differing this from a hopfield network sorry from an attention mechanism is this branch right here so here you ask do you want to do multiple updates after you've multiplied the queries and the keys do you want to do multiple updates if yes so if you're in this hopfield network situation you want to do", "start_timestamp": "00:24:53", "end_timestamp": "00:25:31", "start_second": 1493, "end_second": 1531, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=1493s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "multiple updates then you go back as you can see and you use the keys together with the output of the softmax to generate a new query so this query q here is now generated from the output here and the key so the keys are the same this is the same thing it's just put here twice okay this is exactly what we discussed okay i hope it's somehow clear that the attention mechanism is simply a one step uh hopfield network pattern retrieval algorithm with a particular update rule that matches this energy", "start_timestamp": "00:25:31", "end_timestamp": "00:26:19", "start_second": 1531, "end_second": 1579, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=1531s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "function that they propose right here of course they do this you know particularly because the update rule that turns out is the transformer update rule but um i actually don't know if they backwards engineered the energy
function to match the transformer or if they first came up with the continuous hopfield networks and then just kind of discovered that it's like the transformer we'll maybe never find out okay so um let's go there are a couple of theorems i believe there are four or five theorems right here that uh show", "start_timestamp": "00:26:19", "end_timestamp": "00:26:56", "start_second": 1579, "end_second": 1616, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=1579s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "that kind of make some points about this stuff and we'll go through them we won't go through the proofs or any you know super in-depth meaning but it's pretty cool to go through them and they are proved very rigorously as i said there's a 70 page appendix so have a look at that if you're up for it okay so they say here we have an update rule this is our update rule for our new hopfield networks so the first theorem they say is the update rule that we propose converges globally if we apply the update rule repeatedly", "start_timestamp": "00:26:56", "end_timestamp": "00:27:33", "start_second": 1616, "end_second": 1653, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=1616s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "the energy will converge to a fixed point as t goes to infinity yeah basically saying that if i apply this update rule here over and over and over again it will make this energy function converge i don't want to say anything mistakenly here or claim too much but that basically connects
the update rule to the energy", "start_timestamp": "00:27:33", "end_timestamp": "00:28:15", "start_second": 1653, "end_second": 1695, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=1653s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "okay so just showing like this really is the update rule for that particular energy function okay now as as itself it's not super duper interesting yet but um now we get to theorem two so theorem two for the iteration that's the update rule that we just looked at we have we have that um this convergence holds as t goes to infinity for some stationary point furthermore this quantity here goes to zero so that means this is the um the update at t plus one and this is the update at t and the difference between them", "start_timestamp": "00:28:15", "end_timestamp": "00:29:02", "start_second": 1695, "end_second": 1742, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=1695s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "goes to zero so that means not only does the energy converge but the iterates themselves convert so the algorithm actually converges the individual updates of the algorithm so this e new at some point that will no longer change because the the norm between it and the previous one will go to zero you can see that either the sequence here converges or in the other case the set of limit points yada yada is a connecting subset this is a bit over the top here they say okay it can either converge to a point or it can converge to a", "start_timestamp": "00:29:02", "end_timestamp": "00:29:41", "start_second": 1742, "end_second": 1781, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=1742s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": 
"https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "connected subset but if the loss is finite then any sequence generated by the iteration equation three converges to some fixed point so you know basically saying that here we oh this is not the loss i'm sorry um no this is the domain never mind i'm an idiot this is basically saying that this algorithm will converge okay and they define here what it means for a pattern to be stored and retrieved and that's for establishing what the kind of storage capacity of a hopfield network is so we've established that the update", "start_timestamp": "00:29:41", "end_timestamp": "00:30:26", "start_second": 1781, "end_second": 1826, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=1781s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "rule minimizes the appropriate energy and the update rule will converge at some point which means that we can you know if it converges we can retrieve the pattern that it converges to so now we define how many patterns can we actually store for that we need to know what does it mean for a pattern to be stored so we assume that we have patterns and these patterns are called x okay x i we have n different patterns each one is called um x with a subscript we assume that around every pattern a sphere is given so how do we imagine this um", "start_timestamp": "00:30:26", "end_timestamp": "00:31:08", "start_second": 1826, "end_second": 1868, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=1826s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "we have these patterns and this is this is just a space now they consider patterns of the uh on on a sphere but we'll just conceptualize it as this we'll have a space there 
are patterns we want to store okay and we'll say around every pattern there is a sphere okay sphere like this and naturally the patterns are going to be there's going to be a notion of well separated patterns and you can imagine this a little bit like these spheres won't be touching each other if these spheres aren't touching each other that means that the patterns are kind of", "start_timestamp": "00:31:08", "end_timestamp": "00:31:43", "start_second": 1868, "end_second": 1903, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=1868s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "well separated and that means that if we initialize the query remember the query here is a vector that kind of sort of looks like a pattern and that means the query is kind of close to the pattern in some notion of distance so if we initialize the query somewhere in that sphere then it might if it converges to that sphere to that pattern then we retrieve the pattern okay now it gets a bit more complicated than this but not much more so we say a pattern is stored if there is a single fixed point inside the sphere to which all points", "start_timestamp": "00:31:43", "end_timestamp": "00:32:27", "start_second": 1903, "end_second": 1947, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=1903s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "that start inside the sphere converge and none of the spheres intersect so the sphere of point i doesn't intersect with the sphere of point j so that's where we say all these spheres are non-intersecting we say x i is retrieved if the iteration equation three converged to the single fixed point in that sphere the retrieval error is the distance so you'll notice you have two things you have x i this is the 
actual pattern and you have x i star this is the retrieved pattern so these hopfield networks they don't always have to", "start_timestamp": "00:32:27", "end_timestamp": "00:33:02", "start_second": 1947, "end_second": 1982, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=1947s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "give you the same thing that you stored that's part of the nature of continuous neural networks whatnot so for every sphere we say there is a pattern there is a sphere now we say a pattern is stored if i can start wherever i want in this sphere okay wherever i want it will always converge to a point that's inside the sphere okay and maybe that point isn't the pattern that i stored but actually this point right here but wherever i start i will always converge to that particular point if that's the case then i have stored", "start_timestamp": "00:33:02", "end_timestamp": "00:33:41", "start_second": 1982, "end_second": 2021, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=1982s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "this particular pattern now the fact is i don't retrieve this particular pattern i retrieve the blue thing but i can then define the error of retrieval the error of retrieval is simply the distance between the two things ideally this distance is very small right but you know we can't guarantee it now there are going to be theorems that deal exactly with this retrieval error but first you can see that here if these spheres become larger you can't accurately store a pattern anymore so this is the kind of ideal situation", "start_timestamp": "00:33:41", "end_timestamp": "00:34:24", "start_second": 2021, "end_second": 2064, "url":
"https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=2021s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "but there are also situations where these spheres you know if i have these patterns right here these spheres are so large kind of the the attractions of the patterns are so large that if i start let's say here then i don't converge to either of these two patterns i converge to like something in the middle i converge to maybe this point right here and that's going to be one of these meta stable states okay we're going to encounter situations like this but we're also going to encounter situations like this and the bottom thing isn't necessarily", "start_timestamp": "00:34:24", "end_timestamp": "00:34:58", "start_second": 2064, "end_second": 2098, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=2064s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "bad and that's what you have to keep in mind um and yeah as i said we will get to it but just keep this kind of sphere image in mind okay so first we'll just deal with the you know the up the top situation where we store patterns and then retrieve patterns so we'll we'll assume a failure probability which is p and p is going to be you know pretty pretty low for their example so they have p equals 0.001 you know like a 0.1 percent error probability of retrieving your pattern things like this and randomly chosen patterns on the", "start_timestamp": "00:34:58", "end_timestamp": "00:35:41", "start_second": 2098, "end_second": 2141, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=2098s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "sphere with radius m 
we define some constants then with probability 1 minus p the number of random patterns that can be stored and stored in the sense of having these spheres around them so that you can retrieve them accurately or at least you can retrieve something that's close to them is is bounded lower bounded by this quantity right here so there's the square root of p there is this constant c but then you see that d is in the exponent right here so that means it's exponential in the number of dimensions", "start_timestamp": "00:35:41", "end_timestamp": "00:36:21", "start_second": 2141, "end_second": 2181, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=2141s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "so that's that's pretty cool so if you add a dimension you exponentially increase the number of the number of patterns you can store and you know that's that is a kind of i mean it's it's been known for modern hopfield networks with binary strings so it's not uber surprising but if you have you know it's not what you would imagine like that okay so they may give a few examples of these co you have to set these constants you know in a particular fashion such that this is given and so on um but they say you know examples here", "start_timestamp": "00:36:21", "end_timestamp": "00:37:01", "start_second": 2181, "end_second": 2221, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=2181s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "are where c is something like three and d is 20. 
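As a rough numerical illustration of this exponential scaling, here is a schematic bound of the form sqrt(p) * c**d (these are not the paper's exact constants or exponent, just the transcript's example with c around 3 and a failure probability p of 0.001):

```python
import math

# Schematic capacity bound, exponential in the dimension d. The precise
# constants and exponent differ in the paper; this only illustrates the
# scaling behaviour with c = 3 from the example in the text.
def capacity_lower_bound(d, c=3.0, p=0.001):
    return math.sqrt(p) * c ** d

# Going from d = 20 to d = 21 multiplies the bound by c = 3.
ratio = capacity_lower_bound(21) / capacity_lower_bound(20)
```

Under this (assumed) form of the bound, each extra dimension multiplies the number of storable patterns by a constant factor, which is what the example in the transcript describes.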
um so if you were to add a 21st dimension then your i guess storage capacity would increase by a factor of three which is pretty cool all right so this is how many we can store infinitely not sorry exponentially many patterns um in these networks now they deal they say the next theorem states that the update rule typically converges after one update if the patterns are well separated so if we're in a situation where these patterns are well separated which is", "start_timestamp": "00:37:01", "end_timestamp": "00:37:49", "start_second": 2221, "end_second": 2269, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=2221s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "kind of like this but you can also imagine this in terms of dot products because we operate in the space of dot products so if the patterns are well separated that sort of means that they all kind of sort of point away from each other and this notion of separation is going to be captured by this quantity right here this is the separation of for example pattern i which is just the inner product with itself uh minus the maximum inner product with any other uh pattern and this quantity is going to be large when no", "start_timestamp": "00:37:49", "end_timestamp": "00:38:24", "start_second": 2269, "end_second": 2304, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=2269s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "other pattern is close to it so when the separation is large then the update rule the retrieval rule of um calculating you know i have a query calculate the inner product with all of those then i re-weigh all of the patterns by that inner product by the softmax then i use that new thing as a query again and so on as we discussed it will
converge to the closest pattern but this theorem says it actually converges pretty fast and here i have my problems with saying that it converges after one step um typically converges after one update", "start_timestamp": "00:38:24", "end_timestamp": "00:39:07", "start_second": 2304, "end_second": 2347, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=2304s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "because that you know genuinely depends on a lot of constants as we'll see but it does converge exponentially fast in this separation constant as the theorem 4 says with query psi after one update the distance of the new point to the fixed point is exponentially small in the separation delta i the precise bound using the jacobian and its value in the mean value theorem are the following so here you can see this is the distance between the updated psi after one step and the um and the fixed point right here this is what it converges to", "start_timestamp": "00:39:07", "end_timestamp": "00:39:51", "start_second": 2347, "end_second": 2391, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=2347s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "is going to be the distance as it was before times this thing right here so you can see since this is a this is a multiplicative update um and in this jacobian so this is expanded down here this is this you can see here you have the you have this sorry yeah this is this so this is bounded by that you have the exponent the exponential function negative this separation right here so the higher the separation the faster this algorithm converges okay to say that it converges after one step is you know it might be a bit of of", "start_timestamp": "00:39:51", "end_timestamp": 
"00:40:41", "start_second": 2391, "end_second": 2441, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=2391s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "bragging i don't know if this is a common thing if you have like an exponential convergence uh that you are allowed to say it's after one step i'm not sure especially what i'm not sure about is that you have n here as linear constants in that factor okay so if you if you and that's what they do in their code so if you look at their code and the code's available which is pretty cool it's implemented in pi torch as a general module that can you can just drop in so this is not only for transformers this is for you can replace like lstms you can", "start_timestamp": "00:40:41", "end_timestamp": "00:41:16", "start_second": 2441, "end_second": 2476, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=2441s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "replace pooling mechanisms um you can you know do a whole bunch of stuff in their paper in the accompanying paper they do this multi-instance learning with giant uh sets um on using these hopfield layers so pretty pretty cool this code is definitely worth kind of checking out and maybe you want to replace some stuff with you but the question is how many of these update steps should you do right because we looked at the diagram at least in the attention mechanism it seems like you have attention layers right you have a transformer", "start_timestamp": "00:41:16", "end_timestamp": "00:41:52", "start_second": 2476, "end_second": 2512, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=2476s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": 
"https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "and the transformer consists of you know you have this input right here and you go through layer layer layer layer layer and in each layer there's contained in it one of these attention mechanisms right this entire thing is in this layer okay and now if you interpret this as a hopfield network and you want to do multiple steps that means you go this branch right here so in each layer potentially you do multiple steps of these things so for whatever computational constraints um transformers had already this will", "start_timestamp": "00:41:52", "end_timestamp": "00:42:30", "start_second": 2512, "end_second": 2550, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=2512s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "certainly make it worse but also you need to decide how many steps you want to do now you can hard code that of course but they say you should do these steps until this norm here until the norm between the old and the new is small enough so where is that so you can't measure how close you are to the convergence points right because you don't know in practice but you can measure how far you're away you can measure where did we have it you can measure this quantity right here that's something you can measure how far", "start_timestamp": "00:42:30", "end_timestamp": "00:43:05", "start_second": 2550, "end_second": 2585, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=2550s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "two iterates are apart so what you'll simply do is you'll measure that and if that is small enough then you'll you'll stop but that i guess is very related to this so how if you we've already
proven it converges to this x star so i guess we can approximate this quantity right here with the quantity above and that tells you how many updates you need to do and that quantity is linear not only linear but actually here quadratic in n i don't care you know yes it's exponential in the separation but it's quadratic in n and if i've", "start_timestamp": "00:43:05", "end_timestamp": "00:43:47", "start_second": 2585, "end_second": 2627, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=2585s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "learned anything from kind of my fast code courses is that constants actually matter when you're not dealing with infinity with an infinite number of steps so the number of the number of steps you need to do i guess will depend on the sequence length in a quadratic fashion so i'm not sure you can always claim this converges in one step now i might be super mistaken here and none of this uh will can none of this actually makes a difference in the in the light of the exponential decay here but i would just i'm just a bit worried", "start_timestamp": "00:43:47", "end_timestamp": "00:44:26", "start_second": 2627, "end_second": 2666, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=2627s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "saying this usually converges in one step it's clear i guess why they do it right because the attention mechanism in transformers is a one-step application of this rule and this here is kind of a theoretical justification for interpreting this precisely as a hopfield network because you'd say well in a hopfield network you would do multiple steps but wait wait we can actually prove that even if you interpret it as a hopfield network it can
it usually converges after one step so what you're actually doing in a", "start_timestamp": "00:44:26", "end_timestamp": "00:44:57", "start_second": 2666, "end_second": 2697, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=2666s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "transformer is applying a hopfield network update rule uh to convergence so yeah i'm not yeah i might be bickering on a high level here luxury problems theorem five then says so theorem four is how fast does this converge um theorem five the last theorem right here uh says that the retrieval error of a pattern and so this is the this is what you converge to and this is what you've stored um is bounded by again something that's exponential in the separation right here as you can see okay so that was the theorem so if we go", "start_timestamp": "00:44:57", "end_timestamp": "00:45:42", "start_second": 2697, "end_second": 2742, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=2697s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "quickly through them again theorems one and two deal with the convergence of this algorithm and the fact that it actually minimizes the proposed energy then theorem three says you can store exponentially many patterns in terms of the dimension of your space and theorems four and five say that this update rule will converge exponentially fast after after one step if you believe that and the retrieval error will also go down exponentially fast with the number of update steps that you do okay that sounds pretty pretty pretty", "start_timestamp": "00:45:42", "end_timestamp": "00:46:22", "start_second": 2742, "end_second": 2782, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=2742s", "title": "Hopfield Networks is All 
You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "good but we've heard it it's very dependent on how well separated these patterns are and it turns out that it's you know at least in transformers they aren't always well separated and that might be on purpose remember the these states here the the patterns aren't pre-stored like in a classic hopfield network but the patterns if you interpret an attention mechanism as this are also generated by the network itself so the pattern matrix that you retrieve from and the query are generated by the attention mechanism in in this case as i said this", "start_timestamp": "00:46:22", "end_timestamp": "00:46:59", "start_second": 2782, "end_second": 2819, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=2782s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "is applicable to many many more uh domains than just this but um yeah so there's another slight modification that you have to do to make this actually equivalent to an attention mechanism and that is you'll have to recast the value because usually what you'll do is you have some sort of input and then you make queries keys and values from that using different heads the only thing to make it formally equivalent is you have to make the values generated from the keys so the keys give rise to the values as you can see", "start_timestamp": "00:46:59", "end_timestamp": "00:47:36", "start_second": 2819, "end_second": 2856, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=2819s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "right here that you first multiply with the key matrix and then with the value matrix i think that's you know that i don't i 
doubt that this will will change anything um if you if you the only way that could really change anything is if this matrix here would be super low rank like collapse the space of um into like very few dimensions which the value matrix wouldn't do so you know but just letting you know that the technical equality requires this slight modification okay now we said that um it might not you know be that this is", "start_timestamp": "00:47:36", "end_timestamp": "00:48:17", "start_second": 2856, "end_second": 2897, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=2856s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "always super well separated and you retrieve a single pattern and that's what they research here in a pre-trained bert model so they take a pre-trained model from i guess from hugging face and they run they just run a data set through it and what they do is so for each for each query and sorry for each attention head because you have multiple ones of these attention heads right um in each layer so in each layer you have multiple ones of these heads for each head they look at over the course of the whole data set
accurate pattern anything else would mean that the hopfield network sort of failed right it wouldn't give you back", "start_timestamp": "00:48:52", "end_timestamp": "00:49:30", "start_second": 2932, "end_second": 2970, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=2932s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "one particular pattern so they have i think that's a pretty it's a pretty smart experiment they look how many bars do we need to add how many of these bars in the softmax distribution do we need to add to reach 90 percent right so it depends a bit on the temperature of the softmax which is hardcoded in the attention mechanism beta is one over the square root of d um so they say how many do we need to add to get to 0.9 to 90 percent of the mass of this distribution and if this is the hopfield network where you retrieve one pattern", "start_timestamp": "00:49:30", "end_timestamp": "00:50:10", "start_second": 2970, "end_second": 3010, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=2970s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "then one will be enough right one of these bars will probably be i don't know like 99 okay but there are other cases imagine the case where the patterns and the query you retrieve the spheres that it gives rise to are all like overlapping okay so what that will do is it won't converge to any particular pattern but the attractor space in this kind so you can imagine if you have two spheres that are apart from each other the update rule converges either so if it's closer to here it converges here if it's closer to here it'll converge
"https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=3010s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "here but if they are overlapping like this the energy landscape will actually make it such that it will neither if it starts somewhere it will neither converge to here nor to here it will actually converge to somewhere in the middle okay into the mean of the stored patterns and if we take that to the extreme what could be is it could be that the softmax distribution looks completely uniform okay which would basically mean that you know i don't care where my information comes from just average and this has its applications so if you", "start_timestamp": "00:50:50", "end_timestamp": "00:51:29", "start_second": 3050, "end_second": 3089, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=3050s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "for example want to make a sentiment classifier a very cheap way to do that is to simply take pre-trained word embeddings like glove or word to back you know assign each word word embedding and then just average the word embeddings okay and you count on the fact if there are a lot of kind of negative words in there like bad sad angry the word embedding kind of will you know reflect that and the average word embedding will point more into the bad direction and if there's a lot of happy words the average will point into the happy", "start_timestamp": "00:51:29", "end_timestamp": "00:52:01", "start_second": 3089, "end_second": 3121, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=3089s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "direction okay so there are 
applications of averaging information not caring particularly where it comes from and um in that case what we'd expect is that this number and we'll call that so we'll call that the number k in this case it equals one but in this case k equals i guess n the number of inputs okay because we need well not maybe n but you know approximately we need almost all of them to uh to reach the 90 percent okay and there there is an in between and these are called these meta stable states where", "start_timestamp": "00:52:01", "end_timestamp": "00:52:44", "start_second": 3121, "end_second": 3164, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=3121s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "and the in between is something like you'd have a couple of patterns here a couple here and a couple maybe here it's almost like a clustering like and these overlap and these overlap and these overlap but they don't overlap with each other which means that if you start somewhere here you would converge to the mean but not to the mean of all the patterns but just to the mean of these patterns and here here and here here so this this is like a clustering in latent space right so you can interpret these hopfield", "start_timestamp": "00:52:44", "end_timestamp": "00:53:17", "start_second": 3164, "end_second": 3197, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=3164s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "update rules as somehow you know getting not going to a particular pattern but going to sort of a cluster and this is if you ask something like hey is there any adjective around right and all of these patterns they kind of overlap in that space in that query space of adjective they overlap and therefore the update rule 
would converge to sort of the mean which would basically say yes there is an adjective here right and the information would not be routed so that the distribution if we start here right and we converge", "start_timestamp": "00:53:17", "end_timestamp": "00:53:51", "start_second": 3197, "end_second": 3231, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=3197s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "to this the distribution would look something like small small small and then you'd have a couple of large ones right you'd have like maybe two or three or four of large ones and these would exactly correspond to the patterns here so the information will be routed from all of those in that cluster to this particular node that asks the query okay these are these are what's called these meta stable states and what they do is they calculate over the entire data set this number k and here they show you the distribution so in these plots", "start_timestamp": "00:53:51", "end_timestamp": "00:54:28", "start_second": 3231, "end_second": 3268, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=3231s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "what you'll see is over the entire data set um k goes into that direction so i guess let's go to tis here this this seems pretty easy so k uh is in this direction and this is simply the amount of like how so in each you you let a data point run through it you measure k for that particular layer one you see this is layer one head four okay this is one layer one attention head and then you can see that the number k is distributed like this okay so contrast this to this head right here where it's a lot of weight on the number", "start_timestamp": "00:54:28", "end_timestamp": 
"00:55:15", "start_second": 3268, "end_second": 3315, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=3268s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "one or like very few numbers okay so these blue ones would be these are your typical like when you retrieve one particular pattern so this attention head we can sort of conclude in this particular tension head this is very specific it looks at its input it looks at its token and it decides what information do i want and it retrieves one particular thing from the other nodes okay whereas here it's more like kind of an an averaging it's more like i want this kind of information and on average i don't even know what", "start_timestamp": "00:55:15", "end_timestamp": "00:55:53", "start_second": 3315, "end_second": 3353, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=3315s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "the sequence length is here i guess it's maybe 512. 
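The k statistic behind these plots — the number of largest softmax weights needed to cover 90 percent of the mass — can be sketched like this (numpy assumed; `k_to_mass` is a hypothetical helper name, not from the paper's code):

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def k_to_mass(probs, mass=0.9):
    """Smallest number of largest entries of `probs` whose sum reaches `mass`."""
    sorted_p = np.sort(probs)[::-1]            # largest first
    return int(np.searchsorted(np.cumsum(sorted_p), mass)) + 1

# Peaked distribution (one pattern retrieved): k = 1.
k_peaked = k_to_mass(softmax(np.array([10.0, 0.0, 0.0, 0.0])))

# Uniform over 512 tokens (pure averaging): k is close to 512.
k_uniform = k_to_mass(np.ones(512) / 512)
```

A head with small k behaves like a sharp Hopfield retrieval; a head whose k approaches the sequence length is effectively just averaging, matching the blue versus red distinction in the plots.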
uh so of the 512 the median this number is always the median in the median it collects information from 231 of them okay so you can see that this corresponds this green and orange ones correspond to these meta-stable states where uh there's kind of an implicit clustering done in the in this space of attention whereas the blue ones they correspond to attention heads that ask for particular information retrieve one particular maybe a few patterns and um happy with that and the red ones", "start_timestamp": "00:55:53", "end_timestamp": "00:56:35", "start_second": 3353, "end_second": 3395, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=3353s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "here you can see that they often just average they just you know because k is so high means that i need all of the i need all of these bars to get to the 90 or i need almost all of them which basically means it's a uniform distribution right so it's like i don't care where information comes from just average whatever average i just want the average you know some particular uh space and as we said that also has its uses interesting how this translates through so this here is as we go down the bert model on the bottom you have layer one", "start_timestamp": "00:56:35", "end_timestamp": "00:57:14", "start_second": 3395, "end_second": 3434, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=3395s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "you see there are a lot of these averaging operations going on so a lot of the heads are simply doing averaging and as you go up the layers the heads get more and more specific in the types of information they seek but then again in the last layers interestingly you get into a lot of these meta stable
states again which i guess again interpret this as you as you want i'm going to leave this up to you but it sort of says like here you want kind of general patterns at the bottom and then the middle layers are kind of", "start_timestamp": "00:57:14", "end_timestamp": "00:57:48", "start_second": 3434, "end_second": 3468, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=3434s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "the logical workhorses so you look for very specific things in the input this is i guess this is where i guess this is where the thinking happens um so this is sort of pre-processing i'm just making stuff up here by the way this is this must be in no way true this is maybe thinking and this this here this might already be output again because you know after that you have language modeling or classification so this might already be like aggregating uh types of information this is how i sort of interpret it okay uh yeah so so this these these", "start_timestamp": "00:57:48", "end_timestamp": "00:58:31", "start_second": 3468, "end_second": 3511, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=3468s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "experiments are pretty pretty pretty interesting and here they have they do these are the last experiments for this paper um they do an interesting experiment where they actually replace the attention heads by simply an average mechanism and later they actually replace them by gaussians but in this case they simply average and they show that look if i replace layer one with just averaging the perplexity doesn't rise that much right so it's pretty good um even if i replace an entire layer here with averaging uh it it perplexity goes more up", 
"start_timestamp": "00:58:31", "end_timestamp": "00:59:12", "start_second": 3511, "end_second": 3552, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=3511s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "and you can see if you remember the previous plot the correspondence is pretty one to one with how many blue and green heads there are in contrast to how many red and orange ones there are so here you have lots of blue ones and you can see that the error kind of goes up and interestingly here you have more meta-stable states at the end but still the perplexity goes up more so i guess you can only really replace the red ones with the averaging so this is always averaging in one particular layer", "start_timestamp": "00:59:12", "end_timestamp": "00:59:52", "start_second": 3552, "end_second": 3592, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=3552s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "and they go into more detail here where they say look this is layer six and this is layer 12.
so this is one particular attention head from layer 6 and layer 12 and the updates don't be confused it goes in this direction okay i was confused at first and you can see right here this number k at first you know it's kind of spread out but then it pretty quickly converges to a very small number and there is this kind of point right here i don't know if the learning rate's decreased i don't think so i think that's just kind of a", "start_timestamp": "00:59:52", "end_timestamp": "01:00:26", "start_second": 3592, "end_second": 3626, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=3592s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "a phase transition right here this is the blue line by the way the blue training line a phase transition where all of a sudden these attention heads somehow decide okay this is the thing i want to specialize in this is the type of task a linguistic subtask i want to specialize in and then they concentrate on one particular pattern per input so they are really specializing whereas in the last layer you see here that even during training they are sort of continuously learning", "start_timestamp": "01:00:26", "end_timestamp": "01:00:59", "start_second": 3626, "end_second": 3659, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=3626s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "so first they also do this averaging then they go into this meta-stable region right k isn't one but also k isn't a very high number so they continuously learn and it's even indicative that this training might not be done here first of all and second of all it would be really interesting to see how this works out with you know sizes of
transformers and like especially these huge transformers just the fact that they can keep learning the more we train them might be you know", "start_timestamp": "01:00:59", "end_timestamp": "01:01:36", "start_second": 3659, "end_second": 3696, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=3659s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "be interpreted in the light of what kind of states they converge to and the attention heads i don't know how does this go on do they stay in the meta-stable states because it makes sense to have meta-stable states as i said it makes sense to kind of cluster things or is this simply an intermediate step and if you go really far down would they actually also converge to the k equals one where they really specialize or maybe do we need more attention heads for this i don't know it's just i think this is just the", "start_timestamp": "01:01:36", "end_timestamp": "01:02:12", "start_second": 3696, "end_second": 3732, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=3696s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "the beginning of kind of research in this direction i think just this number k how it's made it's pretty simple and apparently it's pretty revealing so you know that's pretty cool so that was the paper and its experiments it's a pretty sizable paper as i said even the paper itself is 10 pages and then there is this immune repertoire classification which i will spend one minute looking at so you have these set classifications so for each human you obtain a set of immune receptors", "start_timestamp": "01:02:12", "end_timestamp": "01:02:51", "start_second": 3732,
"end_second": 3771, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=3732s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "and you simply obtain one label whether that human is immune to a particular disease or not and that's your task and then a different human has a different set you have no idea which one of these things is responsible for the human being immune or not in fact you can't even decide based on these you can only decide based on like subsequences of these and they might be in combination with each other so there might not be a single one responsible but like a combination but you don't", "start_timestamp": "01:02:51", "end_timestamp": "01:03:26", "start_second": 3771, "end_second": 3806, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=3771s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ", "text": "have labels for the individual ones and you have different ones per human and they are different lengths all of this is just a giant task and you have many of them you have tens of thousands per human right so they build a system here where first they do these 1d convolutions to process the inside sequences and then they do this hopfield attention mechanism with learned queries over these things and then they train on the output label and surprisingly that actually works even with tens of thousands of inside", "start_timestamp": "01:03:26", "end_timestamp": "01:04:06", "start_second": 3806, "end_second": 3846, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=3806s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "nv6oFDp6rNQ",
"text": "sequences and only one label for all of them and so they achieve i guess favorable results compared to other baselines on this task using these hopfield networks which is pretty interesting but i'll let you look at that paper yourself so i hope this somehow made it a bit clear what happens here and it would actually be pretty interesting to see what happens if we just do maybe two rounds of these updates is this even desirable right is it desirable to run this to convergence is there something good", "start_timestamp": "01:04:06", "end_timestamp": "01:04:48", "start_second": 3846, "end_second": 3888, "url": "https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=3846s", "title": "Hopfield Networks is All You Need (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/nv6oFDp6rNQ/maxresdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "in 1948 Claude Bristol wrote a best-selling book the magic of believing this very popular book continues to sell in paperback form today Claude Bristol died in 1951 however we feel that in order to maintain the compelling nature of the book we would like to give you this in a form as close to the author's original style as possible just as you hear the voice of a writer as you read a book with the help of an actor William Caine we hope to bring you the voice of Claude Bristol and just as you can stop reading a book
there a force a factor a power a science call it what you will or something which a few people understand and use to overcome their difficulties and achieve outstanding success I firmly believe that there is it is my purpose here to attempt to explain it so that", "start_timestamp": "00:00:41", "end_timestamp": "00:01:43", "start_second": 41, "end_second": 103, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=41s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "you may use it if you desire I realize I have run across something that is workable but I don't consider it as anything mystical except in the sense that it is unknown to the majority of people and is little understood by the average person I'm aware that there are forces powerful forces at work in this country that would dominate us substituting a kind of regimentation for the competitive system which has made America great among nations I believe that we must continue to retain the wealth of spirit of our forefathers if", "start_timestamp": "00:01:43", "end_timestamp": "00:02:21", "start_second": 103, "end_second": 141, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=103s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "we don't we shall find ourselves dominated in everything we do by a mighty few and will become serfs in fact if not in name I hope this work will help develop individual thinking and doing some may call me a crackpot or a screwball I'm well aware of that let me say that I am past the half-century mark and have had many years of hard practical business experience as well as a goodly number of years as a newspaperman I started as a police reporter police reporters are trained to get the facts and take nothing for", "start_timestamp": "00:02:21", "end_timestamp":
"00:03:04", "start_second": 141, "end_second": 184, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=141s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "granted apparently I was born with a huge bump of curiosity I've always had an insatiable yearning to seek explanations and answers this yearning has taken me to many strange places brought to light many peculiar cases and has caused me to read every book I could get my hands on dealing with religions cults and both physical and mental sciences I have read literally thousands of books on modern psychology metaphysics ancient magic voodooism yogism theosophy Christian Science unity truth new thought and many", "start_timestamp": "00:03:04", "end_timestamp": "00:03:42", "start_second": 184, "end_second": 222, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=184s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "others dealing with what I call mind stuff many of these books were nonsensical others strange and many very profound gradually I discovered that there is a golden thread that runs through all the teachings and makes them work for those who sincerely accept and apply them that thread can be named in a single word belief it is the same element or factor belief which causes people to be cured through mental healing enables others to climb the ladder of success and gets phenomenal results for all who accept it why belief", "start_timestamp": "00:03:42", "end_timestamp": "00:04:24", "start_second": 222, "end_second": 264, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=222s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "is a miracle worker is something that cannot
be satisfactorily explained but have no doubt about it there's genuine magic in believing the magic of believing became a phrase around which my thoughts steadily revolved I've tried to put down these thoughts as simply and as clearly as I could so that everyone can understand my hope is that anyone who listens will be helped in reaching their goal in life I would like to start by relating a few experiences of my own life with the hope that by hearing them you will gain a better understanding of", "start_timestamp": "00:04:24", "end_timestamp": "00:05:03", "start_second": 264, "end_second": 303, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=264s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "the entire science early in 1918 I landed in France as a casual soldier unattached to a regular company as a result it was several weeks before my service record necessary for my pay caught up with me during that time I was without money to buy gum candy cigarettes and the like every time I saw a man light a cigarette or chew a stick of gum the thought came to me that I was without money to spend on myself certainly I was eating and the army clothed me and provided me with a place on the ground to sleep but I grew bitter because I had no", "start_timestamp": "00:05:03", "end_timestamp": "00:05:40", "start_second": 303, "end_second": 340, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=303s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "spending money and no way of getting any one night en route to the forward area on a crowded troop train sleep was out of the question I made up my mind then that when I returned to civilian life I would have a lot of money the whole pattern of my life was altered at that moment I didn't realize then that at
that moment I was laying the groundwork for a new direction in my life groundwork that would unleash forces that would bring accomplishment as a matter of fact the idea that I could with my thinking and believing develop a", "start_timestamp": "00:05:40", "end_timestamp": "00:06:20", "start_second": 340, "end_second": 380, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=340s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "fortune never entered my mind money is not the only desire you may have it doesn't matter to what end the science is used it will be effective in achieving the object of your desires and in this connection let me tell another experience some years ago I decided on a trip to the Orient and sailed on a ship called the Empress of Japan something was working for me on that trip I had no claim to anything but ordinary service however I sat at the executive officers table and was frequently his personal guest in his quarters as well as on", "start_timestamp": "00:06:20", "end_timestamp": "00:06:58", "start_second": 380, "end_second": 418, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=380s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "inspection trips through the ship well naturally the treatment I received made a great impression on me and in Honolulu I often had the thought it would be nice to receive comparable treatment on my journey home on another ship one afternoon I got the sudden impulse to leave for the mainland it was about closing time when I arrived at the ticket agency I was told that a ship was leaving the next day at noon and I could get the only remaining cabin ticket I bought it and the next day just a few minutes before noon I started up the", "start_timestamp": "00:06:58", "end_timestamp": 
"00:07:31", "start_second": 418, "end_second": 451, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=418s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "gangplank in an offhand manner I said to myself they treated you as a king on the Empress of Japan the least you can do here is sit at the captain's table sure you'll sit at the captain's table the ship got underway and as we steamed out of the harbor word was received from the dining room steward for passengers to appear in the dining room for assignments to tables about half the assignments had been made when I came before him he asked me for my ticket which I placed on the table he glanced at it and then to me saying oh yes table", "start_timestamp": "00:07:31", "end_timestamp": "00:08:10", "start_second": 451, "end_second": 490, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=451s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "a seat number five it was the captain's table and I was seated directly across from him many things happened aboard that ship which pertained to the subject the most prominent being a party supposed to be an honor of my birthday just an idea of the captain's because my birthday was months off in laying before you this very workable science I am aware that the subject has been handled before from many angles I am also cognizant that many people shy away from anything that smacks of religion the occult or the metaphysical", "start_timestamp": "00:08:10", "end_timestamp": "00:08:49", "start_second": 490, "end_second": 529, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=490s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "accordingly I am 
presenting it in the language of a businessman who believes that sincere thinking and plain speaking will get any message across to the people in using this science which is given to you with the confident knowledge that no matter how you use it it will get results I wish to warn you never use it for harmful or evil purposes since the beginning of man there have been two great subtle forces in the world good and evil both are terrifically powerful in their respective scopes and cycles the basic", "start_timestamp": "00:08:49", "end_timestamp": "00:09:27", "start_second": 529, "end_second": 567, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=529s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "principle operating both is mind power massed mind power therefore take great care that you do not misuse the science of mind stuff I cannot emphasize this too strongly for if you employ it for harmful or evil purposes it will boomerang and destroy you just as it has others down through the centuries these are not idle words but solemn words of warning chapter 2 the power of thought glance around you if you are in a furnished room your eyes tell you that you are looking at a number of inanimate objects that's true", "start_timestamp": "00:09:27", "end_timestamp": "00:10:14", "start_second": 567, "end_second": 614, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=567s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "so far as visual perception is concerned but in reality you are actually looking at thoughts or ideas which have come into materialization through the creative work of some human being it was a thought first that created the furniture fashioned the window glass and gave form to the draperies and coverings the automobile the
skyscraper the great planes that sweep the stratosphere the sewing machine the tiny pin a thousand and one things yes millions of objects where did they come from originally only one source from that strange force thought", "start_timestamp": "00:10:14", "end_timestamp": "00:10:52", "start_second": 614, "end_second": 652, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=614s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "as we look further we realize that these achievements and in fact all our possessions came as a result of creative thinking thought is the original source of all wealth all success all material gain all great discoveries inventions and of all achievements with that in mind it becomes easy to understand that a man's thoughts make or break him and Shakespeare's words become more intelligible there is nothing either good or bad but thinking makes it so many people feel that success comes with hard work however I would like to point", "start_timestamp": "00:10:52", "end_timestamp": "00:11:39", "start_second": 652, "end_second": 699, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=652s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "out that hard work alone will not bring success the world is filled with people who have worked hard but have little to show for it something more than hard work is necessary it is creative thinking and firm belief in your ability to execute your ideas the successful people in history have succeeded through their thinking their hands were merely helpers to their brains another important point is that one essential to success is that your desire be an all obsessing one your thoughts and aims be coordinated and your energy be concentrated and applied", "start_timestamp": "00:11:39",
"end_timestamp": "00:12:22", "start_second": 699, "end_second": 742, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=699s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "without let-up it may be riches or fame or position or knowledge that you want for each person has his own idea of what success means to him but whatever you consider it to be you can have it provided you are willing to make the objective the burning desire of your life a big order you say no not at all by using the dynamic force of believing you can set all your inner forces in motion and they in turn will help you reach your goal now that you have a clearer idea of the part that thought and desire play in your daily", "start_timestamp": "00:12:22", "end_timestamp": "00:13:00", "start_second": 742, "end_second": 780, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=742s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "lives the first thing to determine is precisely what you want starting with the general idea that you merely want to be a success as most people do is too indefinite you must have a mental pattern clearly drawn in your mind ask yourself where am I headed what is my goal have I visualized just what I really want if success is to be measured in terms of wealth can you fix the amount in figures if in terms of achievement can you specify it definitely I ask these questions for in their answers are the factors which will", "start_timestamp": "00:13:00", "end_timestamp": "00:13:38", "start_second": 780, "end_second": 818, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=780s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "determine your
whole life from now on strange as it may appear not one out of a hundred people can answer these questions most people have a general idea that they would like to be a success but beyond that everything is vague they go along from day to day figuring that if they have a job today they will have it tomorrow that somehow they will be looked after in their old age they are like a cork on the water floating aimlessly drawn this way and that by various currents being washed up on the shore or becoming waterlogged and", "start_timestamp": "00:13:38", "end_timestamp": "00:14:13", "start_second": 818, "end_second": 853, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=818s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "eventually sinking therefore it is vital that you know what you want of life you must know where you are headed and you must keep a fixed goal in your view only then will you get what you're after so you begin with desire if you ever hope to achieve anything or gain more than you have now however as we shall see there is more to it than mere desire it has been said that thought attracts that upon which it is directed thought attracts that upon which it is directed it was Job who said for the thing which I greatly", "start_timestamp": "00:14:13", "end_timestamp": "00:14:57", "start_second": 853, "end_second": 897, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=853s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "feared has come upon me our fearful thoughts are just as creative or just as magnetic in attracting troubles to us as the constructive and positive ones are in attracting positive results so no matter what the character of the thought it does create after its kind when this sinks into a man's consciousness he
gets some inkling of the awe-inspiring power which is his to use I cling to the theory that while thoughts do create and exercise control far beyond any limits yet known to man they create only according to their", "start_timestamp": "00:14:57", "end_timestamp": "00:15:35", "start_second": 897, "end_second": 935, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=897s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "pitch intensity emotional quality depth of feeling or vibratory plane in other words comparable to the wavelength and wattage of a radio station thoughts have a creative or controlling force in the exact ratio of their constancy intensity and power let me try to clarify that while many explanations have been offered no one knows whether thought is a form of electrical energy or something else yet to be defined but I have been an experimenter in that branch of electricity known as high frequency pioneered by the great genius Nikola", "start_timestamp": "00:15:35", "end_timestamp": "00:16:16", "start_second": 935, "end_second": 976, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=935s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "Tesla and whenever I think of thought and its radiations and vibrations I instinctively link them up with electricity and its phenomena in this manner they become more understandable to me all persons living in high altitudes have felt and sometimes observed the electric spark resulting from walking across the room then touching some metallic substance that of course is a form of static electricity generated by friction it gives you an idea of how one kind of electricity can be developed through the body Sigmund Freud the famous Austrian
"00:16:53", "start_second": 976, "end_second": 1013, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=976s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "psychoanalyst brought the world's attention to the hypothesis that there was a powerful force within us an unenumerated part of the mind separate from the conscious mind constantly at work molding our thoughts feelings and actions others have called this division of our mental existence the soul some call it the super-ego the inner power the super consciousness the unconscious the subconscious and various other names it isn't an organ or so-called physical matter such as we know the brain to be nevertheless it is there and from the", "start_timestamp": "00:16:53", "end_timestamp": "00:17:30", "start_second": 1013, "end_second": 1050, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=1013s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "beginning of recorded time man has known that it exists the ancients often referred to it as the spirit Paracelsus called it the will others have called it the mind an adjunct to the brain some have referred to it as conscience the creator of the still small voice within still others called it intelligence and have asserted that it is a part of the supreme intelligence to which we are all linked no matter what we call it I prefer the word subconscious it is recognized as the essence of life and the limits of", "start_timestamp": "00:17:30", "end_timestamp": "00:18:06", "start_second": 1050, "end_second": 1086, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=1050s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "its powers are unknown 
it never sleeps it comes to our support in times of great trouble it warns us of impending danger often it aids us in what seems impossible it guides us in many ways and when properly employed perform so-called miracles perhaps the most effective method of bringing the subconscious into practical action is through the process of making mental pictures using the imagination perfecting an image of the thing or situation as you would have it exist in physical form this is usually referred to as visualization", "start_timestamp": "00:18:06", "end_timestamp": "00:18:46", "start_second": 1086, "end_second": 1126, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=1086s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "however before this visualization can work you must really believe I refer now to deep-seated belief a firm and positive conviction that goes through every fiber of your being when you believe at heart and soul as the saying goes I'll call it a phase of emotion a spiritual force a type of electrical vibration anything you please but that's the force that brings outstanding results it sets the law of attraction into operation and enables sustained thought to correlate with its object this belief changes the tempo of the mind or thought", "start_timestamp": "00:18:46", "end_timestamp": "00:19:25", "start_second": 1126, "end_second": 1165, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=1126s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "frequency and like a huge magnet draws the subconscious forces into play changing your whole aura and affecting everything about you and often people and objects at great distances it brings into your individual sphere of life results that are sometimes startling often results you never dreamed 
possible Chapter three what the subconscious is Gustave Geley the distinguished French psychologist and author of From the Unconscious to the Conscious once wrote there is no artist man of science or writer of any distinction however", "start_timestamp": "00:19:25", "end_timestamp": "00:20:12", "start_second": 1165, "end_second": 1212, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=1165s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "little disposed to self-analysis who is not aware by personal experience of the unequaled importance of the subconscious he also said that the best results in life were obtained by a close harmony and cooperation between the conscious and subconscious mind as the subconscious plays a very important part in the magic of believing it will bring you to a quicker understanding of this science if you have a clear and detailed picture of what the subconscious mind is where it is located and how it functions now it is the", "start_timestamp": "00:20:12", "end_timestamp": "00:20:48", "start_second": 1212, "end_second": 1248, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=1212s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "conscious mind that is the source of thought also it's the mind that gives us the sense of awareness in our normal waking life the knowledge that we are ourselves here and now the recognition and understanding of our environments the power to rule over our mental faculties to recall the events of our past life and to understand our emotions and their significance more concretely it enables us to have a rational understanding of the objects and persons about us of our successes or shortcomings of the validity of an", "start_timestamp": "00:20:48", "end_timestamp": "00:21:25",
"start_second": 1248, "end_second": 1285, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=1248s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "argument or the beauty of a work of art many times the solution of our problems result from the use of the conscious mind but now and then when the solution is not forthcoming we become exhausted with continued trying we begin to lose confidence in ourselves and we often resign ourselves to the idea that we have failed that nothing can be done about it here is where the subconscious mind comes in it helps us to renew our belief in ourselves it assists us to overcome our difficulty and to put us on the road to achievement and success", "start_timestamp": "00:21:25", "end_timestamp": "00:22:05", "start_second": 1285, "end_second": 1325, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=1285s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "just as the conscious mind is a source of thought so the subconscious is the source of power also it is one of the greatest realities in human life it is rooted in instinct and is aware of the most elemental desires of the individual yet it is always pressing upward into conscious existence the powers of the subconscious are many the chief of which our intuition emotion certitude inspiration suggestion deduction imagination organization and of course memory and dynamic energy it is a distinct entity it possesses powers and", "start_timestamp": "00:22:05", "end_timestamp": "00:22:50", "start_second": 1325, "end_second": 1370, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=1325s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "functions with 
unique mental organization all its own now the subconscious mind has three primary functions first with its intuitive understanding of the bodily needs it maintains and preserves the well-being and indeed the very life of the body unaided by the conscious mind second in times of great emergency it springs into immediate action again independent of the conscious mind it takes supreme command acting with incredible certitude rapidity accuracy and understanding in the saving of the life of the individual", "start_timestamp": "00:22:50", "end_timestamp": "00:23:26", "start_second": 1370, "end_second": 1406, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=1370s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "third it is operative in the psychic world in which the psychic powers of the subconscious are manifested in such phenomena as telepathy clairvoyance and psychokinesis but also it can be summoned to help the conscious mind in times of great personal necessity when the conscious calls upon the subconscious to use its powers and resources to solve a vital problem or bring to pass that which is sought or desired by the individual it is the third function that we are most concerned with here to draw upon the resources and powers of the subconscious", "start_timestamp": "00:23:26", "end_timestamp": "00:24:01", "start_second": 1406, "end_second": 1441, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=1406s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "and awaken it into action you must first be sure that you are asking for something that is rightfully yours to have and is within your ability to handle the subconscious manifests itself only according to the capabilities of the person then you must have patience and absolute faith Theodore 
Simon Jouffroy the French philosopher said the subconscious mind will not take the trouble to work for those who do not believe in it next in conveying your need to the subconscious it must be in the spirit that the work has already", "start_timestamp": "00:24:01", "end_timestamp": "00:24:34", "start_second": 1441, "end_second": 1474, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=1441s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "been done so while it is necessary for you to feel and think yourself successful it is important for you to go one step further and actually see yourself as already successful either in the performance of some selected task or as actually occupying the position to which you are aspiring for the next and final step you must wait patiently while the subconscious is assimilating the elements of your problem and then goes about its own way to work it out for you in due course with the flowing of ideas and plans of the subconscious into your", "start_timestamp": "00:24:34", "end_timestamp": "00:25:13", "start_second": 1474, "end_second": 1513, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=1474s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "waiting conscious mind the solution of your problem will be revealed to you the correct course of action will be indicated you must follow those indications immediately and unquestioningly there must be no hesitation on your part no mental reservation no deliberation you must receive the message from the subconscious freely and after understanding it you must act on it at once only by doing that will you make your subconscious serve you and continue to respond whenever you call upon it however your problem may be one that", "start_timestamp": "00:25:13",
"end_timestamp": "00:25:48", "start_second": 1513, "end_second": 1548, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=1513s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "cannot be solved in such a manner instead of receiving the solution in the form of a blueprint as it were you may instead feel some mysterious force urging you at intervals to do certain things that seem to have no special significance or logical connection nevertheless you must continue to believe in the power and the wisdom of the subconscious and obediently perform the seemingly irrelevant things one day you will find yourself in the position you sought through the aid of the subconscious and doing the work you", "start_timestamp": "00:25:48", "end_timestamp": "00:26:17", "start_second": 1548, "end_second": 1577, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=1548s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "envision for yourself then when you look back you will see how the things you were called upon to do all formed a logical line of events the last one of which was your final arriving the reward of your sincerest hopes and desires your own triumphant personal success chapter four suggestion is power after studying the various mystical religions and different teachings and systems of mind stuff one is impressed with the fact that they all have the same basic modus operandi and that is through repetition the repeating of certain", "start_timestamp": "00:26:17", "end_timestamp": "00:27:10", "start_second": 1577, "end_second": 1630, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=1577s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": 
"mantras words formulas or just plain mumbo-jumbo is common with witch doctors voodoo high priests hexes and many other followers of strange cults they use them to evoke the spirits or work black magic one finds the same principle at work in chants incantations litanies daily lessons also the frequent praying of the Buddhists and Muslims alike the affirmation of the Theosophists and the followers of unity the absolute true New Thought divine science in fact it is basic to all religions although here it is white magic instead of black magic", "start_timestamp": "00:27:10", "end_timestamp": "00:27:48", "start_second": 1630, "end_second": 1668, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=1630s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "this brings us to the law of suggestion through which all forces operating within its limits are capable of producing phenomenal results that is it is the power of suggestion and Auto suggestion your own to yourself or hetero suggestion coming to you from outside sources that starts the machinery into operation or causes the subconscious mind to begin its creative work and right here is where the affirmations and repetitions play their part it's the repetition of the same chant the same incantation the same affirmations that lead to belief and", "start_timestamp": "00:27:48", "end_timestamp": "00:28:27", "start_second": 1668, "end_second": 1707, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=1668s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "once that belief becomes a deep conviction things begin to happen now this is the same identical force and the same mechanics that Hitler used in building up the German people to attack the world a reading of mind camp will verify that dr. 
Rene Fauvel a famous French psychologist explained it by saying that Hitler had a remarkable understanding of the law of suggestion and its different forms of application it was with uncanny skill and masterly showmanship that he mobilized every instrument of propaganda in his mighty", "start_timestamp": "00:28:27", "end_timestamp": "00:29:04", "start_second": 1707, "end_second": 1744, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=1707s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "campaign of suggestion Hitler openly stated that the psychology of suggestion was a terrible weapon in the hands of anyone who knew how to use it let's see how he worked it to make the Germans believe what he wanted them to and once that belief took hold how they started their campaign of terror slogans huge signs posters masked flags appeared throughout Germany Hitler's picture was everywhere one Reich one folk one leader became the chant it was heard everywhere today we own Germany tomorrow the entire world the marching", "start_timestamp": "00:29:04", "end_timestamp": "00:29:43", "start_second": 1744, "end_second": 1783, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=1744s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "song of the German Nazis came from thousands of throats daily such slogans as Germany has waited long enough stand up you are the aristocrats of the Third Reich Germany is behind Hitler to a man and hundreds of others bombarded the people 24 hours a day from billboards sides of buildings the radio and the press every time they moved turned around or spoke to one another they got the idea that they were a superior race and under the hypnotic influence of this belief strengthened by repeated suggestion they started out to prove it",
"start_timestamp": "00:29:43", "end_timestamp": "00:30:22", "start_second": 1783, "end_second": 1822, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=1783s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "unfortunately for them there were other nations who also had strong national beliefs that eventually became the means of bringing defeat to the Germans let's go into the field of sports for everyone who has ever witnessed a football or baseball game has actually seen this power of suggestion at work the late Knute Rockne famous coach at Notre Dame knew the value of suggestion and used it repeatedly but he always suited his method of applying it to the temperament of the individual team a story is told that on one Sunday afternoon Notre Dame", "start_timestamp": "00:30:22", "end_timestamp": "00:30:57", "start_second": 1822, "end_second": 1857, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=1822s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "was playing a particularly grueling game and at the end of the first half was trailing badly the players were in their dressing room nervously awaiting the arrival of Rockne finally the Douro and Rocky's head came in slowly his eyes swept inquiringly over the squad oh excuse me I made a mistake I thought these were the quarters of the Notre Dame team the door closed and Rockne was gone puzzled and then stung with fury the team went out for the second half and won the game in the Depression years and there may be years like them in the", "start_timestamp": "00:30:57", "end_timestamp": "00:31:41", "start_second": 1857, "end_second": 1901, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=1857s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": 
"https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "future we saw this same suggestive force working overtime day after day we heard the expression times a hard businesses pour the banks of failing prosperity hasn't a chance and while stories about business failures on every hand until they became the national chant millions believe that prosperous days would never return hundred just thousands of strong-willed men went down under the constant hammering the continuous tap tapping of the same fear vibratory thoughts money always sensitive runs to cover when fear", "start_timestamp": "00:31:41", "end_timestamp": "00:32:18", "start_second": 1901, "end_second": 1938, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=1901s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "suggestions begin to circulate and business failures and unemployment follow quickly we heard thousands of stories of bank failures huge concerns going to the wall and people believe them readily and acted accordingly there will never be another business depression if people generally realize that it is with their own fear thoughts that they literally create hard times they think hard times and hard times follow doctor walk till Scott eminent psychologist and longtime president of Northwestern University told the whole", "start_timestamp": "00:32:18", "end_timestamp": "00:32:55", "start_second": 1938, "end_second": 1975, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=1938s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "story when he said success or failure in businesses caused more by mental attitude rather than by mental capacities let's consider charms talismans amulets good-luck pieces four-leaf clovers old horseshoes a rabbit's 
foot and countless other trinkets which thousands of people believe in by themselves they are inanimate harmless objects without power but people breathe life into them by thinking they do have power for even though the power isn't in them per se the power comes only with the believing which alone makes them", "start_timestamp": "00:32:55", "end_timestamp": "00:33:31", "start_second": 1975, "end_second": 2011, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=1975s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "effective two outstanding illustrations of this are found in the stories of Alexander the Great and Napoleon in Alexander's day an oracle proclaimed that whoever unloosened the Gordian knot would become ruler of all Asia Alexander you may remember with one stroke of his sword cut the knot and rose to tremendous heights and power Napoleon was given a star sapphire when he was a child with the prophecy that it would bring him luck and someday make him emperor of France could it have been anything but the supreme belief in the", "start_timestamp": "00:33:31", "end_timestamp": "00:34:07", "start_second": 2011, "end_second": 2047, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=2011s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "prophecy that carried these two great men to a place in the Hall of Fame they became great men because of their super normal beliefs here's a simple experiment that will demonstrate to you the strange power of attraction through visualization making the mental picture actually work find a few small stones or pebbles which you can easily throw locate a tree or a post of six to ten inches in diameter stand 25 to 30 feet away from it start throwing pebbles at the tree trying to hit it if you're an average
person most of the stones will", "start_timestamp": "00:34:07", "end_timestamp": "00:34:46", "start_second": 2047, "end_second": 2086, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=2047s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "go wide of their mark now stop and tell yourself that you can hit the objective get a mental picture of the tree figuratively stepping forward to meet the stone imagine the rock actually colliding with the tree in the spot where you want it to strike you'll soon find yourself making a perfect score don't say it's impossible try it and you'll prove that it can be done if you will only believe it chapter 5 the art of mental pictures to become the person that you would like to be you create a mental picture of your", "start_timestamp": "00:34:46", "end_timestamp": "00:35:33", "start_second": 2086, "end_second": 2133, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=2086s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "newly conceived self and if you continue to hold it the day will come when you are in reality that person Shakespeare said assume the virtue if you have it not now let's take this great truth and follow some of its implications in assuming the virtue you are assuming via your imagination but here we must make a distinction between daydreaming and a true mental picture or proper use of the imagination perhaps there is some genie who will drop $100,000 into your lap or overnight provide you with a mansion luxuriously furnished I have never had", "start_timestamp": "00:35:33", "end_timestamp": "00:36:13", "start_second": 2133, "end_second": 2173, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=2133s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": 
"https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "the pleasure of meeting one but daydreaming or mere undirected wishful thinking doesn't have the power to release the latent forces within you that will bring you the $100,000 over the mansion when you employ your imagination properly you see yourself doing a thing and you go ahead and do it it's the doing the thing you have pictured to yourself that brings it into actual existence in this connection think about the use of the magnifying glass when properly focused it will gather the light from the Sun and concentrate it so that the heat will", "start_timestamp": "00:36:13", "end_timestamp": "00:36:47", "start_second": 2173, "end_second": 2207, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=2173s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "burn a hole in the object on which the Rays are focused it must be held steady before the heat power is developed and so it is with the holding of the image or the mental picture however it is very difficult for the average person to concentrate for any length of time to say nothing of holding on to a mental picture for any great period you are constantly being swayed by what you read and hear and as a result the coordinating part of this creative force turns to gathering together all these scattered elements in a focused mass", "start_timestamp": "00:36:47", "end_timestamp": "00:37:19", "start_second": 2207, "end_second": 2239, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=2207s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "instead of devoting itself to making a clear and dynamic picture of your desire often I have thought of this matter of desire and suggestion in connection with the planting of 
vegetable or flower seeds once the soil is prepared and the tiny seeds are placed in it it only takes a short time until they begin to root and sprouts begin to appear the moment they start upward through the soil in search of light sunshine and moisture obstacles mean nothing to them they will push aside small stones or bits of wood and if they can't do that", "start_timestamp": "00:37:19", "end_timestamp": "00:37:52", "start_second": 2239, "end_second": 2272, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=2239s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "they extend themselves and grow around them so it can be with the suggestions you give to your subconscious mind the results will be pure or complex depending upon the original seed and the attention which you give it in other words plant the right kind of seed and habitually feed it with strong affirmative thought always directed toward the same end it will grow into a mighty force finding ways and means of overcoming all obstacles I have been in the private offices of a great many industrial leaders", "start_timestamp": "00:37:52", "end_timestamp": "00:38:26", "start_second": 2272, "end_second": 2306, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=2272s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "businessmen great bankers and others long before this magic of belief was understood by me I was impressed with the pictures photographs slogans bits of statuary and so forth which were to be found in the inner sanctums of great firms undoubtedly many of you have seen or heard of such displays but has it ever occurred to you what their purpose was there can only be one answer and that is they serve as a constant reminder getting the picture over to the occupant of the room
that he too can succeed as those did before him in", "start_timestamp": "00:38:26", "end_timestamp": "00:39:01", "start_second": 2306, "end_second": 2341, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=2306s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "common with other great men Thomas A. Edison obviously knew the value of the repeated suggestion and made use of it among the articles found in his desk was a piece of paper that said when down in the mouth remember Jonah he came out all right Edison must have thought well of that expression and perhaps reflected much upon it so let's get down to the mechanics find yourself three or four cards ordinary business cards will do in your office your home your room or any other place where you can have privacy sit down and ask yourself what you", "start_timestamp": "00:39:01", "end_timestamp": "00:39:39", "start_second": 2341, "end_second": 2379, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=2341s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "desire above everything else when the answer comes and you are certain that it is your uppermost desire then at the top of one card write a word picture of it one or two words may be sufficient a job a better job or money a home of your own then on each card duplicate the word picture from the original carry one in your billfold or handbag place another alongside your bed or fasten it to your bedstead place another on your shaving mirror or dressing table and still another on your desk the whole idea as you may have
"https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "guessed is to enable you to see mentally the picture at all hours of the day just before going to sleep at night and upon waking in the morning are two very important moments of the 24 hours in which to concentrate upon your thoughts with added force but don't stop just with those two periods the more often you can visualize the desire by this method or one of your own devising for that matter the speedier the materialization at the start you may have no idea of how the results are to come don't worry just leave it to the", "start_timestamp": "00:40:16", "end_timestamp": "00:40:51", "start_second": 2416, "end_second": 2451, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=2416s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "subconscious mind which has its own ways of making contacts and of opening doors and avenues that you may never even have thought of you will receive assistance from the most unexpected sources you may be suddenly struck with the idea of seeing a person that you have not heard from in a long time or calling upon a man you've never seen before you may get the idea of writing a letter or making a telephone call whatever the idea is follow it it cannot be too strongly emphasized that you should tell no one just what the words on the cards mean", "start_timestamp": "00:40:51", "end_timestamp": "00:41:26", "start_second": 2451, "end_second": 2486, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=2451s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "don't give anyone an inkling of what you desire the truth is that when you talk about what you're going to do you scatter your forces you lose the close connection you have with the 
subconscious and you frequently find that unless you do as directed you will have to start all over again in your program of achievement go and tell no man still holds true suppose you want a better job or promotion not only use the cards but keep telling yourself constantly continuously that you are going to get that job you have already visualized it", "start_timestamp": "00:41:26", "end_timestamp": "00:42:00", "start_second": 2486, "end_second": 2520, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=2486s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "if you have accepted this science but the repetition will be the means of driving the suggestion deeply and firmly into the subconscious mind this may be compared to driving a nail into a board the first tap puts the nail in place but it is only by a number of heavy strokes that the nail is driven home it has been my observation that those who consciously use this science as well as those who may be using it unconsciously are people of tremendous energies virtually human dynamos they are people who not only use their imagination and", "start_timestamp": "00:42:00", "end_timestamp": "00:42:34", "start_second": 2520, "end_second": 2554, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=2520s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "hold strong beliefs and convictions but they are great doers in action and that brings me to this most important statement faith without action is dead chapter 6 the mirror technique there is another device which I call the mirror technique before explaining it I want to tell you how I happened to discover what a truly wonderful thing it is and how it can be used to bring quicker and more effective results many years ago I was the dinner guest of a
very wealthy man who owned many patents covering logging and sawmill machinery he had invited a", "start_timestamp": "00:42:34", "end_timestamp": "00:43:20", "start_second": 2554, "end_second": 2600, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=2554s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "number of newspaper publishers bankers and industrial leaders to his suite in a prominent hotel in order to explain a new method he had devised for mill operations dinner was late in being served and as there had been plenty of liquor offered the host found himself in an embarrassing state of intoxication just before dinner was served I noticed him staggering into his bedroom and pulling himself up abruptly before his dresser thinking I might help him I followed him to the door of his room as I stood there I saw him grab the edge of", "start_timestamp": "00:43:20", "end_timestamp": "00:43:56", "start_second": 2600, "end_second": 2636, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=2600s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "the dresser top with both hands and stare into the mirror all the time mumbling as a drunken man sometimes does then his words began to make sense and I moved back a little to watch the performance I heard him say John you own they tried to get you drunk but you're going to fool them you're sober cold sober this is your party and you've got to be sober as he kept repeating these words while continuing to stare at the reflection of his eyes in the mirror I noticed that a Transfiguration was taking place his body was becoming more erect the muscles", "start_timestamp": "00:43:56", "end_timestamp": "00:44:39", "start_second": 2636, "end_second": 2679, "url": 
"https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=2636s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "of his face were tightening and his drunken look was disappearing the whole performance was over in about five minutes but in all my experience as a newspaperman and more especially as a police reporter I had never seen such a rapid change not wanting him to know that I'd been watching him I made for the bathroom when I got back to the dining room I found the host at the head of the table and while his face was still a little flushed to all appearances he was sober at the end of the dinner he presented a very dramatic and convincing", "start_timestamp": "00:44:39", "end_timestamp": "00:45:14", "start_second": 2679, "end_second": 2714, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=2679s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "picture of his new plans it wasn't until long afterwards when I got a better understanding of the power of the subconscious mind that I understood the science involved in transforming the obviously drunken man into a cold sober host many great orators preachers actors and statesmen have used this mirror technique for years Winston Churchill according to Drew Pearson never made a speech of importance unless he made it before a mirror first Woodrow Wilson also employed the same technique it's what I call a supercharging method of stepping", "start_timestamp": "00:45:14", "end_timestamp": "00:45:51", "start_second": 2714, "end_second": 2751, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=2714s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "up the subconscious forces this mirror technique 
gives a clue to the power and magnetism of Billy Sunday the great evangelist according to Eric Sevareid in his book Not So Wild a Dream Billy Sunday would bound about his hotel room now peering intently out the window with one foot on the sill now grasping the dressing table firmly in both hands while lecturing his reflection in the mirror now to outline the technique stand in front of a mirror it need not be a full-length mirror but it should be big enough so that you may at least see", "start_timestamp": "00:45:51", "end_timestamp": "00:46:27", "start_second": 2751, "end_second": 2787, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=2751s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "your body from the waist up those of you who have been in the army know what it means to come to attention stand fully erect bring your heels together pull in your stomach keep your chest out your head up now breathe three or four times until you feel a sense of power strength and determination next look into the very depths of your eyes tell yourself that you're going to get what you want name it out loud so that you can see your lips move and you can hear the words make a ritual of it practice doing it at least twice a day mornings and", "start_timestamp": "00:46:27", "end_timestamp": "00:47:03", "start_second": 2787, "end_second": 2823, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=2787s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "evenings and you'll be surprised at the results within a few days you will have developed a sense of confidence that you never realized you could build within yourself this power will give you that penetrating gaze that causes others to think you are looking into their very souls sooner or later there will come an 
intensity that will reveal the intensity of your thought Emerson wrote that every man carries in his eyes the exact indication of his rank remember that your own gradation or position in life is marked by what you carry in your eyes", "start_timestamp": "00:47:03", "end_timestamp": "00:47:44", "start_second": 2823, "end_second": 2864, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=2823s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "so develop eyes that say confidence the mirror will help you a word of warning here I take it for granted that none of you assume that the technique I'm showing you here is an open sesame to riches and fame overnight certainly it wouldn't be wise to rush into undertakings far beyond your capabilities or your development but by using this science you could learn the various steps which will take you to the top but you must have a plan of action before any program is undertaken you've got to know what you want and be", "start_timestamp": "00:47:44", "end_timestamp": "00:48:20", "start_second": 2864, "end_second": 2900, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=2864s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "specific about it as long as you hold on to the mental picture of your idea and begin to develop it with action nothing can stop you from succeeding for the subconscious mind never fails to obey any order given to it clearly and emphatically Chapter seven how to project your thoughts in this section I want to talk about several points that I think pertain to mind stuff call it a potpourri we seldom realize how much our emotional vibrations affect others and how much we're affected by theirs an extremely nervous person in a position", "start_timestamp": "00:48:20", "end_timestamp": 
"00:49:07", "start_second": 2900, "end_second": 2947, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=2900s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "of authority can put nearly every person associated with them into a nervous state it's always important to remember that a negative person can raise havoc in an organization or a home the same amount of damage can be done by a strong negative personality as good can be done by a positive one when the two are pitted against one another the negative frequently becomes the more powerful to get a better understanding of the effect of these suggestive vibrations you need only to read your varying feelings when entering different", "start_timestamp": "00:49:07", "end_timestamp": "00:49:41", "start_second": 2947, "end_second": 2981, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=2947s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "offices or homes the atmosphere which is the creation of the people living there can be instantly detected as being upsetting disturbing tranquil or harmonious the vibrations set up by others affect us much more than we realize we take on the characteristics of those with whom we are more or less constantly associated if you want to remain a positive type avoid associating too much with anyone who has a negative or pessimistic personality this brings me to another point a person who desires riches must go where the riches are", "start_timestamp": "00:49:41", "end_timestamp": "00:50:21", "start_second": 2981, "end_second": 3021, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=2981s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": 
"alone on a desert island a man would probably have a tough time eking out a living to say nothing of trying to amass a fortune so it is in everyday pursuits therefore if you want money you have to associate yourself with people who have it or who know how to make it this may sound rather gross but the truth is that if it's money you're after you must go where it is and where it is being spent also you must become personally acquainted with those who have the authority to spend it if you're a salesman selling advertising and you", "start_timestamp": "00:50:21", "end_timestamp": "00:50:56", "start_second": 3021, "end_second": 3056, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=3021s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "know the head of the firm is the man with the final say it's a waste of time trying to convince minor clerks and junior executives the same holds true if you're trying to sell other commodities or what is more important trying to sell yourself and finally the right mental attitude being properly attired keeping your eyes straight ahead and fixed on your goal throwing around you the proper aura which is done by an act of your imagination or an extension of your personal magnetism will work wonders Theos Bernard in his Penthouse of the", "start_timestamp": "00:50:56", "end_timestamp": "00:51:32", "start_second": 3056, "end_second": 3092, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=3056s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "Gods learned this when he was cornered and stoned by a crowd of natives in Tibet in his book he says his first reaction was to fight but the thought was immediately dismissed when he recalled that he had been taught to assume and maintain his aura thus he straightened 
his shoulders lifted high his head directed his eyes straight ahead and moved forward with a firm and rapid stride not only did the crowd give way but others came forward and made a path for him when man fully comprehends the great power of his mind", "start_timestamp": "00:51:32", "end_timestamp": "00:52:06", "start_second": 3092, "end_second": 3126, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=3092s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "and honestly puts it to work he will have dominion over this earth and everything on it you yourself have this inner spark but it must be fanned until the fire is of white-hot intensity and it must be constantly stoked which you do by adding fuel ideas ideas more ideas and action chapter 8 belief makes things happen I have tried to make plain how this power through belief can be developed and to take you up the ladder as far as you wish to go it is necessary though to point out that it is easy to lose one's belief or faith thousands have risen to", "start_timestamp": "00:52:06", "end_timestamp": "00:53:02", "start_second": 3126, "end_second": 3182, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=3126s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "great heights of success only to stumble or fall to undreamed-of depths others seeking health have appeared to be more or less miraculously cured only to find that in later years or even months there is a recurrence of their ailments there are many weakening factors and influences all suggestive in nature which we in unguarded moments allow to slip into our subconscious minds once these influences begin their destructive work they can undo all the good accomplished by our constructive forces so step out in front head toward the Sun", 
"start_timestamp": "00:53:02", "end_timestamp": "00:53:43", "start_second": 3182, "end_second": 3223, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=3182s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "keep facing it and the dark shadows will not cross your path I know that it is difficult for the average person who knows nothing of the subject to accept the idea that all is within but surely the most materialistic person must realize that as far as he himself is concerned nothing exists on the outside plane unless he has knowledge of it or unless it becomes fixed in his consciousness it is the image created in his mind that gives reality to the world outside of him happiness sought by many and found by few therefore is a matter", "start_timestamp": "00:53:43", "end_timestamp": "00:54:25", "start_second": 3223, "end_second": 3265, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=3223s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "entirely within ourselves our environment and the everyday happenings of life have absolutely no effect on our happiness except as we permit mental images of the outside to enter our consciousness happiness is wholly independent of position wealth or material possessions it is a state of mind which we ourselves have the power to control and that control lies with our thinking Emerson said what is the hardest task in the world to think obviously this is so when one considers that most of us are victims of mass thinking and feed upon suggestions", "start_timestamp": "00:54:25", "end_timestamp": "00:55:08", "start_second": 3265, "end_second": 3308, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=3265s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": 
"https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "from others we all know that the law of cause and effect is inviolable yet how many of us ever pause to consider its workings the entire course of a man's life has many times been changed by a single thought which coming to him in a flash became a mighty power that altered the whole current of human events history is replete with the stories of strong-minded resolutely willed individuals who steadfastly holding to their inner convictions have been able to inspire their fellow man and in the face of tremendous and determined", "start_timestamp": "00:55:08", "end_timestamp": "00:55:43", "start_second": 3308, "end_second": 3343, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=3308s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "SoODZ7tEN5Q", "text": "opposition have literally created out of nothing great businesses huge empires and new worlds they had no monopoly of thought power you and every man and woman have it all you have to do is use it you will then become the person you envisage in your imagination know yourself know your power faithfully use the cards and the mirror techniques and you will get results far beyond your fondest expectations just believe that there is a genuine creative magic in believing and magic there will be for belief will supply the power which will", "start_timestamp": "00:55:43", "end_timestamp": "00:56:27", "start_second": 3343, "end_second": 3387, "url": "https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=3343s", "title": "\"The Magic of Believing\" By Claude Bristol", "thumbnail": "https://i.ytimg.com/vi/SoODZ7tEN5Q/hqdefault.jpg"} {"video_id": "xp0O2vi8DX4", "text": "Transcriber: Leonardo Silva Reviewer: Denise RQ So, we all have some behavior that we would like to change about ourselves. 
And we certainly all want to help someone else change their behavior in a positive way. So, maybe it's your kid, your spouse, your colleague. So I want to share some new research with you that I think reveals something really important about what gets people to change their behavior. But before I do that, let's zoom in on one strategy that I think you probably use a lot. So, let's say you're trying to stop yourself from snacking.", "start_timestamp": "00:00:00", "end_timestamp": "00:00:47", "start_second": 0, "end_second": 47, "url": "https://www.youtube.com/watch?v=xp0O2vi8DX4&t=0s", "title": "How to motivate yourself to change your behavior | Tali Sharot | TEDxCambridge", "thumbnail": "https://i.ytimg.com/vi/xp0O2vi8DX4/maxresdefault.jpg"} {"video_id": "xp0O2vi8DX4", "text": "What do you tell yourself? Well, most people, in a monologue, will say, \"Beware. You'll be fat.\" And if this was your kid, you would probably tell him that smoking kills and, by the way, he's in big, big trouble. (Laughter) So, what we're trying to do here is we're trying to scare ourselves and others into changing their behavior. And it's not just us. Warnings and threats are really common in health campaigns, in policy. It's because we all share this deep-rooted belief that if you threaten people, if fear is induced,", "start_timestamp": "00:00:47", "end_timestamp": "00:01:29", "start_second": 47, "end_second": 89, "url": "https://www.youtube.com/watch?v=xp0O2vi8DX4&t=47s", "title": "How to motivate yourself to change your behavior | Tali Sharot | TEDxCambridge", "thumbnail": "https://i.ytimg.com/vi/xp0O2vi8DX4/maxresdefault.jpg"} {"video_id": "xp0O2vi8DX4", "text": "it will get them to act. And it seems like a really reasonable assumption, except for the fact that the science shows that warnings have very limited impact on behavior. 
So, graphic images on cigarette packets, for example, do not deter smokers from smoking, and one study found that, after looking at those images, quitting actually became a lower priority for smokers. So, I'm not saying that warnings and threats never work, but what I'm saying is, on average, they seem to have a very limited impact. And so, the question is: why?", "start_timestamp": "00:01:29", "end_timestamp": "00:02:02", "start_second": 89, "end_second": 122, "url": "https://www.youtube.com/watch?v=xp0O2vi8DX4&t=89s", "title": "How to motivate yourself to change your behavior | Tali Sharot | TEDxCambridge", "thumbnail": "https://i.ytimg.com/vi/xp0O2vi8DX4/maxresdefault.jpg"} {"video_id": "xp0O2vi8DX4", "text": "Why are we resistant to warnings? Well, if you think about animals, when you induce fear in an animal, the most common response you will see is freezing or fleeing; fighting, not as much. And so, humans are the same. So if something scares us, we tend to shut down and we try to eliminate the negative feelings. So, we might use rationalizations. For example, you might tell yourself: \"My grandpa smoked. He lived to be 90. So, I have really good genes and absolutely nothing to worry about.\" And this process can actually make you feel more resilient", "start_timestamp": "00:02:02", "end_timestamp": "00:02:39", "start_second": 122, "end_second": 159, "url": "https://www.youtube.com/watch?v=xp0O2vi8DX4&t=122s", "title": "How to motivate yourself to change your behavior | Tali Sharot | TEDxCambridge", "thumbnail": "https://i.ytimg.com/vi/xp0O2vi8DX4/maxresdefault.jpg"} {"video_id": "xp0O2vi8DX4", "text": "than you did before, which is why warnings sometimes have this boomerang effect. In other times, we simply put our head in the ground. (Laughter) Take the stock market for example. Do you know when people pull their head out of the ground to look at their accounts -- not to make a transaction, just to log in to check their account? 
So, what you're seeing here, in black, is the S&P 500 over two years, and in gray, is the number of times that people logged in to their account just to check. And this is data from Karlsson, Loewenstein & Seppi,", "start_timestamp": "00:02:39", "end_timestamp": "00:03:12", "start_second": 159, "end_second": 192, "url": "https://www.youtube.com/watch?v=xp0O2vi8DX4&t=159s", "title": "How to motivate yourself to change your behavior | Tali Sharot | TEDxCambridge", "thumbnail": "https://i.ytimg.com/vi/xp0O2vi8DX4/maxresdefault.jpg"} {"video_id": "xp0O2vi8DX4", "text": "it's controlled for all the obvious confounds. So, what do we see? When the market is high, people log in all the time, because positive information makes you feel good, so you seek it out. And when the market is low, people avoid logging in, because negative information makes us feel bad, so we try to avoid it altogether. And all this is true as long as bad information can reasonably be avoided. So, what you don't see here is what happened a few months later, in the financial collapse of 2008, when the market went drastically down", "start_timestamp": "00:03:12", "end_timestamp": "00:03:48", "start_second": 192, "end_second": 228, "url": "https://www.youtube.com/watch?v=xp0O2vi8DX4&t=192s", "title": "How to motivate yourself to change your behavior | Tali Sharot | TEDxCambridge", "thumbnail": "https://i.ytimg.com/vi/xp0O2vi8DX4/maxresdefault.jpg"} {"video_id": "xp0O2vi8DX4", "text": "and that was when people started logging in frantically, but it was a bit too late. So, you can think about it like this -- it's not just finance: In many different parts of our life, (Laughter) we have warning signs and bad behaviors now. And they could potentially lead to all these bad outcomes later, but not necessarily so, because there are different routes from your present to your future, right? It can go this way, it can go that way. 
And, as time passes, you gather more and more information about where the wind is blowing.", "start_timestamp": "00:03:48", "end_timestamp": "00:04:26", "start_second": 228, "end_second": 266, "url": "https://www.youtube.com/watch?v=xp0O2vi8DX4&t=228s", "title": "How to motivate yourself to change your behavior | Tali Sharot | TEDxCambridge", "thumbnail": "https://i.ytimg.com/vi/xp0O2vi8DX4/maxresdefault.jpg"} {"video_id": "xp0O2vi8DX4", "text": "(Laughter) And, at any point, you can intervene and you could potentially change the outcome, but that takes energy and you might tell yourself: \"What's the point about worrying about something that might happen? It might not happen.\" Until we reach this point, at which time you do jump into action, but sometimes it's a little bit too late. So, we wanted to know, in my lab, what type of information does leak into people. So, we conducted an experiment where we asked approximately 100 people to estimate the likelihood", "start_timestamp": "00:04:26", "end_timestamp": "00:05:00", "start_second": 266, "end_second": 300, "url": "https://www.youtube.com/watch?v=xp0O2vi8DX4&t=266s", "title": "How to motivate yourself to change your behavior | Tali Sharot | TEDxCambridge", "thumbnail": "https://i.ytimg.com/vi/xp0O2vi8DX4/maxresdefault.jpg"} {"video_id": "xp0O2vi8DX4", "text": "of 80 different negative events that might happen to them in the future. So, for example, I might ask you: \"What is the likelihood that you'll suffer hearing loss in your future?\" And let's say you think it's about 50%. Then, I give you the opinion of two different experts. So, expert A tells you: \"You know, for someone like you, I think it's only 40%.\" So, they give you a rosier view of your future. Expert B says: \"You know, for someone like you, I actually think it's about 60%. 
It's worse.\" So, they give you a bleaker view of your future.", "start_timestamp": "00:05:00", "end_timestamp": "00:05:39", "start_second": 300, "end_second": 339, "url": "https://www.youtube.com/watch?v=xp0O2vi8DX4&t=300s", "title": "How to motivate yourself to change your behavior | Tali Sharot | TEDxCambridge", "thumbnail": "https://i.ytimg.com/vi/xp0O2vi8DX4/maxresdefault.jpg"} {"video_id": "xp0O2vi8DX4", "text": "What should you do? Well, you shouldn't change your beliefs, right? Wrong. What we find is that people tend to change their beliefs towards a more desirable opinion. In other words, people listen to the positive information. Now, this study was conducted on college students, so you might say: \"Well, college students are delusional, right? We all know that.\" (Laughter) And surely, as we grow older, we grow wiser. So we said: \"OK, let's test that. Does this really generalize? Does it generalize to your kid, to your parent?", "start_timestamp": "00:05:39", "end_timestamp": "00:06:15", "start_second": 339, "end_second": 375, "url": "https://www.youtube.com/watch?v=xp0O2vi8DX4&t=339s", "title": "How to motivate yourself to change your behavior | Tali Sharot | TEDxCambridge", "thumbnail": "https://i.ytimg.com/vi/xp0O2vi8DX4/maxresdefault.jpg"} {"video_id": "xp0O2vi8DX4", "text": "Does it generalize to your spouse?\" And so, we tested people from the age of 10 until the age of 80, and the answer was yes. In all these age groups, people take in information they want to hear -- like someone telling you you're more attractive than you thought -- than information that they don't want to hear. And the ability to learn from good news remained quite stable throughout the life span, but the ability to learn from bad news, that changes as you age. 
So, what we found was that kids and teenagers were the worst at learning from bad news,", "start_timestamp": "00:06:15", "end_timestamp": "00:06:53", "start_second": 375, "end_second": 413, "url": "https://www.youtube.com/watch?v=xp0O2vi8DX4&t=375s", "title": "How to motivate yourself to change your behavior | Tali Sharot | TEDxCambridge", "thumbnail": "https://i.ytimg.com/vi/xp0O2vi8DX4/maxresdefault.jpg"} {"video_id": "xp0O2vi8DX4", "text": "and the ability became better and better as people aged. But then, around the age of 40, around midlife, it started deteriorating again. So, what this means is that the most vulnerable populations, kids and teenagers on the one hand, and the elderly on the other hand, they're the least likely to accurately learn from warnings. But what you can see here is that it doesn't matter what age you are. You can be 20, 30, 40, 50 or 60; everyone takes in information they want to hear more than information that they don't.", "start_timestamp": "00:06:53", "end_timestamp": "00:07:28", "start_second": 413, "end_second": 448, "url": "https://www.youtube.com/watch?v=xp0O2vi8DX4&t=413s", "title": "How to motivate yourself to change your behavior | Tali Sharot | TEDxCambridge", "thumbnail": "https://i.ytimg.com/vi/xp0O2vi8DX4/maxresdefault.jpg"} {"video_id": "xp0O2vi8DX4", "text": "And so, we end up with a view like this of ourselves. (Laughter) Our mistake as teachers, as mentors, as employers is that, instead of working with this positive image that people so effortfully maintain, we try and put a clear mirror in front of them. We tell them: \"You know, the image is just going to get worse and worse and worse.\" And it doesn't work. It doesn't work because the brain will frantically try to distort the image, using Photoshop and fancy lenses, until it gets the image it's happy with. 
But what would happen if we went along with how our brain works", "start_timestamp": "00:07:28", "end_timestamp": "00:08:16", "start_second": 448, "end_second": 496, "url": "https://www.youtube.com/watch?v=xp0O2vi8DX4&t=448s", "title": "How to motivate yourself to change your behavior | Tali Sharot | TEDxCambridge", "thumbnail": "https://i.ytimg.com/vi/xp0O2vi8DX4/maxresdefault.jpg"} {"video_id": "xp0O2vi8DX4", "text": "and not against it? Take handwashing, for example. We all know that handwashing is the number one way to prevent the spread of disease, and this is really important in hospitals. So, in a hospital here in the United States, a camera was installed to see how often medical staff do, in fact, sanitize their hands before and after entering a patient's room. Now, the medical staff knew a camera was installed. Nevertheless, only one in ten washed their hands before and after entering a patient's room. But then, an intervention was introduced:", "start_timestamp": "00:08:16", "end_timestamp": "00:08:54", "start_second": 496, "end_second": 534, "url": "https://www.youtube.com/watch?v=xp0O2vi8DX4&t=496s", "title": "How to motivate yourself to change your behavior | Tali Sharot | TEDxCambridge", "thumbnail": "https://i.ytimg.com/vi/xp0O2vi8DX4/maxresdefault.jpg"} {"video_id": "xp0O2vi8DX4", "text": "an electronic board that told the medical staff how well they were doing. Every time you washed your hands, the numbers went up on the screen and it showed you your rate of your current shift and the rate of the weekly staff. And what happened? Boom. Compliance rose to 90%, which is absolutely amazing. And the research staff were amazed as well, and they made sure to replicate it in another division in the hospital. Again, the same results. So, why does this intervention work so well? 
It works well because, instead of using warnings", "start_timestamp": "00:08:54", "end_timestamp": "00:09:39", "start_second": 534, "end_second": 579, "url": "https://www.youtube.com/watch?v=xp0O2vi8DX4&t=534s", "title": "How to motivate yourself to change your behavior | Tali Sharot | TEDxCambridge", "thumbnail": "https://i.ytimg.com/vi/xp0O2vi8DX4/maxresdefault.jpg"} {"video_id": "xp0O2vi8DX4", "text": "about bad things that can happen in the future, like disease, it uses three principles that we know really drive your mind and your behavior. Let me explain. The first one is social incentives. In the hospital study, the medical staff could see what other people were doing. They can see the rates of the shift, the rate of the week. We're social people, we really care what other people are doing, we want to do the same and we want to do it better. This is an image from a study that we conducted, led by PhD student Micah Edelson,", "start_timestamp": "00:09:39", "end_timestamp": "00:10:16", "start_second": 579, "end_second": 616, "url": "https://www.youtube.com/watch?v=xp0O2vi8DX4&t=579s", "title": "How to motivate yourself to change your behavior | Tali Sharot | TEDxCambridge", "thumbnail": "https://i.ytimg.com/vi/xp0O2vi8DX4/maxresdefault.jpg"} {"video_id": "xp0O2vi8DX4", "text": "and what it's showing you is a signal in the emotional center of your brain when you hear about the opinion of others. And what we found was that this signal can predict how likely you are to conform at a later time, how likely you are to change your behavior. So, the British government are using this principle to get people to pay taxes on time. In an old letter that they sent to people who \"forgot\" to pay taxes on time, they simply stressed how important it was pay taxes, and that didn't help. 
Then, they added one sentence,", "start_timestamp": "00:10:16", "end_timestamp": "00:10:55", "start_second": 616, "end_second": 655, "url": "https://www.youtube.com/watch?v=xp0O2vi8DX4&t=616s", "title": "How to motivate yourself to change your behavior | Tali Sharot | TEDxCambridge", "thumbnail": "https://i.ytimg.com/vi/xp0O2vi8DX4/maxresdefault.jpg"} {"video_id": "xp0O2vi8DX4", "text": "and that sentence said: \"Nine out of ten people in Britain pay their taxes on time.\" And that one sentence enhanced compliance within that group by 15%, and it's thought to bring into the British government 5.6 billion pounds. So, highlighting what other people are doing is a really strong incentive. The other principle is immediate rewards. So, every time the staff washed their hand, they could see the numbers go up on the board and it made them feel good. And knowing that in advance made them do something that they, otherwise, may not want to do.", "start_timestamp": "00:10:55", "end_timestamp": "00:11:36", "start_second": 655, "end_second": 696, "url": "https://www.youtube.com/watch?v=xp0O2vi8DX4&t=655s", "title": "How to motivate yourself to change your behavior | Tali Sharot | TEDxCambridge", "thumbnail": "https://i.ytimg.com/vi/xp0O2vi8DX4/maxresdefault.jpg"} {"video_id": "xp0O2vi8DX4", "text": "Now, this works because we value immediate rewards, rewards that we can get now, more than rewards that we can get in the future. And people tend to think it's because we don't care about the future, but that's completely wrong, we all care about our future, right? We want to be happy and healthy in the future, we want to be successful, but the future is so far away. I mean, maybe you'll behave badly now and you'll be fine in the future, and maybe you'll be altogether dead. 
(Laughter) So, the here-and-now you would rather have that tangible drink,", "start_timestamp": "00:11:36", "end_timestamp": "00:12:13", "start_second": 696, "end_second": 733, "url": "https://www.youtube.com/watch?v=xp0O2vi8DX4&t=696s", "title": "How to motivate yourself to change your behavior | Tali Sharot | TEDxCambridge", "thumbnail": "https://i.ytimg.com/vi/xp0O2vi8DX4/maxresdefault.jpg"} {"video_id": "xp0O2vi8DX4", "text": "that tangible T-bone, rather than something that's uncertain in the future. If you think about it, it's not altogether irrational, right? You're choosing something sure now rather than something that is unsure in the future. But what will happen if you reward people now for doing actions that are good for them in the future? Studies show that giving people immediate rewards make them more likely to quit smoking, more likely to start exercising, and this effect lasts for at least six months, because not smoking becomes associated with a reward,", "start_timestamp": "00:12:13", "end_timestamp": "00:12:52", "start_second": 733, "end_second": 772, "url": "https://www.youtube.com/watch?v=xp0O2vi8DX4&t=733s", "title": "How to motivate yourself to change your behavior | Tali Sharot | TEDxCambridge", "thumbnail": "https://i.ytimg.com/vi/xp0O2vi8DX4/maxresdefault.jpg"} {"video_id": "xp0O2vi8DX4", "text": "and exercising becomes associated with a reward, and it becomes a habit, it becomes a lifestyle. So, we can reward ourselves and others now for behaving in ways that are good for us in the future and that's a way for us to bridge the temporal gap. And the third principle is progress monitoring. So, the electronic board focused the medical staff attention on improving their performance. 
This is an image from a study that we conducted, that shows you brain activity suggestive of efficient coding of positive information about the future.", "start_timestamp": "00:12:52", "end_timestamp": "00:13:27", "start_second": 772, "end_second": 807, "url": "https://www.youtube.com/watch?v=xp0O2vi8DX4&t=772s", "title": "How to motivate yourself to change your behavior | Tali Sharot | TEDxCambridge", "thumbnail": "https://i.ytimg.com/vi/xp0O2vi8DX4/maxresdefault.jpg"} {"video_id": "xp0O2vi8DX4", "text": "And what we found was that the brain does a really good job at this, but it doesn't do such a good job at processing negative information about the future. So, what does this mean? It means that, if you're trying to get people's attention, you might want to highlight the progress, not the decline. So, for example, if you take that kid with the cigarette, you might want to tell them: \"You know, if you stop smoking, you'll become better at sports.\" Highlight the progress, not the decline. Now, before I sum up, let me just share this small anecdote with you.", "start_timestamp": "00:13:27", "end_timestamp": "00:14:03", "start_second": 807, "end_second": 843, "url": "https://www.youtube.com/watch?v=xp0O2vi8DX4&t=807s", "title": "How to motivate yourself to change your behavior | Tali Sharot | TEDxCambridge", "thumbnail": "https://i.ytimg.com/vi/xp0O2vi8DX4/maxresdefault.jpg"} {"video_id": "xp0O2vi8DX4", "text": "A few weeks ago, I got home and I found this bill on my fridge. And was really surprised because there's never any bills on my fridge. So, I was wondering why my husband decided to put that on our fridge. And so, looking at the bill, I could see that what this bill was trying to do is get me to be more efficient with my electricity use. And how was it doing it? Social incentives, immediate rewards and progress monitoring. Let me show you. Here are the social incentives. 
In gray is the energy use on the average energy use of people in my neighborhood.", "start_timestamp": "00:14:03", "end_timestamp": "00:14:38", "start_second": 843, "end_second": 878, "url": "https://www.youtube.com/watch?v=xp0O2vi8DX4&t=843s", "title": "How to motivate yourself to change your behavior | Tali Sharot | TEDxCambridge", "thumbnail": "https://i.ytimg.com/vi/xp0O2vi8DX4/maxresdefault.jpg"} {"video_id": "xp0O2vi8DX4", "text": "And in blue is my energy use, and in green is the most efficient neighbor. And my reaction to this was -- my immediate reaction was: \"I'm a little bit better than average\" (Laughter) -- a tiny bit, but still... and my husband had exactly the same reaction -- and \"I want to get to the green bar.\" And then, I got a smiley face. That was my immediate reward and it was telling me, \"You're doing good,\" and it made me want to put this on my fridge. (Laughter) And although I have this one smiley face, I can see an opportunity there to get two smiley faces.", "start_timestamp": "00:14:38", "end_timestamp": "00:15:15", "start_second": 878, "end_second": 915, "url": "https://www.youtube.com/watch?v=xp0O2vi8DX4&t=878s", "title": "How to motivate yourself to change your behavior | Tali Sharot | TEDxCambridge", "thumbnail": "https://i.ytimg.com/vi/xp0O2vi8DX4/maxresdefault.jpg"} {"video_id": "xp0O2vi8DX4", "text": "(Laughter) So, there's an opportunity for progress and it's showing me my progress throughout the year, how my energy use changes throughout the year. And the last thing this bill gave me: it gave me a sense of control. So, it gave me a sense of I was in control of my use of electricity. And that is a really important thing, if you try to get people to change their behavior, because the brain is constantly trying to seek ways to control its environment. 
It's one of the principles of what the brain is actually doing.", "start_timestamp": "00:15:15", "end_timestamp": "00:15:47", "start_second": 915, "end_second": 947, "url": "https://www.youtube.com/watch?v=xp0O2vi8DX4&t=915s", "title": "How to motivate yourself to change your behavior | Tali Sharot | TEDxCambridge", "thumbnail": "https://i.ytimg.com/vi/xp0O2vi8DX4/maxresdefault.jpg"} {"video_id": "xp0O2vi8DX4", "text": "And so, giving people a sense of control is a really important motivator. OK. So, what am I not saying? I'm not saying that we do not need to communicate risks, and I'm not saying that there's one-solution-fits-all, but I am saying that, if we want to motivate change, we might want to rethink how we do it, because fear, the fear of losing your health, the fear of losing money, induces inaction, while the thrill of a gain induces action. And so, to change behavior in ourselves and in others, we may want to try these positive strategies", "start_timestamp": "00:15:47", "end_timestamp": "00:16:28", "start_second": 947, "end_second": 988, "url": "https://www.youtube.com/watch?v=xp0O2vi8DX4&t=947s", "title": "How to motivate yourself to change your behavior | Tali Sharot | TEDxCambridge", "thumbnail": "https://i.ytimg.com/vi/xp0O2vi8DX4/maxresdefault.jpg"} {"video_id": "eYgPJ_7BkEw", "text": "hi today we're looking at FixMatch: simplifying semi-supervised learning with consistency and confidence by Kihyuk Sohn, David Berthelot and others of Google Research so this paper concerns semi-supervised learning so what does semi-supervised learning mean in semi-supervised learning you have a data set of labeled samples so right you have this data set of X's and corresponding Y labels but this data set sometimes is very small now you have a much bigger data set of unlabeled examples just X's with no labels right so you don't know", "start_timestamp": "00:00:00", "end_timestamp": "00:00:50", "start_second": 0, "end_second": 50, "url": 
"https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=0s", "title": "FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence", "thumbnail": "https://i.ytimg.com/vi/eYgPJ_7BkEw/hqdefault.jpg"} {"video_id": "eYgPJ_7BkEw", "text": "what the labels of the of the unlabeled examples are but what you would like to do is you would like to use this really large data set in order to help you with learning the association between the data points and the labels so for example in this case you would have something like like an image classification data set and I'm gonna take the example here of medical data so you have a pictures of lungs let's draw a long here that is an ugly long you have pictures of lungs and whether or not they are they have like a tumor in", "start_timestamp": "00:00:50", "end_timestamp": "00:01:30", "start_second": 50, "end_second": 90, "url": "https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=50s", "title": "FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence", "thumbnail": "https://i.ytimg.com/vi/eYgPJ_7BkEw/hqdefault.jpg"} {"video_id": "eYgPJ_7BkEw", "text": "them right so medical data is very hard to get especially labeled medical data because you need first of all you need the data itself but then you also need like like one at least one but ideally like three radiologists to look at whether or not this is a good or a bad image and label it so it's usually very expensive to collect that data but you might have plenty of unlabeled data right you might just be able to go who you're through through some database and find like anonymized undiagnosed long scans somewhere lying around the same", "start_timestamp": "00:01:30", "end_timestamp": "00:02:10", "start_second": 90, "end_second": 130, "url": "https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=90s", "title": "FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence", "thumbnail": "https://i.ytimg.com/vi/eYgPJ_7BkEw/hqdefault.jpg"} {"video_id": 
"eYgPJ_7BkEw", "text": "with image like other images so labeling images is pretty human intensive but the internet contains like a whole bunch of unlabeled images so the task of semi-supervised learning is how do you use this unlabeled data set in order to make your classification on the label data set easier and fix match combines two approaches to this in a smart way namely the consistency and confidence approach right so what does what does well it will jump right into into the method so basically what you want to do is you want to say my loss", "start_timestamp": "00:02:10", "end_timestamp": "00:02:51", "start_second": 130, "end_second": 171, "url": "https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=130s", "title": "FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence", "thumbnail": "https://i.ytimg.com/vi/eYgPJ_7BkEw/hqdefault.jpg"} {"video_id": "eYgPJ_7BkEw", "text": "that I optimized right this is my loss consists of two parts namely a supervised loss which is your classic classification loss right plus an unsupervised loss right and then you have like some sort of a trade-off parameter in front now your supervised loss here this is where this is just the the cross-entropy let's call it h between your predicted labels and your the actual true labels right and the predicted labels say they can be you know kind of a distribution over labels now the magic of course is here in the unsupervised loss and this", "start_timestamp": "00:02:51", "end_timestamp": "00:03:34", "start_second": 171, "end_second": 214, "url": "https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=171s", "title": "FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence", "thumbnail": "https://i.ytimg.com/vi/eYgPJ_7BkEw/hqdefault.jpg"} {"video_id": "eYgPJ_7BkEw", "text": "unsupervised loss this is what's described here in this part right so the unsupervised loss is going to be this age between P and Q and we'll see what P and Q is so if for the 
unsupervised loss you of course want to start with an unlabeled example then you have the same sample go into two different pipelines in the first pipeline up here what you do is what's called weakly augment it and here we're dealing with images so we have to talk about image augmentation so image augmentation has long been used in supervised learning to kind of give you", "start_timestamp": "00:03:34", "end_timestamp": "00:04:19", "start_second": 214, "end_second": 259, "url": "https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=214s", "title": "FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence", "thumbnail": "https://i.ytimg.com/vi/eYgPJ_7BkEw/hqdefault.jpg"} {"video_id": "eYgPJ_7BkEw", "text": "more it's kind of a cheat to give you more training data so if you have an image right of let's say a famous cat you can obtain more training data for example by random cropping so you can random crop let's say we just take this bottom right corner here and then we enlarge it to the original size right then it is still sort of a cat but it's just a part of a cat right but usually that helps because you say okay my image data set is just pictures of animals right it's entirely conceivable that someone held", "start_timestamp": "00:04:19", "end_timestamp": "00:05:04", "start_second": 259, "end_second": 304, "url": "https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=259s", "title": "FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence", "thumbnail": "https://i.ytimg.com/vi/eYgPJ_7BkEw/hqdefault.jpg"} {"video_id": "eYgPJ_7BkEw", "text": "the camera like this or like this right so technically in terms of generalizing to a test set both these data points should be valid so I'm just gonna add both to my training data so you can see how from one training data point you can get many training data points just by doing this cropping what you can also do is you can flip it left-right you just swap the pixels left
right and usually a cat that has a little dark spot here is still a cat when it has the little dark spot over there right but to your", "start_timestamp": "00:05:04", "end_timestamp": "00:05:41", "start_second": 304, "end_second": 341, "url": "https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=304s", "title": "FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence", "thumbnail": "https://i.ytimg.com/vi/eYgPJ_7BkEw/hqdefault.jpg"} {"video_id": "eYgPJ_7BkEw", "text": "classifier those are two different samples so you can do many of those things and they have two kinds of augmentations they have what they call weakly augmented and strongly augmented right so in the weakly augmented pipeline I think they just crop and they shift and they rotate or something like this so you can see here this horsey here it is something like it's cropped here a bit then it is turned slightly to the left and then yeah I think that's it so they crop they rotate and then they also flip", "start_timestamp": "00:05:41", "end_timestamp": "00:06:24", "start_second": 341, "end_second": 384, "url": "https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=341s", "title": "FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence", "thumbnail": "https://i.ytimg.com/vi/eYgPJ_7BkEw/hqdefault.jpg"} {"video_id": "eYgPJ_7BkEw", "text": "horizontally at random in like 50 percent of the time so these are what's called weakly augmented the goal here is just to kind of obtain a bit more training data alright so you run this through your model through your classification model as you would a regular sample and you get a prediction now from your prediction you can take the highest prediction here and that is going to be your pseudo label so this is P of Y this is your distribution that you estimate right so and this if you just take the max this is", "start_timestamp": "00:06:24", "end_timestamp": "00:07:03", "start_second": 384, 
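The weakly-augmented pipeline described here (a random crop plus a roughly 50% horizontal flip) can be sketched in a few lines of numpy; the crop fraction, the flip probability, and the function names below are illustrative assumptions, not the actual FixMatch implementation.

```python
import numpy as np

def random_crop(img, crop_h, crop_w, rng):
    """Cut a random (crop_h, crop_w) window out of an H x W x C image."""
    h, w = img.shape[:2]
    top = rng.integers(0, h - crop_h + 1)
    left = rng.integers(0, w - crop_w + 1)
    return img[top:top + crop_h, left:left + crop_w]

def horizontal_flip(img):
    """Mirror the image left-right: a cat with a spot on the left is still a cat."""
    return img[:, ::-1]

def weak_augment(img, rng, crop_frac=0.75, flip_prob=0.5):
    """Weak augmentation: random crop to crop_frac of each side, then maybe flip."""
    h, w = img.shape[:2]
    out = random_crop(img, int(h * crop_frac), int(w * crop_frac), rng)
    if rng.random() < flip_prob:
        out = horizontal_flip(out)
    return out
```

Each call yields a slightly different view of the same image, which is how one data point becomes many training samples.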
"end_second": 423, "url": "https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=384s", "title": "FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence", "thumbnail": "https://i.ytimg.com/vi/eYgPJ_7BkEw/hqdefault.jpg"} {"video_id": "eYgPJ_7BkEw", "text": "going to be your Y hat right and this is what they call a pseudo label sorry you'll see why it is called a pseudo label so the other pipeline here is the strong augmentation pipeline now in weak augmentation we just wanted to get somewhere training it in strong augmentation now the goal is to really screw up that picture to the point where it's still you know you could recognize it in the same class but you can see here the augmentations they go wild so you play around with the color with the hue you play around with the light", "start_timestamp": "00:07:03", "end_timestamp": "00:07:39", "start_second": 423, "end_second": 459, "url": "https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=423s", "title": "FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence", "thumbnail": "https://i.ytimg.com/vi/eYgPJ_7BkEw/hqdefault.jpg"} {"video_id": "eYgPJ_7BkEw", "text": "intensity right with the contrast you can do many many things you can see this this image looks basically nothing like this image buddied you can still kind of recognize it as a horse but the strongly augmented data is much more distorted than the weakly augmented data and that's the point so also you send the strongly augmented data through the model and again you get a prediction right and now is that the trick is you take the label from here and you you take that as if it were the true label right you take that as if it were the", "start_timestamp": "00:07:39", "end_timestamp": "00:08:26", "start_second": 459, "end_second": 506, "url": "https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=459s", "title": "FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence", "thumbnail": 
"https://i.ytimg.com/vi/eYgPJ_7BkEw/hqdefault.jpg"} {"video_id": "eYgPJ_7BkEw", "text": "true label and you form a loss from this prediction being the model prediction as if this thing here that also comes from the model as if that was the true label right that's why it's called a pseudo label because it is a label that you produce from the model itself now of course if these were to be the same picture it would be kind of pointless right that's why you see there needs to be a weekly and a strongly augmented pipeline I am pretty sure ammo if you want a more basic version of this make this just clean", "start_timestamp": "00:08:26", "end_timestamp": "00:09:06", "start_second": 506, "end_second": 546, "url": "https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=506s", "title": "FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence", "thumbnail": "https://i.ytimg.com/vi/eYgPJ_7BkEw/hqdefault.jpg"} {"video_id": "eYgPJ_7BkEw", "text": "so no augmentation and make this augment it right that's that's how you can think of it the fact that there is weak and here strong augmentation I think is just a your classic trick to get more training data but in essence you can think of it as this is here the clean thing you just want to produce a label and then you want the that an Augmented version of the image has the same label now you can think of it shortly what does this model learn if you just have this you remember I think the important thing is always to remember that there", "start_timestamp": "00:09:06", "end_timestamp": "00:09:42", "start_second": 546, "end_second": 582, "url": "https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=546s", "title": "FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence", "thumbnail": "https://i.ytimg.com/vi/eYgPJ_7BkEw/hqdefault.jpg"} {"video_id": "eYgPJ_7BkEw", "text": "are two components here right there is first the supervised loss this is the important one ultimately because we have the true labels 
right and then second there is the unsupervised loss which is just an auxiliary loss that is supposed to just kind of tune our model to the nature of the data right so don't forget that this down here just concerns the unsupervised part of that loss so if you think what does the model actually learn when you train it like this it basically learns to revert this strong augmentation right it basically", "start_timestamp": "00:09:42", "end_timestamp": "00:10:26", "start_second": 582, "end_second": 626, "url": "https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=582s", "title": "FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence", "thumbnail": "https://i.ytimg.com/vi/eYgPJ_7BkEw/hqdefault.jpg"} {"video_id": "eYgPJ_7BkEw", "text": "says hey model whenever I give you a weakly augmented image and I distort it heavily I want the label to be the same so the model at the end of the training will be able to basically map any strongly augmented picture to the same class as a weakly augmented picture if it comes from the same source right so the model basically learns to ignore these kinds of augmentations that's what", "start_timestamp": "00:10:26", "end_timestamp": "00:11:14", "start_second": 626, "end_second": 674, "url": "https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=626s", "title": "FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence", "thumbnail": "https://i.ytimg.com/vi/eYgPJ_7BkEw/hqdefault.jpg"} {"video_id": "eYgPJ_7BkEw", "text": "this loss over here does it basically says these sorts of augmentations these sorts of distortions of images please ignore those because I always want you to output the same label here in the prediction as if I had not distorted or just weakly distorted the image so that's what you have to keep in mind that this loss is designed 
to make the model not distinguish between differently augmented versions of the same image and interestingly that really seems to help with the supervised loss right", "start_timestamp": "00:11:14", "end_timestamp": "00:11:57", "start_second": 674, "end_second": 717, "url": "https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=674s", "title": "FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence", "thumbnail": "https://i.ytimg.com/vi/eYgPJ_7BkEw/hqdefault.jpg"} {"video_id": "eYgPJ_7BkEw", "text": "my kind of hypothesis is that all these methods what they're kind of trying to do is to just tune the neural network to the let's say the orders of magnitude of the input data and also to the kinds of augmentations that the humans come up with and that's a very important point so the augmentations and here we said you know it's kind of a rotation and the crop the kind of augmentation really seemed to play a role so this paper finds that on CIFAR-10 where the state of the art I believe is something like ninety six ninety seven", "start_timestamp": "00:11:57", "end_timestamp": "00:12:37", "start_second": 717, "end_second": 757, "url": "https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=717s", "title": "FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence", "thumbnail": "https://i.ytimg.com/vi/eYgPJ_7BkEw/hqdefault.jpg"} {"video_id": "eYgPJ_7BkEw", "text": "percent accuracy on CIFAR-10 with just two hundred and fifty labeled examples right now the usual data set size is about fifty thousand it goes to ninety four point nine percent so almost 95 percent accuracy with the state of the art being like ninety seven this is incredible with just two hundred and fifty labeled examples crazy right and with only four labels per class it gets eighty eight point six percent so that's just forty images with labels they get 88.6 percent accuracy", "start_timestamp": "00:12:37", 
"end_timestamp": "00:13:25", "start_second": 757, "end_second": 805, "url": "https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=757s", "title": "FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence", "thumbnail": "https://i.ytimg.com/vi/eYgPJ_7BkEw/hqdefault.jpg"} {"video_id": "eYgPJ_7BkEw", "text": "compared to the 97 percent that you get with like 50,000 images that is pretty pretty cool right simply by having all other images not labeled but pseudo labeled and consistency regularized right so the the two to two things that are combined by fixed match again or consistency regularization which basically it means that the model should output similar predictions when fed perturbed versions of the same image right this they they're really forthcoming that they are not the ones who invented this they just combine the", "start_timestamp": "00:13:25", "end_timestamp": "00:14:09", "start_second": 805, "end_second": 849, "url": "https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=805s", "title": "FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence", "thumbnail": "https://i.ytimg.com/vi/eYgPJ_7BkEw/hqdefault.jpg"} {"video_id": "eYgPJ_7BkEw", "text": "consistency regularization with the pseudo labeling now the pseudo labeling they have also not invented the pseudo labeling leverages the idea that we should use the model itself to obtain artificial labels for unlabeled data we've seen a lot of papers in the last few months or years where it's like the teacher teaches the student and then the student teaches the teacher model again and so on so that they simply combine the two methods in a clever way they have one last thing that is not in this drawing namely they only use the pseudo", "start_timestamp": "00:14:09", "end_timestamp": "00:14:48", "start_second": 849, "end_second": 888, "url": "https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=849s", "title": "FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence", 
"thumbnail": "https://i.ytimg.com/vi/eYgPJ_7BkEw/hqdefault.jpg"} {"video_id": "eYgPJ_7BkEw", "text": "label they have a break right here and they only use the pseudo label if if the confidence if this P of Y here is above a certain threshold so they don't take all the pseudo labels but they only take the labels where the model is fairly sure about right so they haven't actually an ablation study where they show that this is reasonably reasonably important and if you go down here where they say ablation or is it ablation ablation study oh yeah something I also find cool if you just give one image per class this one image per class ten", "start_timestamp": "00:14:48", "end_timestamp": "00:15:32", "start_second": 888, "end_second": 932, "url": "https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=888s", "title": "FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence", "thumbnail": "https://i.ytimg.com/vi/eYgPJ_7BkEw/hqdefault.jpg"} {"video_id": "eYgPJ_7BkEw", "text": "images that are labeled it still gets like 78 percent accuracy I think the images are chosen as good representations of their class but still one image per class pretty pretty cool an important part of this is the ablation study where they say okay we want to tease apart why this algorithm why this on semi-supervised learning technique works so well and they find several important factors they find for example that they're all mentation strategy is extremely important so how they augment the images is very important you see here the error of this", "start_timestamp": "00:15:32", "end_timestamp": "00:16:20", "start_second": 932, "end_second": 980, "url": "https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=932s", "title": "FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence", "thumbnail": "https://i.ytimg.com/vi/eYgPJ_7BkEw/hqdefault.jpg"} {"video_id": "eYgPJ_7BkEw", "text": "4.8% and the 250 label split if you change up the if you change up the the augmentation 
strategies your error gets higher right and so they say we use Cutout and we measure the effect of Cutout we find that both Cutout and CTAugment are required to obtain the best performance removing either results in a comparable increase in error rate we've seen before for example they went from some 93 point something percent to ninety four point something percent from the previous state-of-the-art", "start_timestamp": "00:16:20", "end_timestamp": "00:17:15", "start_second": 980, "end_second": 1035, "url": "https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=980s", "title": "FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence", "thumbnail": "https://i.ytimg.com/vi/eYgPJ_7BkEw/hqdefault.jpg"} {"video_id": "eYgPJ_7BkEw", "text": "semi-supervised learning and here they find that simply changing the augmentation strategy changes the error by more than a percent so you can just see this in context of what's important here right they say again the ratio of unlabeled data seems pretty important we observe a significant decrease in error rates by using large amounts of unlabeled data right then the optimizer and learning schedule seem to be very important as well in that they say SGD with momentum works much better than", "start_timestamp": "00:17:15", "end_timestamp": "00:17:58", "start_second": 1035, "end_second": 1078, "url": "https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=1035s", "title": "FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence", "thumbnail": "https://i.ytimg.com/vi/eYgPJ_7BkEw/hqdefault.jpg"} {"video_id": "eYgPJ_7BkEw", "text": "Adam and then they use this decreasing learning rate schedule this cosine learning rate schedule so there seem to be a lot of hyper parameters that are fairly important here and you can see that the gains are substantial sometimes but they aren't like through-the-roof substantial where you can make a 
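The decreasing cosine learning-rate schedule mentioned here can be sketched as follows; the 7/16 constant is the schedule reported in the FixMatch paper (lr = eta * cos(7*pi*k / (16*K))), so treat the exact form as an assumption to check against the paper.

```python
import math

def cosine_lr(step, total_steps, base_lr):
    """Cosine-decay schedule: the learning rate starts at base_lr and
    decays smoothly over training, following base_lr * cos(7*pi*k / (16*K))."""
    return base_lr * math.cos(7 * math.pi * step / (16 * total_steps))
```

At step 0 the rate is exactly base_lr; at the final step it has decayed to roughly 20% of base_lr rather than all the way to zero.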
good argument that it is unclear how much really comes from this clever combination that FixMatch proposes and how much also just comes from whether or not you set the hyper parameters correctly and exactly how", "start_timestamp": "00:17:58", "end_timestamp": "00:18:47", "start_second": 1078, "end_second": 1127, "url": "https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=1078s", "title": "FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence", "thumbnail": "https://i.ytimg.com/vi/eYgPJ_7BkEw/hqdefault.jpg"} {"video_id": "eYgPJ_7BkEw", "text": "much computation are you able to throw at selecting your hyper parameters so that seems to be a bit of a pain point for me they also say we find that tuning the weight decay is exceptionally important for low label regimes right choosing a value that is just one order of magnitude larger or smaller than optimal can cost ten percentage points or more and so all of that seems to me that this kind of research where you're nibbling for half or single percentage points in accuracy while a", "start_timestamp": "00:18:47", "end_timestamp": "00:19:37", "start_second": 1127, "end_second": 1177, "url": "https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=1127s", "title": "FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence", "thumbnail": "https://i.ytimg.com/vi/eYgPJ_7BkEw/hqdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "this is Earl Nightingale the purpose of this recording is to tell you about and try to condense one of the most amazing books ever written Think and Grow Rich by Napoleon Hill without question this single book has had a greater influence on the lives accomplishments and fortunes of more individuals than any other work of its kind all over the free world there are literally thousands of successful men in all lines of work who are where they are today because they once picked up and bought a copy of Think and Grow Rich and 
they'll be quick", "start_timestamp": "00:00:00", "end_timestamp": "00:00:31", "start_second": 0, "end_second": 31, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=0s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "to tell you so I first discovered this remarkable book in the fall of 1949 it was an enormous help to me it helped me decide once and for all how I was to accomplish my goal it unified my thinking and gave me a straight clear Road to the point I had decided to reach one of my closest friends found the book several years ago and stayed home for three days reading and digesting its material and he then went on to reach the top in his industry i sat in richly paneled carpeted executive offices and listened to world", "start_timestamp": "00:00:31", "end_timestamp": "00:01:01", "start_second": 31, "end_second": 61, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=31s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "famous business leaders some of them old enough to be my father tell me that everything worked out fine after they had read Think and Grow Rich now what's the secret of this amazing book why has this book out of all the thousands of self-help books remain the one towering giant I think to understand this you have to know Napoleon Hill as I do he certainly was not the first man to be appalled at the poverty and seemingly endless struggle and lack of direction he saw about him as a boy and as a young man nor was he the first to write on the", "start_timestamp": "00:01:01", "end_timestamp": "00:01:32", "start_second": 61, "end_second": 92, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=61s", "title": "Napoleon Hill's Think & Grow Rich 
Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "subject but he possessed two unique highly developed abilities seldom found in one man the first was in the manner in which he approached his subject Napoleon Hill went after the answers to achievement in the same way a scientist seeks to open to the light of reason a secret of nature he went after the solution to accomplishment in the same way Thomas Edison discovered the solution to the electric light relentlessly indefatigably implacably until the truth which had been there all the time was revealed to him his second", "start_timestamp": "00:01:32", "end_timestamp": "00:02:02", "start_second": 92, "end_second": 122, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=92s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "important ability was the knack or skill of writing about his findings in such a way that it was instantly understood intellectually but what is perhaps even more important for this particular subject understood emotionally as well when the last page of Think and Grow Rich was read the hand which put the book down on the table was a different hand the man who then stood and walked out into the world was a different a changed man the suffocating entangling webs of self-imposed frustration and inaction had fallen", "start_timestamp": "00:02:02", "end_timestamp": "00:02:33", "start_second": 122, "end_second": 153, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=122s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "away and now the way was clear the man was now the possessor of the unique unseen talent
for turning dreams into reality thoughts into things so-called fate or the idle effects of exterior circumstances were no longer in command he who had been a passenger was now suddenly the captain to begin we have to understand the simple truth the principle or philosophy which is the supporting structure of this work unless whatever it is you build is based on truth you will end with the entire structure fallen and scattered about you", "start_timestamp": "00:02:33", "end_timestamp": "00:03:07", "start_second": 153, "end_second": 187, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=153s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "like the armor of Homer's ancient warriors it simply cannot stand it cannot withstand the test of time the reason Think and Grow Rich has withstood the test of time is because it rests on the broad clean foundation upon which may also be found every accomplishment of man the clear unchallengeable fact that everything begins with an idea a philosophy based on the fact that riches of every kind begin with a state of mind that one may start with nothing but thoughts ideas and organized plans thoughts are things incredibly powerful things when", "start_timestamp": "00:03:07", "end_timestamp": "00:03:45", "start_second": 187, "end_second": 225, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=187s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "mixed with definiteness of purpose persistence and the burning desire for their translation into material objects or riches riches being whatever it is you happen to want wise men have been saying this for centuries and just recently Charles a sarami wrote the truth is that the human mind is
as real an organism as any muscle in the body but far greater in potential power and like muscle fiber it can be strengthened to lead on to unimaginable heights if you know what you want and if you want it strongly enough to muster the kind of", "start_timestamp": "00:03:45", "end_timestamp": "00:04:22", "start_second": 225, "end_second": 262, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=225s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "persistence that simply cannot be stopped you will most certainly achieve it by controlling your mind you can control your destiny here on earth with this as our foundation let's talk about Napoleon Hill's famous 13 proven steps to riches as found in his book Think and Grow Rich remembering of course that riches are whatever it is you happen to want and right here let me make two important points the first is that whenever you listen to this record have a notebook handy and make notes as we go along the second is that this record was produced", "start_timestamp": "00:04:22", "end_timestamp": "00:04:57", "start_second": 262, "end_second": 297, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=262s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "for your own personal use to go with you on your own exciting journey to play this record for a group will prove to be of only temporary help make sure you have your own personal copy to play again and again particularly at those times when you may feel yourself getting off the track and now Napoleon Hill's famous 13 principles you will notice that we have separated each principle by banding them on the record in this way you're given quick access to any particular principle to which you
may wish to return for reference the first", "start_timestamp": "00:04:57", "end_timestamp": "00:05:33", "start_second": 297, "end_second": 333, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=297s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "principle desire here is the starting point for all achievement that first step toward riches but it's right here that we so often run into a stumbling block a person will say I know what I desire but can I get it we'll get into this business of doubt later but once and for all let's clear up this point of whether or not you can accomplish that which you desire with all your heart I think it was best expressed by Emerson who wrote there is nothing capricious in nature and the implanting of a desire indicates that", "start_timestamp": "00:05:33", "end_timestamp": "00:06:07", "start_second": 333, "end_second": 367, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=333s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "its gratification is in the constitution of the creature that feels it in other words you would not have the desire unless you were capable of its achievement each of us has a built-in governor and our desires are modified by our abilities and leanings whatever it is that you desire with all your heart understand once and for all that it can and should be yours in Think and Grow Rich Napoleon Hill cites example after example of why your burning desire is nothing more than an accurate picture of what you will one day become", "start_timestamp": "00:06:07", "end_timestamp": "00:06:43", "start_second": 367, "end_second": 403, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=367s", "title":
"Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "so right here firmly establish in your mind that which you desire more than anything else for as Helvétius put it by annihilating the desires you annihilate the mind every man without passions has within him no principle of action nor motive to act a good way to determine whether or not you really have a burning desire is to examine the way you go after it if you go after that which you think you desire tentatively timidly in an attempt to play it safe you don't have a burning desire at all you can't get to second base if you", "start_timestamp": "00:06:43", "end_timestamp": "00:07:17", "start_second": 403, "end_second": 437, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=403s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "keep one foot on first but if you're willing to burn your bridges behind you and say once and for all this is it this is what I will do and I will never retreat I'll never go back then you have the sort of desire that can only end in success it takes that kind of resolve to be able to keep picking yourself up after the falls you're bound to take the only people who don't make mistakes are those who never try anything the timid feeders in the lagoon who never venture into the broad deep sea beyond well these principles", "start_timestamp": "00:07:17", "end_timestamp": "00:07:50", "start_second": 437, "end_second": 470, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=437s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "will work for anything you may want
a more harmonious home life a more successful career for our example let's say your desire happens to be more money to better care for your family and provide for your future years to get your share of the prosperity that lies ahead Napoleon Hill gives us six definite practical steps to follow 1 fix in your mind the exact amount of money you desire it is not sufficient merely to say I want plenty of money be definite as to the amount there's a psychological reason for definiteness", "start_timestamp": "00:07:50", "end_timestamp": "00:08:20", "start_second": 470, "end_second": 500, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=470s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "which will be described in a subsequent principle 2 determine exactly what you intend to give in return for the money you desire there's no such reality as something for nothing 3 establish a definite date when you intend to possess the money you desire and 4 create a definite plan for carrying out your desire and begin at once whether you're ready or not to put this plan into action 5 write out a clear concise statement of the amount of money you intend to acquire name the time limit for its acquisition state what you", "start_timestamp": "00:08:20", "end_timestamp": "00:08:56", "start_second": 500, "end_second": 536, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=500s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "intend to give in return for the money and describe clearly the plan through which you intend to accumulate it 6 read your written statement aloud twice daily once just before retiring at night and once after arising in the morning as you read see and feel and
believe yourself already in possession of the money or whatever your goal happens to be it's important that you follow these instructions to the letter play this part of the record over until you have it down to your satisfaction for this is by far the most important of", "start_timestamp": "00:08:56", "end_timestamp": "00:09:28", "start_second": 536, "end_second": 568, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=536s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "the 13 principles and this chapter of the book ends with these words through some strange and powerful principle of mental chemistry which she has never divulged nature wraps up in the impulse of strong desire that something which recognizes no such word as impossible and accepts no such reality as failure the second principle is faith you never would have even thought of your main desire unless faith were tugging at your mind and if you find it difficult at times to have faith in yourself you can be certain that you can have faith in", "start_timestamp": "00:09:28", "end_timestamp": "00:10:05", "start_second": 568, "end_second": 605, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=568s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "these principles Napoleon Hill writes faith is a state of mind which may be induced or created by affirmation or repeated instructions to the subconscious mind through the principle of conscious autosuggestion conscious autosuggestion simply means a suggestion by yourself to yourself just as an autobiography is a biography written by the person it's about by getting a mental image of yourself already having accomplished your main desire over and over again you
will muster the faith you need faith is vital to accomplishment the Emperor Napoleon", "start_timestamp": "00:10:05", "end_timestamp": "00:10:39", "start_second": 605, "end_second": 639, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=605s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "once said all the scholastic scaffolding falls as a ruined edifice before one single word faith Pascal said faith affirms many things respecting which the senses are silent but nothing which they deny it is superior to their testimony but never opposed to it Goethe said epochs of faith are epochs of fruitfulness but epochs of unbelief however glittering are barren of all permanent good and as Schlegel put it in actual life every great enterprise begins with and takes its first forward step in faith have faith that you can accomplish that which", "start_timestamp": "00:10:39", "end_timestamp": "00:11:18", "start_second": 639, "end_second": 678, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=639s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "you seek for you would never have decided upon it unless it was meant for you to accomplish in his chapter on faith Napoleon Hill gives us a self-confidence formula first I know that I have the ability to achieve the object of my definite purpose in life therefore I demand of myself persistent continuous action toward its attainment and I here and now promise to render such action second I realize that dominant thoughts of my mind will eventually reproduce themselves in outward physical action and gradually transform themselves into physical", "start_timestamp": "00:11:18", "end_timestamp": "00:11:51", "start_second": 678, "end_second": 711,
"url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=678s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "reality therefore I will concentrate my thoughts for 30 minutes daily upon the task of thinking of the person I intend to become thereby creating in my mind a clear mental picture of that person third I know through the principle of autosuggestion any desire that I persistently hold in my mind will eventually seek expression through some practical means of attaining the object back of it therefore I will devote ten minutes daily to demanding of myself the development of self-confidence fourth I have clearly written down a description", "start_timestamp": "00:11:51", "end_timestamp": "00:12:24", "start_second": 711, "end_second": 744, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=711s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "of my definite chief aim in life and I will never stop trying until I have developed sufficient self-confidence for its attainment fifth I fully realize that no wealth or position can long endure unless built upon truth and justice therefore I will engage in no transaction which does not benefit all whom it affects I will succeed by attracting to myself the forces I wish to use and the cooperation of other people I will induce others to serve me because of my willingness to serve others I will eliminate hatred envy jealousy", "start_timestamp": "00:12:24", "end_timestamp": "00:12:57", "start_second": 744, "end_second": 777, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=744s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail":
"https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "selfishness and cynicism by developing love for all humanity because I know that a negative attitude toward others can never bring me success I will cause others to believe in me because I will believe in them and in myself in rereading Think and Grow Rich so that I could write this condensation for recording I was forcibly struck all over again by this great chapter on faith particularly the examples of how some of the world's greatest men have accomplished what appeared to be impossible through faith the third principle is autosuggestion", "start_timestamp": "00:12:57", "end_timestamp": "00:13:33", "start_second": 777, "end_second": 813, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=777s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "now we've already touched on this this chapter of the book tells us how through repeated suggestion the subconscious mind can be put to work for us it is the faculty of being able to concentrate your mind on your burning desire until your subconscious mind accepts it as fact and begins to devise ways of bringing it about here is where hunches come from sudden flashes of thought or inspiration guidance the instructions given in connection with the six steps in the second chapter will now be summarized and blended with the", "start_timestamp": "00:13:33", "end_timestamp": "00:14:01", "start_second": 813, "end_second": 841, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=813s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "principles covered by Napoleon Hill's chapter on autosuggestion first go into some quiet spot perhaps in bed at night close
your eyes and repeat aloud so you may hear your own words the written statement of the amount of money you intend to accumulate or a careful reaffirmation of whatever your goal happens to be the time limit for its accumulation and a description of the service or merchandise you intend to give in return for the money as you carry out these instructions see yourself already in possession of your", "start_timestamp": "00:14:01", "end_timestamp": "00:14:32", "start_second": 841, "end_second": 872, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=841s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "goal for example suppose that you intend to accumulate fifty thousand dollars by the 1st of January five years from now that you intend to give personal services in return for the money in the capacity of a salesman your written statement of your purpose should be similar to the following by the first day of January 19 whatever it happens to be I will have in my possession fifty thousand dollars which will come to me in various amounts from time to time during the interim in return for this money I will give the most efficient", "start_timestamp": "00:14:32", "end_timestamp": "00:15:03", "start_second": 872, "end_second": 903, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=872s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "service of which I'm capable rendering the fullest possible quantity and the best possible quality of service in the capacity of salesman of and here describe the product or service you intend to sell or whatever it is you do for a living it goes on I believe that I will have this money in my possession my faith is so strong that I can now see this
money before my eyes I can touch it with my hands it is now awaiting transfer to me at the time and in the proportion that I deliver the service I intend to render in return for it I am awaiting a plan by", "start_timestamp": "00:15:03", "end_timestamp": "00:15:34", "start_second": 903, "end_second": 934, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=903s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "which to accumulate this money and I will follow that plan when it is received second repeat this program night and morning until you can see in your imagination the money you intend to accumulate third place a written copy of your statement where you can see it night and morning and read it just before retiring and upon arising until it's been memorized as you carry out these instructions you are applying the principle of autosuggestion the fourth principle is specialized knowledge it is here that I think Napoleon Hill makes a", "start_timestamp": "00:15:34", "end_timestamp": "00:16:09", "start_second": 934, "end_second": 969, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=934s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "very important point knowledge is power only to the extent that it is organized into a definite plan of action and directed to a definite end to quote from the book before you can be sure of your ability to transmute desire into its monetary equivalent you will require specialized knowledge of the service merchandise or profession which you intend to offer in return for fortune perhaps you may need much more specialized knowledge than you have the ability or the inclination to acquire and if this should be true you", "start_timestamp": "00:16:09",
"end_timestamp": "00:16:38", "start_second": 969, "end_second": 998, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=969s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "may bridge your weakness through the aid of your mastermind group more on this later but for now realize that you must learn all you can about your specialty set aside a definite time every day for learning more about what it is you do for a living take the courses that are offered on your subject and associate with men who know your business well the fifth principle is imagination the imagination is literally the workshop wherein are fashioned all plans created by man the impulse the desire is given shape form and action through the aid of", "start_timestamp": "00:16:38", "end_timestamp": "00:17:12", "start_second": 998, "end_second": 1032, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=998s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "the imaginative faculty of the mind it has been said that man can create anything he can imagine as Napoleon Hill says and teaches whatever the mind of man can conceive and believe it can achieve man's only limitation within reason lies in his development and use of his imagination and subsequent motivation to action the great leaders of business industry finance and the great artists musicians poets and writers became great because they developed the power of self-motivation incidentally one of the best books ever put together on this", "start_timestamp": "00:17:12", "end_timestamp": "00:17:47", "start_second": 1032, "end_second": 1067, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=1032s", "title": "Napoleon Hill's Think & Grow Rich Condensed and
Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "subject if not the greatest is Success Through a Positive Mental Attitude by Napoleon Hill and W. Clement Stone I suggest you get a copy from your bookstore at your earliest convenience if your bookstore happens to be out of the book you may obtain a copy by writing to the address on the label of this record as you go about your daily work think constantly of ways in which it could be done better more efficiently think of the changes that are inevitable can they be made now and if you feel limited remember the words of the late", "start_timestamp": "00:17:47", "end_timestamp": "00:18:16", "start_second": 1067, "end_second": 1096, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=1067s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "Frank Lloyd Wright he said the human race built most nobly when limitations were greatest and therefore when most was required of imagination in order to build at all limitations seemed to have always been the best friends of architecture as you build your future from this point onward don't concern yourself with limitations but remember that they may be your best friends since they require imagination if we're to rise above them and as Beecher said the soul without imagination is what an observatory would be without a telescope", "start_timestamp": "00:18:16", "end_timestamp": "00:18:49", "start_second": 1096, "end_second": 1129, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=1096s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "now if you turn this record over we'll get to the sixth principle the
sixth principle is organized planning you have decided on your desire your goal now let's organize the plan for its accomplishment right on schedule let me quote again from Think and Grow Rich you have learned that everything man creates or acquires begins in the form of desire the desire is taken on the first lap of its journey from the abstract to the concrete in the workshop of the imagination where plans for its transition are created and organized", "start_timestamp": "00:18:49", "end_timestamp": "00:19:24", "start_second": 1129, "end_second": 1164, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=1129s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "earlier you were instructed to take six definite practical steps as your first move in translating the desire for whatever you want into its physical equivalent one of these steps is the formation of a definite practical plan or plans through which this transformation may be made 1 ally yourself with one or more persons a group of as many people as you may need for the creation and carrying out of your plan or plans for the accumulation of the money you've established as your goal making use of the mastermind", "start_timestamp": "00:19:24", "end_timestamp": "00:19:54", "start_second": 1164, "end_second": 1194, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=1164s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "principle this is important 2 before forming your mastermind alliance decide what advantages and benefits you may offer the individual members of your group in return for their cooperation no one will work indefinitely without some form of compensation although this may not always be in the form of
money 3 arrange to meet with the members of your mastermind group at least twice a week and more often if possible until you have jointly perfected the necessary plan or plans for the accomplishment of your goal 4 maintain perfect harmony", "start_timestamp": "00:19:54", "end_timestamp": "00:20:26", "start_second": 1194, "end_second": 1226, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=1194s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "between yourself and every member of your mastermind group keep in mind these facts first you're engaged in an undertaking of major importance to you to be sure of success you must have plans which are faultless second you must have the advantage of the experience education native ability and imagination of other minds this is in harmony with the methods followed by every person who has risen above the average work at this until you have a well-executed formal plan for reaching your objective in this way you're never", "start_timestamp": "00:20:26", "end_timestamp": "00:20:57", "start_second": 1226, "end_second": 1257, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=1226s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "confused or wondering what you should do next every morning you know exactly what you're going to do and why it is in this chapter of Think and Grow Rich that Napoleon Hill gives us his eleven qualities of leadership 1 unwavering courage 2 self-control 3 a keen sense of justice 4 definiteness of decision 5 definiteness of plans 6 the habit of doing more than paid for 7 a pleasing personality 8 sympathy and understanding 9 mastery of detail 10 willingness to assume full responsibility and 11 cooperation
the", "start_timestamp": "00:20:57", "end_timestamp": "00:21:39", "start_second": 1257, "end_second": 1299, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=1257s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "chapter on organized planning is one of the largest and most important in the book it goes without saying that a man without a plan to follow is like a ship without a course no place to go with disaster a probability the seventh principle decision the mastery of procrastination to quote accurate analysis of over twenty five thousand men and women who had experienced failure disclosed the fact that lack of decision was near the head of the list of the thirty major causes of failure this is no mere statement", "start_timestamp": "00:21:39", "end_timestamp": "00:22:13", "start_second": 1299, "end_second": 1333, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=1299s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "of a theory it is a fact procrastination the opposite of decision is a common enemy which every man must conquer analysis of several hundred people who had accumulated fortunes well beyond the million dollar mark disclosed the fact that every one of them had the habit of reaching decisions promptly and of changing these decisions slowly if and when they were changed people who fail to accumulate money without exception have the habit of reaching decisions if at all very slowly and of changing these decisions quickly and", "start_timestamp": "00:22:13", "end_timestamp": "00:22:42", "start_second": 1333, "end_second": 1362, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=1333s", "title": "Napoleon Hill's Think & Grow Rich Condensed and
Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "often a definite objective makes reaching prompt decisions that much easier Napoleon Hill gives many examples one of which is the case of Henry Ford one of Henry Ford's most outstanding qualities was his habit of reaching decisions quickly and definitely and changing them slowly this quality was so pronounced in the late mr. Ford that it earned for him the reputation of being obstinate it was this quality which prompted mr. Ford to continue to manufacture his famous Model T the world's ugliest but for the time most", "start_timestamp": "00:22:42", "end_timestamp": "00:23:12", "start_second": 1362, "end_second": 1392, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=1362s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "practical car when all of his advisors and many of the purchasers of the car were urging him to change it perhaps he delayed too long in making the change but the other side of the story is that his firmness of decision yielded a huge fortune before the change in model became necessary and the company is certainly none the worse for it today when you make up your mind stay with it the majority of people who fail to make the grade are generally easily influenced by the opinions of others easily swayed they permit the newspapers and the", "start_timestamp": "00:23:12", "end_timestamp": "00:23:40", "start_second": 1392, "end_second": 1420, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=1392s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "gossiping neighbors to do their thinking for them opinions are the cheapest
commodities on earth keep your own counsel when you begin to put into practice the principles we're describing here by reaching your own decisions and following them take no one into your confidence except the members of your mastermind group and be very careful in your selection of this group that you choose only those who will be in complete sympathy and harmony with your purpose close friends and relatives while not meaning to do so often", "start_timestamp": "00:23:40", "end_timestamp": "00:24:06", "start_second": 1420, "end_second": 1446, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=1420s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "handicap one through opinions and sometimes through ridicule thousands of men and women carry inferiority complexes with them all through life because some well-meaning but ignorant person destroyed their confidence through opinions or ridicule if a decision is worth anything at all it's worth sticking to until it's been completely worked out the eighth principle persistence Napoleon Hill defines persistence as the power of will willpower and desire when properly combined make an irresistible pair persistence to an individual is what", "start_timestamp": "00:24:06", "end_timestamp": "00:24:42", "start_second": 1446, "end_second": 1482, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=1446s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "carbon is to steel in uncounted thousands of cases persistence has stood as the difference between success and failure it is this quality more than any other that keeps the majority from great accomplishment they will try a thing but as soon as the going gets tough they fold experience
with thousands of people has proved that lack of persistence is a weakness common to the majority of men it is a weakness which may be overcome by effort if you are to accomplish the desire you've set for yourself you must form the habit of persistence things", "start_timestamp": "00:24:42", "end_timestamp": "00:25:13", "start_second": 1482, "end_second": 1513, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=1482s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "will get dark it will seem as though there's no longer any reason to continue everything in you will tell you to give up to quit trying and it's right here that the men are separated from the boys it's right here that if you'll go that extra mile and keep going that the skies will clear and you'll begin to see the first signs of the abundance that is to be yours because you had the courage to persist with persistence will come success persistence is a state of mind therefore it can be cultivated like all states of mind persistence is based upon", "start_timestamp": "00:25:13", "end_timestamp": "00:25:45", "start_second": 1513, "end_second": 1545, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=1513s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "definite causes among them these 1 definiteness of purpose knowing what you want 2 desire 3 self-reliance 4 definiteness of plans 5 accurate knowledge knowing that your plan is sound 6 cooperation sympathy understanding and harmonious cooperation with others tend to develop persistence 7 willpower 8 habit persistence is the direct result of habit the ninth principle power of the mastermind it is in this section that Napoleon Hill describes the importance of
forming a group of individuals sympathetic to your desire they may be", "start_timestamp": "00:25:45", "end_timestamp": "00:26:30", "start_second": 1545, "end_second": 1590, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=1545s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "individuals with similar plans a mastermind group can be made up of two or more individuals no two Minds ever come together without thereby creating a third a third invisible intangible force which may be likened to a third mind you may have noticed many times that by discussing something with another individual you suddenly get good ideas as a result of this Association ideas you would not have gotten without this Association the same thing happens to the other person a lot of good ideas have been born in individual minds as a result of having", "start_timestamp": "00:26:30", "end_timestamp": "00:27:03", "start_second": 1590, "end_second": 1623, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=1590s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "met in committee associating with your mastermind group is not meant as a means of letting others do your thinking for you far from it it is to stimulate your own thinking through the association with other minds no one knows everything the more sympathetic Minds you get together and by sympathetic I mean working for a common purpose the more related information is going to be available and great ideas are a combination of related information so pick the member or members of your mastermind group with care make sure", "start_timestamp": "00:27:03", "end_timestamp": "00:27:31", "start_second": 1623, "end_second": 1651, "url": 
"https://www.youtube.com/watch?v=n3g2DlmZsLg&t=1623s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "they are people you respect and who are hardworking and conscientious you'll have a lot of fun and you'll all reach your goals just that much sooner the tenth principle could be called enthusiasm that is the enthusiasm that comes from the channeling of all bodily drives into positive worthwhile outlets it is in this chapter that Napoleon Hill describes the importance of the woman the one and only woman in the achievement of a worthwhile goal it seemed quite significant to mr. Hill that practically every great leader was a man whose", "start_timestamp": "00:27:31", "end_timestamp": "00:28:05", "start_second": 1651, "end_second": 1685, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=1651s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "achievements were largely inspired by a woman when things get tough and you can count on it they will you may be deserted by what you thought were friends but if you've got a good woman you'll never be alone she'll be willing to start over again if necessary and she'll give you the new enthusiasm that comes through her faith in you having someone to love is having someone to share your success and accomplishment to give you the praise that all of us need from time to time a man can become successful without a wife and family but", "start_timestamp": "00:28:05", "end_timestamp": "00:28:34", "start_second": 1685, "end_second": 1714, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=1685s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail":
"https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "all the real joy is taken out of it take care of your wife and children as your greatest possessions the 11th principle has to do with the subconscious mind the subconscious mind consists of a field of consciousness in which every impulse of thought that reaches the objective mind through any of the five senses is classified and recorded and from which thoughts may be recalled or withdrawn as letters may be taken from a filing cabinet it receives and files sense impressions or thoughts regardless of their nature you may voluntarily plant", "start_timestamp": "00:28:34", "end_timestamp": "00:29:11", "start_second": 1714, "end_second": 1751, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=1714s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "in your subconscious mind any plan thought or purpose which you desire to translate into its physical or monetary equivalent the subconscious acts first on the dominating desires which have been mixed with emotional feeling such as faith your subconscious mind works night and day through a method of procedure unknown to man the subconscious mind draws upon the forces of infinite intelligence for the power with which it voluntarily transmutes one's desires into their physical equivalent making use always of the most", "start_timestamp": "00:29:11", "end_timestamp": "00:29:42", "start_second": 1751, "end_second": 1782, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=1751s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "practical media by which this end may be accomplished you cannot entirely control your subconscious mind but you can 
voluntarily hand over to it any plan desire or purpose which you wish transformed into concrete form no one knows very much about what we call the subconscious or unconscious mind we do know that it is incalculably powerful and can solve our problems if we go about using it the right way and the best way is to hold in your conscious mind as often as possible a clear picture of yourself already", "start_timestamp": "00:29:42", "end_timestamp": "00:30:15", "start_second": 1782, "end_second": 1815, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=1782s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "having accomplished your goal you know what you want define it clearly and then project it on the motion picture screen of your mind hold it see yourself doing the things and having the things you'll have when your objective will have been reached do this as often as possible as you go about your daily work and particularly at night just before you go to sleep and the first thing upon arising as you do this your subconscious will begin to lead you in the most logical ways toward your objective don't fight it follow", "start_timestamp": "00:30:15", "end_timestamp": "00:30:43", "start_second": 1815, "end_second": 1843, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=1815s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "your sudden hunches the ideas that come into your mind knowing that it's your subconscious trying to get through to your conscious mind if you keep at this you'll be amazed and delighted by the wonderful ideas that just seem to come from nowhere in the next principle we'll talk some more about this sixth sense that seems to control the lives of the great
men and women but it comes from a systematic triggering of the subconscious mind the lives of the great men and women which seem miraculous to the average person are nothing more than the", "start_timestamp": "00:30:43", "end_timestamp": "00:31:11", "start_second": 1843, "end_second": 1871, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=1843s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "fulfillment of their burning desires through the power of their subconscious minds time means nothing to your subconscious a man could work steadily at his job for forty years and not accomplish as much as is possible in three or four years through the proper working of this principle your subconscious mind cannot remain idle if you fail to plant desires in your subconscious mind it will feed upon the thoughts which reach it as a result of your neglect remember that you're living daily in the midst of all manner of", "start_timestamp": "00:31:11", "end_timestamp": "00:31:40", "start_second": 1871, "end_second": 1900, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=1871s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "thought impulses which are reaching your subconscious mind without your knowledge some of these impulses are negative some are positive you are now engaged in trying to help shut off the flow of negative impulses and to aid in voluntarily influencing your subconscious mind through positive impulses of desire when you achieve this you will possess the key which unlocks the door to your subconscious mind Bulwer wrote the man who succeeds above his fellows is the one who early in life clearly discerned his object and toward", "start_timestamp": "00:31:40",
"end_timestamp": "00:32:11", "start_second": 1900, "end_second": 1931, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=1900s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "that object habitually directs his powers even genius itself is but fine observation strengthened by fixity of purpose every man who observes vigilantly and resolves steadfastly grows unconsciously into genius the key word there is unconsciously know what you want decide once and for all that it will be yours remain steadfast on course propelled by faith and your subconscious or unconscious mind will do the rest the twelfth principle as outlined in Think and Grow Rich by Napoleon Hill has to do with the brain if you had access", "start_timestamp": "00:32:11", "end_timestamp": "00:32:52", "start_second": 1931, "end_second": 1972, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=1931s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "to all the wealth in the world and took a penny you would be doing exactly what you very probably have been doing in the use of your brain nothing in the world is more pitiful than the misunderstanding by the average person of the power of his brain and the minds to which it is connected the conscious and the subconscious you own in your brain the most marvelous miraculous inconceivably powerful force the world has ever known take for example the fact that the number of lines which connect the brain cells with one another equal a figure", "start_timestamp": "00:32:52", "end_timestamp": "00:33:23", "start_second": 1972, "end_second": 2003, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=1972s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by
Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "one followed by 15 million ciphers it has been determined that there are from ten to fourteen billion cells in the average human cerebral cortex it is inconceivable that such a network of intricate machinery should be in existence for the sole purpose of carrying on the physical functions incidental to growth and maintenance of the physical body this is the mechanism that has given us the supersonic airplane our deep rocket probes into outer space the sciences the arts all that we know and use today and will use", "start_timestamp": "00:33:23", "end_timestamp": "00:33:58", "start_second": 2003, "end_second": 2038, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=2003s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "tomorrow have hatched from this small grey mass each of us carries around can you doubt even for a moment that it can bring you and yours everything you want here on earth of course it can if you will recognize your power as an individual and stop acting like those who have never even thought about it give it the job you've decided to accomplish and watch it handle it the thirteenth and final principle is called the sixth sense the sixth sense can be described as the sense through which infinite intelligence may and will", "start_timestamp": "00:33:58", "end_timestamp": "00:34:35", "start_second": 2038, "end_second": 2075, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=2038s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "communicate voluntarily without any effort from or demands by the individual this principle is the
apex of the philosophy it can be assimilated understood and applied only by first mastering the other twelve principles the sixth sense is that portion of the subconscious mind which has been referred to as the creative imagination it has also been referred to as the receiving set through which ideas plans and thoughts flash into the mind the flashes are sometimes called hunches or inspirations the sixth sense defies description it", "start_timestamp": "00:34:35", "end_timestamp": "00:35:07", "start_second": 2075, "end_second": 2107, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=2075s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "cannot be described to a person who has not mastered the other principles of this philosophy because such a person has no knowledge and no experience with which the sixth sense may be compared the sixth sense is not something that one can take off and put on at will ability to use this great power comes slowly through application of the other principles we've outlined many individuals come into a workable knowledge of the sixth sense even before the age of 40 but more often the knowledge is not available until one is", "start_timestamp": "00:35:07", "end_timestamp": "00:35:33", "start_second": 2107, "end_second": 2133, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=2107s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "well past 50 and this for the reason that the spiritual forces with which the sixth sense is so closely related do not mature and become usable generally except through years of meditation self-examination and serious thought but begin to develop it now by applying the principles we've talked about here remember this man
can create nothing which he does not first conceive in the form of an impulse of thought man's thought impulses begin immediately to translate themselves into their physical equivalent whether those thoughts are", "start_timestamp": "00:35:33", "end_timestamp": "00:36:04", "start_second": 2133, "end_second": 2164, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=2133s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "voluntary or involuntary keep fear out of your mind by concentrating on the mental picture of your goal your greatest desire now I want to mention that Think and Grow Rich as a book carries more endorsements by great men who knew of its truth than any other book of its kind ever written I'll touch on only a few all of these great men actually endorsed Napoleon Hill's principles former President of the United States William Howard Taft FW Woolworth wrote that he had built his great chain of stores by applying many", "start_timestamp": "00:36:04", "end_timestamp": "00:36:38", "start_second": 2164, "end_second": 2198, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=2164s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "of these principles Robert Dollar who built the great Dollar Steamship Line wrote if I had had this philosophy 50 years ago I suppose I could have accomplished all that I've done in less than half the time I sincerely hope the world will discover and reward you Samuel Gompers wrote that the mastery of these principles is the equivalent of an insurance policy against failure President Woodrow Wilson John Wanamaker the merchant Prince wrote I know that your fundamentals are sound because I've been applying them in my", "start_timestamp":
"00:36:38", "end_timestamp": "00:37:06", "start_second": 2198, "end_second": 2226, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=2198s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "business for more than 30 years George Eastman the world's largest maker of cameras Thomas Edison Luther Burbank Theodore Roosevelt E.M. Statler John D. Rockefeller I think you know that the endorsements of these great men could not have been bought with all the money in the world what we have been talking about here can change your life can bring you anything and everything worthwhile you want in life for yourself and your family cut yourself away from the average from the mediocre and chart your course on the dream in your heart these thirteen", "start_timestamp": "00:37:06", "end_timestamp": "00:37:37", "start_second": 2226, "end_second": 2257, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=2226s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "principles will never let you down as long as you use them and now it is with great pleasure that I give you the author himself Napoleon Hill thank you Earl Nightingale the message which you who are listening have just heard has brought you within three short steps of the supreme secret of success the same secret which has brought happiness peace of mind and financial success to countless thousands of people who have read Think and Grow Rich the same secret which has made master salesmen out of ordinary order takers and the", "start_timestamp": "00:37:37", "end_timestamp": "00:38:11", "start_second": 2257, "end_second": 2291, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=2257s", "title": "Napoleon Hill's Think & Grow Rich Condensed
and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "same secret which has brought friendship love and marriage to men and women who have come under the spell of the thirteen principles which Earl Nightingale has just described you desire the better things of life or you wouldn't be listening to this record come with me then and I will help you chart your course so you may acquire whatever it is that you desire most in life by following these instructions one condition your own subconscious mind to work for you while you sleep as well as when you are awake you can do this by", "start_timestamp": "00:38:11", "end_timestamp": "00:38:41", "start_second": 2291, "end_second": 2321, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=2291s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "playing this record to yourself every night just before you retire then before you go to sleep after hearing the record write out a clear statement of what you wish to accomplish the following day and request your subconscious mind to work during the night and provide you with the plan you will need to achieve your purpose two form a personal mastermind group of two or more people who are closely associated with you they can be members of your family your business or your professional associates or people who work where you are employed and play", "start_timestamp": "00:38:41", "end_timestamp": "00:39:18", "start_second": 2321, "end_second": 2358, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=2321s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "this record in their presence at least once a week after
you have done this engage in a friendly roundtable discussion of the thirteen success principles and request each person in the group to participate and to look for inspirational ideas which may help him or her to become more successful three beginning now follow the habit of rendering more service and better service than that which is expected of you and do it in a pleasing positive mental attitude this will make friends for you it will increase the value of your services and", "start_timestamp": "00:39:18", "end_timestamp": "00:39:52", "start_second": 2358, "end_second": 2392, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=2358s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "n3g2DlmZsLg", "text": "it will attract opportunities by which you may get from life whatever it is that you most desire follow these instructions to the letter and you are sure to see the day when you can express a prayer of gratitude for having had the privilege of hearing and acting on the message which Earl Nightingale has given you through this record and now may I reach out across the space and the time which separate us and offer you a hand of friendship and a sincere prayer that you will be blessed with a richer and fuller life because of this", "start_timestamp": "00:39:52", "end_timestamp": "00:40:22", "start_second": 2392, "end_second": 2422, "url": "https://www.youtube.com/watch?v=n3g2DlmZsLg&t=2392s", "title": "Napoleon Hill's Think & Grow Rich Condensed and Narrated by Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/n3g2DlmZsLg/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "[Music] professor we might call this a biography of your curiosity so I want to see some of the origins of your curiosity perhaps in childhood the kinds of books you might have been reading the kind of child you were intellectually um where are we first of all in
your childhood let's see I guess I'm about five let's start about five okay and when where is the location I'm on a bus coming home from school in the countryside okay my English countryside in the English country yeah my mother was a schoolteacher ah she said the", "start_timestamp": "00:00:00", "end_timestamp": "00:01:00", "start_second": 0, "end_second": 60, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=0s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "headmistress of a little school in the countryside about ten miles outside of Bristol in the UK okay and I went to the set of the school of where she was headmistress Oh which is the nightmare it had some disadvantages occasionally the kids would surround me in the pay playground and explained to me that I had to persuade my mother to do so it's always the kids that make it in a bit yes but it wasn't too bad but you survived that I survived that so your five on a bus and five on a bus and it the bus has velvet seats where this", "start_timestamp": "00:01:00", "end_timestamp": "00:01:40", "start_second": 60, "end_second": 100, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=60s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. 
Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "seats covered with the material that's got a heavy pile and I'm sitting on the bus and I put a penny and old British pennies were big I put his name on the velvet seat and it moved and it went uphill and this was clearly impossible and I was absolutely fascinated by the fact that you put this penny down it wouldn't always do it but sometimes it would go up hill and it's a bit like those dreams you have I guess I have not everybody has but I think a lot of people have them when you suddenly discover you can fly and in the", "start_timestamp": "00:01:40", "end_timestamp": "00:02:25", "start_second": 100, "end_second": 145, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=100s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "dream yes you think this can't be real yes yes but you tried again you can fly and it's amazing and you know it can't quite be real but it's just amazing and it's wonderful that you can fly well this penny moving uphill was the same kind of thing it was sort of weird and wonderful I'm like the first time you discover magnetism and I couldn't believe what was happening and at the time I just was amazed by it and I was puzzled by it for a long long time and possibly a little experimental you tried it a few times I tried a few times and", "start_timestamp": "00:02:25", "end_timestamp": "00:03:05", "start_second": 145, "end_second": 185, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=145s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. 
Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "it didn't work reliably but it did work and I remembered a lot about the circumstances on that bus I remember which side of the bus we were sitting on yes yes and where in the bus we were sitting for years I just didn't understand how this penny had gone uphill and it was a sort of lurking memory that was this weird thing that happened that couldn't possibly have happened wonderful now did you rush to the nearest adult and say I've noticed this can you help me understand why or did you just keep it to yourself I guess I was yeah I'm", "start_timestamp": "00:03:05", "end_timestamp": "00:03:35", "start_second": 185, "end_second": 215, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=185s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "not that kind of a person I want to figure it out myself okay okay exactly and so I just couldn't figure it out it didn't make any sense but it stayed in my mind and and I think a lot of my research career's like that there's these little niggly things that don't make any sense and they just stay in my mind wonderful and with this when I was a teenager and I guess when I started physics at school although you wouldn't need to study physics for this I finally understood what must have been happening because the other thing I remember about", "start_timestamp": "00:03:35", "end_timestamp": "00:04:11", "start_second": 215, "end_second": 251, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=215s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. 
Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "the bus was it vibrated a lot it had an engine a big engine at the front that was somehow loose that made the whole bus vibrate yes and so clearly what was happening was the flock on the material was at an angle and it was an angle facing upwards yes so it was slick yes the seat was kind of like this and the flock was like this and so the penny couldn't go down so when it vibrated well when it vibrated this way it would go up and it couldn't vibrate down so that was what was making it move up did you actually pursue the", "start_timestamp": "00:04:11", "end_timestamp": "00:04:49", "start_second": 251, "end_second": 289, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=251s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "problem eventually or did it just come to you when I understood enough about things like vowels I guess I'm going to boringly jump to a more formal part of your education although that was that was the gem of understanding your curiosity but are you getting a good education are you in a well there's the the school your mother is headmistress of but then the next level are you getting teaching in science are you finding teachers that interest you yes and no yes so yes I was at my mother's school in the countryside which", "start_timestamp": "00:04:49", "end_timestamp": "00:05:35", "start_second": 289, "end_second": 335, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=289s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. 
Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "was mainly children of farmers yes and they had a very strong Somerset accent yes um there's actually quite a lot of German in the Somerset they use words that sound German to me and I think we must be German in origin and so I had this very strong agricultural accent and when my mother got pregnant with my younger sister she had to leave the school so I'd have been about six then yes just turning six and I went to a primary school near where I lived and then my parents wanted me to go to the", "start_timestamp": "00:05:35", "end_timestamp": "00:06:35", "start_second": 335, "end_second": 395, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=335s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "grammar school which had a it had a sort of junior school I can't remember at what age but I sat an exam for the grammar school and I failed it so so then my parents decided to send me to a private school a school I never liked it was called Clifton College yeah I'm yes yes I know and they wouldn't let me in I mean they were willing to take me in the junior school but the prep the prep school it was called preparation for the big school but they said I couldn't come until something had been done about my", "start_timestamp": "00:06:35", "end_timestamp": "00:07:30", "start_second": 395, "end_second": 450, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=395s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. 
Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "accent your accent my accent I wasn't allowed into the school because of my accent and they said I'd have to stay in the primary school for another year until my accent was fixed an only in England situation maybe I don't know no fair enough all right so you went about improving your accent or your your parents just I I mean at home the accent was fine no but you was the cool accent yeah yeah and yeah and in the local primary school my accent became more acceptable right and then I went to this prep school and", "start_timestamp": "00:07:30", "end_timestamp": "00:08:15", "start_second": 450, "end_second": 495, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=450s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "it was a British prep school I didn't like it I was completely out of place and isolated there uh because my father was a Stalinist my mother was a staunch Labour Party supporter yes the secretary of the local Labour Party ah ah so you were a classy anyway I'd been taught that religion was just nonsense yes something I still believe and I mean it's just patently nonsense and this was a sort of I call it muscular Christianity that is it wasn't sort of particularly religious but it was a Christian school and they yes they", "start_timestamp": "00:08:15", "end_timestamp": "00:09:08", "start_second": 495, "end_second": 548, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=495s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. 
Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "thought the things that were good for boys were things like playing rugger and going to church and so we had to go to a service every morning and well when I was much older as a teenager I also had to come in on Sundays because I lived at home most of the kids in the school were boarders who lived in the school but maybe sort of 20% of the school were day boys who lived in Bristol yes and we had to come in on Sundays yeah there was a service on Sunday there was no excusing your", "start_timestamp": "00:09:08", "end_timestamp": "00:09:58", "start_second": 548, "end_second": 598, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=548s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "absence you couldn't just be at home on Sunday you had to be at the Sunday service and they would invite headmasters of other schools to come and give sermons and I remember a headmaster from another school giving a sermon about how awful Russia was and how they had forced ideological education in Marxism and people had to sort of they couldn't refuse they had to sit there and they had to listen to this Marxist nonsense forcibly and I remember sitting there thinking you know I'm sitting here", "start_timestamp": "00:09:58", "end_timestamp": "00:10:36", "start_second": 598, "end_second": 636, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=598s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. 
Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "I'm forcibly listening to this religious nonsense and I almost got up and walked out and I somehow wish I had I'd have been thrown out of the school but I wish I'd done that um few would have had the courage to do it I didn't I didn't is no one noticing your intelligence in this muscular school now um there were some people who some teachers who were very nice to me there was a math teacher called STP Wells and I have no idea what his first name was at this school he was Mr. Wells um STP Wells I remember he was", "start_timestamp": "00:10:36", "end_timestamp": "00:11:20", "start_second": 636, "end_second": 680, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=636s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "tall and thin and he loved math unlike many math teachers in junior schools he actually thought math was fun um he had curious ideas about discipline so I remember he did things you would never get away with nowadays so I remember a class where there were two boys talking at the back sitting across the row from each other and they were they were talking softly together while he was trying to explain something so remember he walked down the row grabbed their two heads by the hair and banged them together that was not accepted", "start_timestamp": "00:11:20", "end_timestamp": "00:12:04", "start_second": 680, "end_second": 724, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=680s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. 
Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "practice now even even in such a school it seems quite muscular to me I remember my my introduction to the school in probably in my first few weeks there there was a teacher who was the art teacher who was fairly sadistic and you weren't allowed to run in the changing room so you changed for games you hung up your clothes you put on your sports clothes yes you came back you changed again and if you ran in the changing room well it was explained that was not allowed so well we were changing back into our clothes another boy stole my tie and there was a", "start_timestamp": "00:12:04", "end_timestamp": "00:12:53", "start_second": 724, "end_second": 773, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=724s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "kind of partition down the middle of the changing room and he and a boy on the other side of the partition would throw the tie from one to the other and I would kind of run around the partition and try and get it and as I was trying to run around the partition to get my tie this art teacher it's curious I remember that he was Catholic I think this is how prejudice really gets going and I've never forgiven Catholics for him he came into the changing room I was running so he grabbed me he marched me down the corridor to the Headmaster's", "start_timestamp": "00:12:53", "end_timestamp": "00:13:36", "start_second": 773, "end_second": 816, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=773s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. 
Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "office and he beat me with a bamboo cane quite severely my trousers were still on but he really he really enjoyed it and I just remember a huge sense of grievance but he wouldn't listen to explanations yes I just remember being enormously aggrieved at him and at Catholics but I'm going to guess and you'll tell me if I'm wrong maybe even against authority oh yes oh yes I never knew which was the larger because of my father but ah so you already had reason to challenge authority I already had good reason it's", "start_timestamp": "00:13:36", "end_timestamp": "00:14:19", "start_second": 816, "end_second": 859, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=816s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "gonna show up later in your life that's the only reason i underscore that how are we gonna get this oppressed young man occasionally oppressed by this school to a status at the end of his time there where he is actually going to be admitted to Cambridge well how am I going to get him either intellectually or morally beyond this this stage of life well before we get there all right please I have one more story I think you would like from that school which is we had to have religious education of course it was kind of Sunday School kind", "start_timestamp": "00:14:19", "end_timestamp": "00:14:58", "start_second": 859, "end_second": 898, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=859s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. 
Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "of religious education about how Jesus was good and yes how God would look after us and I remember several things about it because of course the other kids all believed in God some more than others I had a very intelligent friend there who who believed in God and I remember during religious class the teacher saying all good things come from God and I would have been about eight or nine then and I remember thinking that there was a problem I don't remember how coherently I thought it but there's a problem that she was only saying they", "start_timestamp": "00:14:58", "end_timestamp": "00:15:42", "start_second": 898, "end_second": 942, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=898s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "came from God because they were good and she couldn't use the fact that God you couldn't sort of use this as evidence that God was good because you just assumed they came from God because they were good so it was that's what you would call a circular argument now I didn't know about circular arguments but I knew there was something deeply suspicious about what she just said and I guess I was in a mood to argue I argued a lot I was very small by the way I was the smallest boy in the school ah the smallest and lightest not an", "start_timestamp": "00:15:42", "end_timestamp": "00:16:13", "start_second": 942, "end_second": 973, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=942s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. 
Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "advantage but I made up for it with arguing um so I put up my hand and I tried to explain to her what was wrong with what she just said yes and I think she got frustrated and she said ok Hinton where where do where do you think all good things come from a classic kind of parent move yes um build up build an assumption into the question and leave the kid with the problem of dealing with it um because of my father I um I thought for a moment and said Russia no this wasn't what was expected in an English private school in", "start_timestamp": "00:16:13", "end_timestamp": "00:16:55", "start_second": 973, "end_second": 1015, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=973s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "the 1950s this would have been kind of the mid-1950s yes and when the Cold War was at its height and that kind of gave you an indication of how much I fitted in with the culture of the school I've already had the indication though can I get you to Cambridge all in good time you can try I'll try my my parents tried haha so let's see um when I used to question my parents later on about why on earth they insisted on my staying there even though I was unhappy yes um they said well the science teaching was good", "start_timestamp": "00:16:55", "end_timestamp": "00:17:39", "start_second": 1015, "end_second": 1059, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=1015s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. 
Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "dear and the science teaching was good later on I enjoyed the science lessons and I didn't enjoy the math much before I got to the senior school I enjoyed math but then there came a point when I just got confused about math when they introduced functions I didn't understand what functions were I had been good at arithmetic and good at algebra yeah but neither were these things I didn't understand things like sine X I didn't know what sine was and I was always a very concrete thinker I think in terms of", "start_timestamp": "00:17:39", "end_timestamp": "00:18:24", "start_second": 1059, "end_second": 1104, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=1059s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "mechanical analogies yes and I was actually I felt unhappy with functions and not at home with them until I started programming when I was a graduate student yeah as soon as I started programming a function was just a box you gave it one thing and it gave you back something else but until then I wasn't friends with functions and I got worse and worse at math um can I generalize and say that if you don't understand its purpose you challenge it or dismiss it maybe I'm reaching to to", "start_timestamp": "00:18:24", "end_timestamp": "00:19:05", "start_second": 1104, "end_second": 1145, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=1104s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. 
Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "deepen your childhood for the later right I certainly want to i i'm i i'm not happy with things until i understand them yes and what I mean by understanding them I think it's the same as what Feynman meant you could build one you know you understand it well enough that you could build one that's that's what I'm I'm wondering it sounds like that was it later on that's exactly what I felt about psychology in general that you know I want to know what feelings were what sensation was and yes and I felt you", "start_timestamp": "00:19:05", "end_timestamp": "00:19:42", "start_second": 1145, "end_second": 1182, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=1145s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "never really understand that until you could build one and as many people say you couldn't build one but if we make it to your current approach you will not be sentimental about such terms as feelings and emotions but we'll get we'll get there I'm not sure what you mean by not being sentimental I I mean the the hope that they are undefinable not reproducible human as opposed to something that you can analyze you you really I wouldn't call that sentimental okay I'd call it just dumb haha fair enough I still haven't gotten you into", "start_timestamp": "00:19:42", "end_timestamp": "00:20:22", "start_second": 1182, "end_second": 1222, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=1182s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. 
Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "Cambridge No so you've got a good science teacher several several good science teachers yes and so you're good at science I was okay at math and because my father had been to a particular College in Cambridge yes he wanted me to go there yes so I sat the entrance exams and I did quite well in the exam so I got in did you surprise your school were they assuming you were clever at this point not particularly no no they knew I was fairly clever by then you're as clever but um unruly yes um also might be truth today so going on going", "start_timestamp": "00:20:22", "end_timestamp": "00:21:14", "start_second": 1222, "end_second": 1274, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=1222s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "on oh you you sit the exams you get in uh-huh I go to Cambridge yes um it's a big shock because at school there were a few kids who were clever and interesting ideas I had a friend called Inman Harvey who was very clever he got the top math scholarship at Trinity which is the best place for math in Britain yes um but mostly the kids they weren't their primary interest was not ideas weren't they mostly influenced by the kinds of school you went to in fact I mean this is sorry Edie at my at my high school what you call a high", "start_timestamp": "00:21:14", "end_timestamp": "00:21:58", "start_second": 1274, "end_second": 1318, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=1274s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. 
Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "school in North America most of the kids were not and I went to Cambridge everybody was clever I see okay um actually not everybody there were some medics but but apart from that yes everybody was clever and that was a bit of a shock it was very nice yes but also quite threatening yes of course so after I'd been there a month I left I found it too stressful yes and I went to London and did various odd jobs [Music] I read a lot of depressing literature like Crime and Punishment and The Brothers Karamazov remember sitting on", "start_timestamp": "00:21:58", "end_timestamp": "00:22:45", "start_second": 1318, "end_second": 1365, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=1318s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "the London tube which is a fairly depressing place anyway reading these depressing Russian novels I did various jobs I ended up getting very interested in architecture and I reapplied to Cambridge to do architecture and they let me back into the same College but before I got there I'd worked in an architect's office over the summer and I discovered what the practice of architecture was like yes I sort of imagined you'd sit there sketching out what wonderful buildings were going to be like or maybe new ways", "start_timestamp": "00:22:45", "end_timestamp": "00:23:31", "start_second": 1365, "end_second": 1411, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=1365s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. 
Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "of constructing buildings actually life in an architect's office consists of you know are we gonna have cheap flooring or cheap door handles because one of them's got to be cheap otherwise we won't meet the budget yes and so after I'd done architecture for a day I went to see my tutor and said I want to switch back to doing natural sciences and my tutor said ok which is a bit if I'm right about this one of the advantages of the Oxbridge approach I mean they they can be flexible they can be very flexible um so my first year I", "start_timestamp": "00:23:31", "end_timestamp": "00:24:07", "start_second": 1411, "end_second": 1447, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=1411s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "did Natural Sciences and I think I was the only student in that year doing both physics and physiology Oh so I've always been interested in biology uh and I hadn't been allowed to do biology at school because my father wouldn't allow it he said they would teach me genetics and genetics was nonsense huh what he really meant by that was he was convinced that all sort of macroscopic observable traits would be caused by complex interactions of many genes yes and the sort of standard theory was you know blue eyes is caused by this gene", "start_timestamp": "00:24:07", "end_timestamp": "00:24:48", "start_second": 1447, "end_second": 1488, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=1447s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. 
Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "and intelligence is caused by that gene yes and of course that was very much against communist teaching yes and so for ideological reasons I wasn't allowed to do biology at school even though my father was a biologist he was a very good biologist in his way um he got to be a member of the Royal Society yes he actually got to be a member of the Royal Society at age 49 and I got to be a member of the Royal Society at age 50 which was always irritating to me um so I was very competitive with my father anyway well at Cambridge I did physics and", "start_timestamp": "00:24:48", "end_timestamp": "00:25:27", "start_second": 1488, "end_second": 1527, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=1488s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "physiology chemistry and physiology was really interesting I'd never done it it was stuff I didn't know anything about yes and I remember the highlight of the course was going to be the final term where they were going to teach us about the central nervous system yes and I was I was very very interested in how the brain worked because my friend Inman Harvey at school was interested in how the brain worked and we had discussions about it yes and I remember how disappointed I was when they taught us about the central nervous", "start_timestamp": "00:25:27", "end_timestamp": "00:26:07", "start_second": 1527, "end_second": 1567, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=1527s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. 
Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "system because the way the central nervous system works is there's these neurons and they have axons yes and electrical impulses travel down the axons and cause some chemicals to be released that get other neurons excited and they taught us a lot about how the electrical impulses travel down the neurons because that was the classic work of hodgkin-huxley and that was how the central nervous system worked and I remember being immensely frustrated that they hadn't actually said anything about how it worked I mean they said how the", "start_timestamp": "00:26:07", "end_timestamp": "00:26:47", "start_second": 1567, "end_second": 1607, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=1567s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "impulses were propagated but how did it actually work it was just descriptive basically I mean it was interesting that neurons communicate with one another but that wasn't what I meant by how it works I wanted to know how how the brain works and how that gives rise to emotions and sensations and so on oh why was that if it was such a radical thing to want to know I mean I don't think it was a radical thing to want to know okay I think lots of people want to know that it's just that in Physiology yes they didn't know", "start_timestamp": "00:26:47", "end_timestamp": "00:27:21", "start_second": 1607, "end_second": 1641, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=1607s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. 
Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "okay so after my first year I switched to philosophy because I thought philosophy would teach me more about those the things I was really interested in and that was a big mistake um I remember learning about Wittgenstein and getting very depressed because I couldn't understand it later on I think it was very useful to have learned that stuff but really I think my main the main thing I got from a year of doing philosophy at Cambridge was I developed antibodies against philosophy although there was one philosopher there who I", "start_timestamp": "00:27:21", "end_timestamp": "00:28:07", "start_second": 1641, "end_second": 1687, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=1641s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "got along with extremely well well actually there were two of them there was my tutor someone called Jim Hopkins who was a very nice guy um also interested in how the mind actually worked although he didn't know either and then there was a philosopher called Bernard Williams yes who was a very good philosopher later went to Berkeley I never met him again which was always a regret he used to hold Monday evening a sort of open house where you would go to his room if you were interested and just discuss philosophy for an hour or two and a number of people", "start_timestamp": "00:28:07", "end_timestamp": "00:28:49", "start_second": 1687, "end_second": 1729, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=1687s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. 
Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "would turn up and I got along with him very well he's a great thing yes I know about it he was very eclectic very very fluid in his thinking yes he he wasn't he wasn't dogmatic at all and he was just very interesting to is very interesting to learn from him and he would always take the students ideas seriously and always have something interesting to say about them he was never dismissive so that was the one for philosopher who I really appreciated then I got fed up with philosophy because it wasn't telling me what I", "start_timestamp": "00:28:49", "end_timestamp": "00:29:35", "start_second": 1729, "end_second": 1775, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=1729s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "wanted to know and so I switched again to psychology is it fair dude just to finish the philosophy apart is it that they're not asking interesting questions or that they are giving an overly to us as you're centering questions another philosopher I really like is Daniel Dennett who definitely asked interesting questions okay they just don't have the apparatus to answer them okay my sort of methodological conclusion relates to why it is that philosophers sometimes at a conference will read their paper whereas", "start_timestamp": "00:29:35", "end_timestamp": "00:30:14", "start_second": 1775, "end_second": 1814, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=1775s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. 
Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "a scientist very few scientists would do that a scientist will get up and give a talk yes and make some claims and tell you the evidence maybe philosophers will read their papers and they'll sort of the way they read them is significant so with philosophy it's it's how you say something that's important and the words you say it was that's the content and it's because there is no other content so in in philosophy there's no difference between something sounding really good and something being good yes there's no empirical test yeah in", "start_timestamp": "00:30:14", "end_timestamp": "00:30:56", "start_second": 1814, "end_second": 1856, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=1814s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "science something can sign completely loopy like acceleration is gravity yes and yet it can turn out to be true yes and the things that science is established to be true a far crazier than anything any religious connect traumatic could dream up that outside the realm of the crazy things religious fanatics treemap huh they're always rather boring things like you know there's another world a bit like this one but above the clouds where all the good people go then they're nothing like you know there might be black holes right right so science", "start_timestamp": "00:30:56", "end_timestamp": "00:31:35", "start_second": 1856, "end_second": 1895, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=1856s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. 
Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "establishes things that are far crazier than normal people could have a thing called yes yes so this idea that scientists a kind of narrow is just crazy you know the way you can do it is because it's got something out outside of theorizing that can tell you um where these theories are right or not right and I've heard people I don't understand string theory but I've heard people claiming that um that borderline is being pushed by string theory where it's not quite clear whether it's all just mathematics or whether it really is", "start_timestamp": "00:31:35", "end_timestamp": "00:32:08", "start_second": 1895, "end_second": 1928, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=1895s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "being verified by data the whole science has this independent testing philosophy doesn't it's in there sometimes the impossible I may be thinking only at mathematics to speak and search for the elegant solution which always surprises me is terminology because elegance is not necessarily truthful but maybe that is the the core of mathematical elegance it's a very interesting argument about why why should elegant things be true yes suddenly if you look at physics if you look at particles and things you find kind of fifteen particles which all", "start_timestamp": "00:32:08", "end_timestamp": "00:32:47", "start_second": 1928, "end_second": 1967, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=1928s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. 
Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "fit into a nice pattern except there's a missing one yes and the methodology of saying well we should search for that missing one because of all these symmetries it must be there that works and the question is why does and why does it work so it's not clear that works in biology so when I was a postdoc in San Diego I got to know Francis Crick who who was just become very interested in the brain he was a very impressive thinker but he was of the opinion that the idea that the elegant thing is going to be true did not necessarily apply to", "start_timestamp": "00:32:47", "end_timestamp": "00:33:27", "start_second": 1967, "end_second": 2007, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=1967s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "biology if philosophy is unsatisfying to the young undergraduate I would then assume given his future career he would leap to the sciences but in fact he lept at the Social Sciences I don't know allow help me help Meg gain which psychology was a science not a session science and yeah I mean psychology has got these sort of two aspects to it but it was definitely the scientific aspect of Cambridge so we learned about rats and we learned about section theory so that's you look to the rats well I didn't want to leap to the", "start_timestamp": "00:33:27", "end_timestamp": "00:34:03", "start_second": 2007, "end_second": 2043, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=2007s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. 
Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "rats that's what psychology was there right right in fact I actually was very annoyed that it wasn't teaching us anything about people and so I went to see my tutor not my psychology tutor but my sort of general tutor at King's College who was in charge of my welfare yes and I explained that the psychology course was not telling me anything about real psychology for example there was nothing about psychoanalysis in the psychology course yes and so what I would like to do is I'd like to go to London once a week and get a tutorial", "start_timestamp": "00:34:03", "end_timestamp": "00:34:47", "start_second": 2043, "end_second": 2087, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=2043s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "from an existential psychoanalyst so I could learn about that because I was really really mad and I would like the college to pay for it and so my tutor said well that sounds reasonable so yes the college would pay for that ah that's a real liberal education yes oh yeah King's College Cambridge which was loaded not only loaded but it had the capacity to imagine doing that yeah so the audacious undergraduate asked for it the wealthy college agreed yeah and how important was that actually that so the", "start_timestamp": "00:34:47", "end_timestamp": "00:35:26", "start_second": 2087, "end_second": 2126, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=2087s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. 
Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "only thing I really remember about those tutorials in existential psychoanalysis was the psychoanalyst had a really beautiful Japanese girlfriend which made psychoanalysis seem a very good idea but he taught me about Husserl and Heidegger and I never really understood any of that didn't understand it couldn't be bothered no I tried I just didn't really understand yes um I make the assumption that there was something there to be understood but I which now you don't make anymore which I'm not sure I make", "start_timestamp": "00:35:26", "end_timestamp": "00:36:14", "start_second": 2126, "end_second": 2174, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=2126s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "anymore yeah um because we don't have very much time I'm gonna race you through your psychology course but I'll stop at any point that seems critical in your intellectual development so I just felt psychology was totally lacking in any idea of what a proper theory would look like they didn't sort of have the physics view that a theory should really explain something they did have the experimental method so it's better than philosophy in that sense but what they were using experiments for was to try", "start_timestamp": "00:36:14", "end_timestamp": "00:36:58", "start_second": 2174, "end_second": 2218, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=2174s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. 
Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "and decide between theories that were sort of hopelessly inadequate and you could just dismiss out of hand without doing experiments because it was just not up to the job yes and so I had to do an experiment and I remember the experiment where you take children between 3 & 5 yes and you try and decide whether they developed during that period in the sense that during that period they started paying more attention to shape and less attention to things like color and texture so is this model of little kids which is the", "start_timestamp": "00:36:58", "end_timestamp": "00:37:34", "start_second": 2218, "end_second": 2254, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=2218s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "stimuli like shape and color and texture and they respond to stimuli so this was a behaviorist kind of psychology yes I'm just beginning to get a little bit cantankerous and the experiment is designed to decide if the strength of the response to shape increased and the strength of the response to color decreased and so you present them with three objects during training you give them say two triangles and a square all of which are yellow and they learn to pick out the odd one out which is the square yes and then you give them I", "start_timestamp": "00:37:34", "end_timestamp": "00:38:10", "start_second": 2254, "end_second": 2290, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=2254s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. 
Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "better get this right you give them three triangles one of which is red and the other two are yellow and they learn it's the red triangle hands down then once they've been trained like that with various different stimulus dimensions you then give them a yellow triangle a red triangle and a yellow square so now they've got a conflict are they going to pick out the odd one based on color or are they going to pick out the odd one based on shape right and you look to see what they do and the hope is that as they get older the older kids will", "start_timestamp": "00:38:10", "end_timestamp": "00:38:51", "start_second": 2290, "end_second": 2331, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=2290s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "use shape to pick the odd one out and the younger kids will use color I'm not picking up a sense that you were impressed by this experiment well here's what actually happened the experiment was going along and then I got a bright five-year-old yes and the bright five-year-old the first time I showed him one of the conflicted ones yes where there wasn't a clear odd one out yes he pointed so it would be a yellow triangle a yellow square and a red circle he pointed at the red circle and said you painted that one the wrong", "start_timestamp": "00:38:51", "end_timestamp": "00:39:31", "start_second": 2331, "end_second": 2371, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=2331s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. 
Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "color he thought I'd made a mistake right ah because it was the odd one out game and clearly I painted that one the wrong color and I thought you know this organism that's meant to be modelled as responding to color or responding to shape yes this organism has just done a piece of reasoning and said you painted that one the wrong color that's just way beyond the scope of any of these theories it's just hugely complicated behavior compared with these theories the organism sort of figured out I had", "start_timestamp": "00:39:31", "end_timestamp": "00:40:06", "start_second": 2371, "end_second": 2406, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=2371s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "intentions and that I made a mistake here it's just utterly out of the realm I mean if you're trying to sort of model going to the moon by climbing up a stepladder you know yes the theories they were dealing with stood no more chance of dealing with this kind of behavior than a stepladder would get you to the moon and so that had a big effect on me so you're disappointed I'm totally disenchanted with psychology because although it's got an experimental method it's using it in an incredibly naive way", "start_timestamp": "00:40:06", "end_timestamp": "00:40:47", "start_second": 2406, "end_second": 2447, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=2406s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. 
Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "to test really dumb theories yeah now I'm afraid I have to get you I'm sorry I have to get you out of Cambridge right and into your next step and just to preface that of course for those who might be listening not really quite understanding the state of inquiry in artificial intelligence computers and so forth how do you then map your next step so then I became a carpenter for a year I quit academia yes and then I got back into academia by working on a project studying child language development influenced a lot by Chomsky", "start_timestamp": "00:40:47", "end_timestamp": "00:41:32", "start_second": 2447, "end_second": 2492, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=2447s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "yes um who claimed on spurious mathematical grounds that almost all language was innate yes that syntactic aspects of language were innate which is complete rubbish and the project was looking at a large sample of young children in Bristol to look at their language development empirically by just measuring what happened it happened at the same time as Watergate this project uh-huh and we had little jackets with radio microphones so we put this little jacket on a kid he'd wear it all day and it would be broadcasting everything that the kid", "start_timestamp": "00:41:32", "end_timestamp": "00:42:15", "start_second": 2492, "end_second": 2535, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=2492s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. 
Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "said and everything that was said to the kid inside the house it wasn't regarded as a bad thing back then inside the house we had a recorder that every 20 minutes would take a one-minute sample it would do that by having a cardboard disc that rotated very slowly with a notch in it and a little lever would fall and connect the tape recorder that's what technology was like then and we would get these little samples of children's language and then we would try and analyze them that was the", "start_timestamp": "00:42:15", "end_timestamp": "00:42:55", "start_second": 2535, "end_second": 2575, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=2535s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "problem but the most interesting utterance we got was when we were looking at tags so a tag is like isn't it aren't they or won't we right and in English tags have a lot of syntax in them you don't just say n'est-ce pas or something like that right right you have and so we were looking at tags to see whether children don't express complicated grammatical structures very early on because it's too many phonemes or because they just don't know these structures and a tag is a very good case very few phonemes but has a lot of", "start_timestamp": "00:42:55", "end_timestamp": "00:43:35", "start_second": 2575, "end_second": 2615, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=2575s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. 
Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "grammar in it yes and we got one kid one parent who said to her child Santa don't give you no toys if you don't talk proper isn't he Wow and I just thought that was a nice example of the kind of data that children got from which they learn to speak good English yes yes but you're not gonna linger you're going to move on so then I started PhD in artificial intelligence because I thought that was such a program was possible this was 1972 yes and they just set up a couple years earlier a big center of art of intelligence in Edinburgh Edinburgh so", "start_timestamp": "00:43:35", "end_timestamp": "00:44:23", "start_second": 2615, "end_second": 2663, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=2615s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "the science Research Council had decided to fund one big Centre in artificial intelligence ah and looking back how was it structured this doesn't impress you now in retrospect as to what they they thought they were doing I think it was a sensible thing to do okay I think was good policy to make one really good Center mmm I think nearly all the people there believed in symbolic err good ol fact what is now called good old-fashioned symbolic AI right I think they were making a huge mistake and the government um a few years later I can't remember", "start_timestamp": "00:44:23", "end_timestamp": "00:45:12", "start_second": 2663, "end_second": 2712, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=2663s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. 
Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "the exact time probably in 1974 or something like that got a very eminent mathematician called Sir James Lighthill to do a report on the AI Center yes yes and he produced a very damning report basically saying these guys didn't know what they were talking about there was an interchange with McCarthy who was one of the fathers of AI yes I remember seeing the interchange years later where McCarthy was saying look anything you can compute you can compute with symbolic operations so what we're doing must be right and Lighthill", "start_timestamp": "00:45:12", "end_timestamp": "00:46:01", "start_second": 2712, "end_second": 2761, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=2712s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "was saying basically yes but you've no idea how the brain does that which is the only device we know that can think and you've no idea whether this way of doing it is efficient now in retrospect at the time everybody in Britain was outraged that AI was now going to go through hard times because Sir James Lighthill didn't believe in it and I think Lighthill was entirely correct because both sides were assuming something and both sides were assuming that computation wouldn't get millions of times faster than it was now yes might", "start_timestamp": "00:46:01", "end_timestamp": "00:46:38", "start_second": 2761, "end_second": 2798, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=2761s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. 
Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "get a thousand times faster but not millions of times faster yes and under that assumption Lighthill was completely correct that this symbolic way of doing things although theoretically you could do anything that way with the speed of computation we had there was no hope of doing things like perception now when was the report and where are you at that point an undergraduate still you are a graduate student at the time this pronouncement is made yes and as a result of the pronouncement so it must mean in 74 my advisor who's one of", "start_timestamp": "00:46:38", "end_timestamp": "00:47:07", "start_second": 2798, "end_second": 2827, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=2798s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "the three professors who set up the AI yes the school yes I remember he leaves Edinburgh well he leaves for several reasons personal conflict with one of the other main protagonists was perhaps one of the reasons he left and went to Sussex and I went with him you went with him did you find your fellow graduate students bewildered by this analysis and you you know I don't remember at the time I think I thought Lighthill was too severe on AI too ah there had been a lot of peer pressure to believe that yes of course um yeah I", "start_timestamp": "00:47:07", "end_timestamp": "00:47:57", "start_second": 2827, "end_second": 2877, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=2827s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. 
Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "don't remember too much about that of course but you go you go to Sussex then yes this is before you've completed your degree yes oh yes so I used to have to go back to Edinburgh for one day a term to sign a register to say that I lived in Edinburgh I would get a day return from Brighton to Edinburgh um but someone in Edinburgh is going to have to approve a dissertation topic of course yes who is that who's lingering there there was a more junior guy in AI called Jim Howe who was my official", "start_timestamp": "00:47:57", "end_timestamp": "00:48:36", "start_second": 2877, "end_second": 2916, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=2877s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "advisor I see but everybody knew it was just an arrangement how did you work out the topic of the dissertation if that was so so I wanted to work on neural networks and how they learned but I couldn't figure out how they learned I couldn't figure out anything significantly improved over what was called the perceptron convergence theorem which was already known in the early sixties by the late fifties early 60s neural networks are in the air I mean ever since Turing in a way no not so much my advisor had done neural", "start_timestamp": "00:48:36", "end_timestamp": "00:49:14", "start_second": 2916, "end_second": 2954, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=2916s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. 
Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "networks until I arrived just before I arrived in Edinburgh he switched his views so he had worked on holographic memories which I was very interested in and about when I arrived as a graduate student he was very impressed by a thesis by Terry Winograd that was using symbolic AI methods to try and understand natural language commands like yes put the red block on the green block in the blue box and he was very impressed by that he basically switched his interest from neural nets to symbolic AI and he'd taken on this great", "start_timestamp": "00:49:14", "end_timestamp": "00:49:58", "start_second": 2954, "end_second": 2998, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=2954s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "student who was killing it in neural nets and he tried to convince me to switch my interest but I wasn't having any no no you're stubborn I mean I was very stubborn I remember when I look back on it and having been an adviser of graduate students yes yes and having seen the various types of graduate students there are including the extremely stubborn ones I remember him coming into my office when I was a grad student and saying Geoffrey I've had this idea that you might be interested in let me explain it to you so he explained this", "start_timestamp": "00:49:58", "end_timestamp": "00:50:31", "start_second": 2998, "end_second": 3031, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=2998s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. 
Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "idea to me that seemed moderately interesting and at the end of the idea he said so Geoffrey do you think you'd like to work on that and I looked at him in amazement and said no no I've got my own ideas I need to work on um yes so he was very tolerant he was tolerant of having me as a graduate student even though he thought I was doing crazy stuff is it fair to say he was charmed by your audacity or is that No we'd have arguments all the time and I kept agreeing that okay if I hadn't made it work in six months I would switch to", "start_timestamp": "00:50:31", "end_timestamp": "00:51:10", "start_second": 3031, "end_second": 3070, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=3031s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "doing symbolic AI and then I would keep reneging on those arguments that's what he was tolerant of you were also unwilling as part of your temperament as we begin to understand it to just go ahead and do the union card thing which is right whatever it is that the professor says do do it and then get on with your real life you weren't prepared to do that no no I mean I wasn't gonna work on ideas I didn't believe in yeah you're not cynical yeah I am quite cynical about a lot of things but not like that I still want you to get your degree how", "start_timestamp": "00:51:10", "end_timestamp": "00:51:53", "start_second": 3070, "end_second": 3113, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=3070s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. 
Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "do you how does that manage to happen so in the end I managed to do something that wasn't learning in neural nets but was inference in neural nets and I made it work and it had a little bit of math that justified it mmm and he was happy with that and so I got a PhD and then I got out of there and I was very disillusioned with it all by then and I took another furlough furlough by the university or the field academia by academia itself and so again and you've done that before yes I drop out frequently yeah and so I took a year off I went to London and", "start_timestamp": "00:51:53", "end_timestamp": "00:52:37", "start_second": 3113, "end_second": 3157, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=3113s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "taught in a free school that was a very different environment yes a free school with a lot of disturbed children yes and then I went back I probably Trebek signal itself then I went back and got a job as a postdoc where in Sussex in Sussex for a while and finally I saw an advertisement for a job in San Diego that seemed like a really nice job in cognitive science where they were gonna recruit six postdocs in various areas who were going to interact with one another and try and understand the mind one of the articles I've read", "start_timestamp": "00:52:37", "end_timestamp": "00:53:23", "start_second": 3157, "end_second": 3203, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=3157s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. 
Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "about you not necessarily was accurate but said that the question of monetary support for work was also a factor in your going to America is that not true well because of the Lighthill report there were no jobs in AI in Britain were there it killed it basically they killed that there was one job in AI at Edinburgh and lots of very good people competing for it so you almost didn't have a choice no I couldn't get it I couldn't get an academic job in Britain at all zero I mean I couldn't even get an interview for an", "start_timestamp": "00:53:23", "end_timestamp": "00:53:56", "start_second": 3203, "end_second": 3236, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=3203s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "academic job now we're almost at the end and the whole point of this interview is really the origins of your thinking so that's not a problem but I wonder if we can get toward the end a kind of description of the academic environment in the American university and the sort of strategies that you encountered at that point when you went to San Diego so it's a big contrast that in Britain the academic establishment in AI was monolithic there was sort of the correct view yes and there wasn't room for multiple camps ah and in America at", "start_timestamp": "00:53:56", "end_timestamp": "00:54:31", "start_second": 3236, "end_second": 3271, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=3236s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E.
Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "least you've got two coasts so in linguistics you had sort of an East Coast Chomsky camp and a West Coast Fillmore and Lakoff camp and it was the same with AI but in San Diego there was a group of people particularly David Rumelhart who came from psychology but was strong mathematically who basically had a completely different view of AI he was interested in how it actually happened in the mind and he thought understanding what happened in the brain would be useful for understanding thought unlike many psychologists and he just", "start_timestamp": "00:54:31", "end_timestamp": "00:55:13", "start_second": 3271, "end_second": 3313, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=3271s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "had a view that was extremely compatible with what I'd been thinking so it's the first time, because Longuet-Higgins had changed his mind just before I arrived, yes it's the first time I'd been somewhere where I was working with someone who really had the same general beliefs about how to go about understanding the mind and what it was like as I did and that was wonderful working life as we're toward the end of this I'm gonna ask something that may be a ridiculous generalization and I'm perfectly", "start_timestamp": "00:55:13", "end_timestamp": "00:55:44", "start_second": 3313, "end_second": 3344, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=3313s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E.
Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "prepared to hear you say that but as people look at your career there are some lessons some draw about the stubborn persistence in an idea in the face of most people saying it's nonsense in America as well and then maybe even into Canada where you later went there was still an academy saying what you were interested in and what you were pursuing was wrong yes but I've said a number of bad things about psychology but what happened was after backpropagation had been rediscovered by Dave Rumelhart yes and he and I and various other", "start_timestamp": "00:55:44", "end_timestamp": "00:56:29", "start_second": 3344, "end_second": 3389, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=3344s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "people had shown it could do interesting things in terms of learning representations right there was a surge of interest which then died out within computer science because it didn't work as well as we'd hoped but in psychology people stayed interested ah so it had a home in psychology and there was always support for these ideas in psychology so you had people to talk yes to anyway yeah um although on the whole I was much more interested in making it solve problems like speech recognition and object recognition which", "start_timestamp": "00:56:29", "end_timestamp": "00:57:05", "start_second": 3389, "end_second": 3425, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=3389s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E.
Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "z9Fz96Mr4bM", "text": "the psychologists weren't really doing very effectively so I was interested in doing machine learning with it and the psychologists weren't really pushing that agenda right so in that sense you were relatively lonely I was relatively lonely but there was definitely it would be completely incorrect to say I was the kind of lone voice in the wilderness right there were a few other lone voices too but the wilderness was machine learning not the whole academic scene that's going to be the last word", "start_timestamp": "00:57:05", "end_timestamp": "00:57:38", "start_second": 3425, "end_second": 3458, "url": "https://www.youtube.com/watch?v=z9Fz96Mr4bM&t=3425s", "title": "The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Geoffrey E. Hinton", "thumbnail": "https://i.ytimg.com/vi/z9Fz96Mr4bM/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "the following is a conversation with Ian Goodfellow he's the author of the popular textbook on deep learning simply titled Deep Learning he coined the term generative adversarial networks otherwise known as GANs and with his 2014 paper is responsible for launching the incredible growth of research and innovation in this subfield of deep learning he got his BS and MS at Stanford his PhD at the University of Montreal with Yoshua Bengio and Aaron Courville he held several research positions including at OpenAI Google", "start_timestamp": "00:00:00", "end_timestamp": "00:00:36", "start_second": 0, "end_second": 36, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=0s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "Brain and now at Apple as the director of machine learning this recording happened while Ian was still at Google Brain
but we don't talk about anything specific to Google or any other organization this conversation is part of the artificial intelligence podcast if you enjoy it subscribe on YouTube iTunes or simply connect with me on Twitter at Lex Fridman spelled F R I D and now here's my conversation with Ian Goodfellow you open your popular deep learning book with a Russian doll type diagram that shows deep learning is a", "start_timestamp": "00:00:36", "end_timestamp": "00:01:14", "start_second": 36, "end_second": 74, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=36s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "subset of representation learning which in turn is a subset of machine learning and finally a subset of AI so this kind of implies that there may be limits to deep learning in the context of AI so what do you think are the current limits of deep learning and are those limits something that we can overcome with time yeah I think one of the biggest limitations of deep learning is that right now it requires really a lot of data especially labeled data there's some unsupervised and semi-supervised learning algorithms that can reduce the", "start_timestamp": "00:01:14", "end_timestamp": "00:01:48", "start_second": 74, "end_second": 108, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=74s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "amount of labeled data you need but they still require a lot of unlabeled data reinforcement learning algorithms they don't need labels but they need really a lot of experiences as human beings we don't learn to play Pong by failing at Pong two million times so just getting the generalization ability better is one of the most
important bottlenecks in the capability of the technology today and then I guess I'd also say deep learning is like one component of a bigger system so far nobody is really proposing to have only what you'd", "start_timestamp": "00:01:48", "end_timestamp": "00:02:21", "start_second": 108, "end_second": 141, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=108s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "call deep learning as the entire ingredient of intelligence you use deep learning as submodules of other systems like AlphaGo has a deep learning model that estimates the value function most reinforcement learning algorithms have a deep learning module that estimates which action to take next but you might have other components you're basically using deep learning as a function estimator do you think it's possible you said nobody is really thinking about this so far but do you think neural networks could be made to reason in the way symbolic", "start_timestamp": "00:02:21", "end_timestamp": "00:02:55", "start_second": 141, "end_second": 175, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=141s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "systems did in the 80s and 90s to create more like programs as opposed to functions yeah I think we already see that a little bit I already kind of think of neural nets as a kind of program I think of deep learning as basically learning programs that have more than one step so if you draw a flowchart or if you draw a TensorFlow graph describing your machine learning model I think of the depth of that graph as describing the number of steps that run in sequence and then the width of that graph is the number of",
"start_timestamp": "00:02:55", "end_timestamp": "00:03:28", "start_second": 175, "end_second": 208, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=175s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "steps that run in parallel now it's been long enough that we've had deep learning working that it's a little bit silly to even discuss shallow learning anymore but back when I first got involved in AI when we used machine learning we were usually learning things like support vector machines you could have a lot of input features to the model and you could multiply each feature by a different weight but all those multiplications were done in parallel to each other there wasn't a lot done in series I think what we got with deep", "start_timestamp": "00:03:28", "end_timestamp": "00:03:54", "start_second": 208, "end_second": 234, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=208s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "learning was really the ability to have steps of a program that run in sequence and I think that we've actually started to see that what's important with deep learning is more the fact that we have a multi-step program rather than the fact that we've learned a representation if you look at things like ResNets for example they take one particular kind of representation and they update it several times back when deep learning first really took off in the academic world in 2006 when Geoff Hinton showed that you could train deep belief", "start_timestamp": "00:03:54", "end_timestamp": "00:04:28", "start_second": 234, "end_second": 268, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=234s", "title": "Ian Goodfellow: Generative
Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "networks everybody who was interested in the idea thought of it as each layer learns a different level of abstraction that the first layer trained on images learns something like edges and the second layer learns corners and eventually you get these kind of grandmother cell units that recognize specific objects today I think most people think of it more as a computer program where as you add more layers you can do more updates before you output your final number but I don't think anybody believes that layer 150 of the", "start_timestamp": "00:04:28", "end_timestamp": "00:04:58", "start_second": 268, "end_second": 298, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=268s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "ResNet is a grandmother cell and you know layer 100 is contours or something like that okay so you think you're not thinking of it as a singular representation that keeps building you think of it as a program sort of almost like a state the representation is a state of understanding and yeah I think of it as a program that makes several updates and arrives at better and better understandings but it's not replacing the representation at each step it's refining it and in some sense that's a little bit like reasoning it's not", "start_timestamp": "00:04:58", "end_timestamp": "00:05:32", "start_second": 298, "end_second": 332, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=298s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "reasoning in the form of deduction but it's reasoning
in the form of taking a thought and refining it and refining it carefully until it's good enough to use do you think and I hope you don't mind we'll jump philosophical every once in a while do you think of you know cognition human cognition or even consciousness as simply a result of this kind of sequential representation learning do you think that can emerge cognition yes I think so consciousness it's really hard to even define what we mean by that I guess", "start_timestamp": "00:05:32", "end_timestamp": "00:06:07", "start_second": 332, "end_second": 367, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=332s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "there's consciousness is often defined as things like having self-awareness and that's relatively easy to turn into something actionable for a computer scientist to reason about people also define consciousness in terms of having qualitative states of experience like qualia and there's all these philosophical problems like could you imagine a zombie who does all the same information processing as a human but doesn't really have the qualitative experiences that we have that sort of thing I have no idea how to formalize or", "start_timestamp": "00:06:07", "end_timestamp": "00:06:37", "start_second": 367, "end_second": 397, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=367s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "turn it into a scientific question I don't know how you could run an experiment to tell whether a person is a zombie or not and similarly I don't know how you could run an experiment to tell whether an advanced AI system had become conscious in the sense of qualia or not
but in the more practical sense like almost like self attention you think consciousness and cognition can in an impressive way emerge from current types of architectures though yes yeah or if you think of consciousness in terms of self-awareness", "start_timestamp": "00:06:37", "end_timestamp": "00:07:08", "start_second": 397, "end_second": 428, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=397s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "and just making plans based on the fact that the agent itself exists in the world reinforcement learning algorithms are already more or less forced to model the agent's effect on the environment so that more limited version of consciousness is already something that we get limited versions of with reinforcement learning algorithms if they're trained well but you say limited so the big question really is how you jump from limited to human level yeah right and whether it's possible you know even just building common-sense", "start_timestamp": "00:07:08", "end_timestamp": "00:07:48", "start_second": 428, "end_second": 468, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=428s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "reasoning seems to be exceptionally difficult so okay if we scale things up if we get much better unsupervised learning if we get better at labeling if we get bigger datasets and more compute do you think we'll start to see really impressive things that go from limited to you know something echoes of human level cognition I think so yeah I'm optimistic about what can happen just with more computation and more data I do think it'll be important to get the right kind of data today most
of the machine learning systems we train are", "start_timestamp": "00:07:48", "end_timestamp": "00:08:22", "start_second": 468, "end_second": 502, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=468s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "mostly trained on one type of data for each model but the human brain we get all of our different senses and we have many different experiences like you know riding a bike driving a car talking to people reading I think when you get that kind of integrated data set working with a machine learning model that can actually close the loop and interact we may find that algorithms not so different from what we have today learn really interesting things when you scale them up a lot and a large amount of multimodal data so", "start_timestamp": "00:08:22", "end_timestamp": "00:08:58", "start_second": 502, "end_second": 538, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=502s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "multimodal is really interesting but within like your work on adversarial examples so selecting within one mode of data selecting better the difficult cases which are most useful to learn from oh yeah like could you get a whole lot of mileage out of designing a model that's resistant to adversarial examples or something like that right yeah question but my thinking on that has evolved a lot over the last few years one nice thing when I first started to really invest in studying", "start_timestamp": "00:08:58", "end_timestamp": "00:09:31", "start_second": 538, "end_second": 571, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=538s", "title": "Ian
Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "adversarial examples I was thinking of it mostly as adversarial examples reveal a big problem with machine learning and we would like to close the gap between how machine learning models respond to adversarial examples and how humans respond after studying the problem more I still think that adversarial examples are important I think of them now more as a security liability than as an issue that necessarily shows there's something uniquely wrong with machine learning as opposed to humans also do you see them", "start_timestamp": "00:09:31", "end_timestamp": "00:10:03", "start_second": 571, "end_second": 603, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=571s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "as a tool to improve the performance of the system not on the security side but literally just accuracy I do see them as a kind of tool on that side but maybe not quite as much as I used to think we've started to find that there's a trade-off between accuracy on adversarial examples and accuracy on clean examples back in 2014 when I did the first adversarially trained classifier that showed resistance to some kinds of adversarial examples it also got better at the clean data on MNIST and that's something we've replicated several times", "start_timestamp": "00:10:03", "end_timestamp": "00:10:37", "start_second": 603, "end_second": 637, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=603s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "on MNIST that when we train
against weak adversarial examples MNIST classifiers get more accurate so far that hasn't really held up on other data sets and hasn't held up when we train against stronger adversaries it seems like when you confront a really strong adversary you tend to have to give something up interesting this is such a compelling idea because it feels like that's how we humans learn yeah the difficult cases we try to think of what would we screw up and then we make sure we fix that yeah", "start_timestamp": "00:10:37", "end_timestamp": "00:11:10", "start_second": 637, "end_second": 670, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=637s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "it's also in a lot of branches of engineering you do a worst case analysis and make sure that your system will work in the worst case and then that guarantees that it'll work in all of the messy average cases that happen when you go out into a really randomized world you know with driving with autonomous vehicles there seems to be a desire to just look for, think adversarially, try to figure out how to mess up the system and if you can be robust to all those difficult cases then it's a hand waving empirical way to show that your", "start_timestamp": "00:11:10", "end_timestamp": "00:11:44", "start_second": 670, "end_second": 704, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=670s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "system is yeah yes today most adversarial example research isn't really focused on a particular use case but there are a lot of different use cases where you'd like to make sure that the adversary can't interfere with the operation of
your system like in finance if you have an algorithm making trades for you people go to a lot of effort to obfuscate their algorithm that's both to protect their IP because you don't want to research and develop a profitable trading algorithm and then have somebody else capture the gains but it's at least", "start_timestamp": "00:11:44", "end_timestamp": "00:12:16", "start_second": 704, "end_second": 736, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=704s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "partly because you don't want people to make adversarial examples that fool your algorithm into making bad trades or I guess one area that's been popular in the academic literature is speech recognition if you use speech recognition to hear an audio waveform and then turn that into a command that a phone executes for you you don't want a malicious adversary to be able to produce audio that gets interpreted as malicious commands especially if a human in the room doesn't realize that something like that", "start_timestamp": "00:12:16", "end_timestamp": "00:12:49", "start_second": 736, "end_second": 769, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=736s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "is happening in speech recognition has there been much success in being able to create adversarial examples that fool the system yeah actually I guess the first work that I'm aware of is a paper called Hidden Voice Commands that came out in 2016 I believe and they were able to show that they could make sounds that are not understandable by a human but are recognized as the target phrase that the attacker wants the phone to recognize it as since
then things have gotten a little bit better on the attacker side and worse on the defender", "start_timestamp": "00:12:49", "end_timestamp": "00:13:27", "start_second": 769, "end_second": 807, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=769s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "side it's become possible to make sounds that sound like normal speech but are actually interpreted as a different sentence than the human hears the level of perceptibility of the adversarial perturbation is still kind of high when you listen to the recording it sounds like there's some noise in the background just like rustling sounds but those rustling sounds are actually the adversarial perturbation that makes the phone hear a completely different sentence yeah that's so fascinating Peter Norvig mentioned that you're writing", "start_timestamp": "00:13:27", "end_timestamp": "00:14:01", "start_second": 807, "end_second": 841, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=807s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "the deep learning chapter for the fourth edition of the Artificial Intelligence: A Modern Approach book so how do you even begin summarizing the field of deep learning in a chapter well in my case I waited like a year before I actually wrote anything even having written a full length textbook before it's still pretty intimidating to try to start writing just one chapter that covers everything one thing that helped me make that plan was actually the experience of having written the full book before and then", "start_timestamp": "00:14:01", "end_timestamp": "00:14:37", "start_second": 841, "end_second": 877, "url":
"https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=841s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "watching how the field changed after the book came out I realized there's a lot of topics that were maybe extraneous in the first book and just seeing what stood the test of a few years of being published and what seems a little bit less important to have included now helped me pare down the topics I wanted to cover for the book it's also really nice now that the field is kind of stabilized to the point where some core ideas from the 1980s are still used today when I first started studying machine learning almost everything from", "start_timestamp": "00:14:37", "end_timestamp": "00:15:07", "start_second": 877, "end_second": 907, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=877s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "the 1980s had been rejected and now some of it has come back so that stuff that's really stood the test of time is what I focused on putting into the book there's also I guess two different philosophies about how you might write a book one philosophy is you try to write a reference that covers everything and the other philosophy is you try to provide a high level summary that gives people the language to understand a field and tells them what the most important concepts are the first deep learning book that I", "start_timestamp": "00:15:07", "end_timestamp": "00:15:36", "start_second": 907, "end_second": 936, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=907s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} 
{"video_id": "Z6rxFNMGdn0", "text": "wrote with Yoshua and Aaron was somewhere between the two philosophies that it's trying to be both a reference and an introductory guide writing this chapter for the Russell and Norvig book I was able to focus more on just a concise introduction of the key concepts and the language you need to read about them more and in a lot of cases I actually just wrote paragraphs that said here's a rapidly evolving area that you should pay attention to it's pointless to try to tell you what the latest and best version of a you know", "start_timestamp": "00:15:36", "end_timestamp": "00:16:07", "start_second": 936, "end_second": 967, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=936s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "learning to learn model is right you know I can point you to a paper that's recent right now but there isn't a whole lot of reason to delve into exactly what's going on with the latest learning to learn approach or the latest module produced by a learning to learn algorithm you should know that learning to learn is a thing and that it may very well be the source of the latest and greatest convolutional net or recurrent net module that you would want to use in your latest project but there isn't a lot of point in trying to summarize", "start_timestamp": "00:16:07", "end_timestamp": "00:16:37", "start_second": 967, "end_second": 997, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=967s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "exactly which architecture and which learning approach got to which level of performance so you maybe focus more on the basics of the methodology so from back propagation
to feed-forward to recurrent neural networks convolutional that kind of thing yeah yeah so if I were to ask you I remember I took an algorithms and data structures course there of course remember the professor asked what is an algorithm and yelled at everybody in a good way that nobody was answering it correctly everybody knew what the algorithm was it was a graduate course everybody knew", "start_timestamp": "00:16:37", "end_timestamp": "00:17:16", "start_second": 997, "end_second": 1036, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=997s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "what an algorithm was but they weren't able to answer it well let me ask you in that same spirit what is deep learning I would say deep learning is any kind of machine learning that involves learning parameters of more than one consecutive step so that I mean shallow learning is things where you learn a lot of operations that happen in parallel you might have a system that makes multiple steps like you might have hand-designed feature extractors but really only one step is learned deep learning is anything where you have multiple", "start_timestamp": "00:17:16", "end_timestamp": "00:17:55", "start_second": 1036, "end_second": 1075, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=1036s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "operations in sequence and that includes the things that are really popular today like convolutional networks and recurrent networks but it also includes some of the things that have died out like Boltzmann machines where we weren't using back propagation today I hear a lot of people define deep learning as gradient descent applied to these differentiable
functions and I think that's a legitimate usage of the term it's just different from the way that I use the term myself so what's an example of deep learning that is not gradient", "start_timestamp": "00:17:55", "end_timestamp": "00:18:32", "start_second": 1075, "end_second": 1112, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=1075s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "descent on differentiable functions in your I mean not specifically perhaps but more even looking into the future what's your thought about that space of approaches yeah so I tend to think of machine learning algorithms as decomposed into really three different pieces there's the model which can be something like a neural net or a Boltzmann machine or a recurrent model and I basically just described how do you take data and how do you take parameters and you know what function do you use to make a prediction given the", "start_timestamp": "00:18:32", "end_timestamp": "00:19:04", "start_second": 1112, "end_second": 1144, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=1112s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "data and the parameters another piece of the learning algorithm is the optimization algorithm now not every algorithm can be really described in terms of optimization but what's the algorithm for updating the parameters or updating whatever the state of the network is and then the last part is the data set like how do you actually represent the world as it comes into your machine learning system so I think of deep learning as telling us something about what does the model look like and basically to qualify as deep I", "start_timestamp": "00:19:04",
"end_timestamp": "00:19:41", "start_second": 1144, "end_second": 1181, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=1144s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "say that it just has to have multiple layers that can be multiple steps in a feed-forward differentiable computation that can be multiple layers in a graphical model there's a lot of ways that you could satisfy me that something has multiple steps that are each parameterised separately I think of gradient descent as being all about that other piece the how do you actually update the parameters piece so you can imagine having a deep model like a convolutional net and training it with something like evolution or a genetic", "start_timestamp": "00:19:41", "end_timestamp": "00:20:10", "start_second": 1181, "end_second": 1210, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=1181s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "algorithm and I would say that still qualifies as deep learning and then in terms of models that aren't necessarily differentiable I guess Boltzmann machines are probably the main example of something where you can't really take a derivative and use that for the learning process but you can still argue that the model has many steps of processing that it applies when you run inference in the model so that's the steps of processing that's key so Geoff Hinton suggests that we need to throw away back prop back", "start_timestamp": "00:20:10", "end_timestamp": "00:20:42", "start_second": 1210, "end_second": 1242, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=1210s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19",
"thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "propagation and start all over what do you think about that what could an alternative direction of training neural networks look like I don't know that back propagation is going to go away entirely most of the time when we decide that a machine learning algorithm isn't on the critical path to research for improving AI the algorithm doesn't die it just becomes used for some specialized set of things a lot of algorithms like logistic regression don't seem that exciting to AI researchers who are working on things", "start_timestamp": "00:20:42", "end_timestamp": "00:21:14", "start_second": 1242, "end_second": 1274, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=1242s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "like speech recognition or autonomous cars today but there's still a lot of use for logistic regression in things like analyzing really noisy data in medicine and finance or making really rapid predictions in really time-limited contexts so I think back propagation and gradient descent are around to stay but they may not end up being everything that we need to get to real human level or superhuman AI are you optimistic about us discovering you know back propagation has been around for a few decades", "start_timestamp": "00:21:14", "end_timestamp": "00:21:49", "start_second": 1274, "end_second": 1309, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=1274s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "so are you optimistic about us as a community being able to discover something better yeah I am I think we likely will find
something that works better you could imagine things like having stacks of models where some of the lower level models predict parameters of the higher level models and so at the top level you're not learning in terms of literally calculating gradients but just predicting how different values will perform you can kind of see that already in some areas like Bayesian optimization where you have a Gaussian process that", "start_timestamp": "00:21:49", "end_timestamp": "00:22:23", "start_second": 1309, "end_second": 1343, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=1309s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "predicts how well different parameter values will perform we already used those kinds of algorithms for things like hyperparameter optimization and in general we know a lot of things other than backprop that work really well for specific problems the main thing we haven't found is a way of taking one of these other non-backprop-based algorithms and having it really advance the state-of-the-art on an AI level problem right but I wouldn't be surprised if eventually we find that some of these algorithms even the ones that already exist not", "start_timestamp": "00:22:23", "end_timestamp": "00:22:52", "start_second": 1343, "end_second": 1372, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=1343s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "even necessarily a new one we might find some way of customizing one of these algorithms to do something really interesting at the level of cognition or the level of I think one system that we really don't have working quite right yet is like short-term memory we have things like LSTMs
they're called long short-term memory they still don't do quite what a human does with short-term memory like gradient descent to learn a specific fact has to do multiple steps on that fact like if I tell you the meeting today is at 3 p.m. I don't need", "start_timestamp": "00:22:52", "end_timestamp": "00:23:34", "start_second": 1372, "end_second": 1414, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=1372s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "to say over and over again it's at 3 p.m. it's at 3 p.m. it's at 3 p.m. it's at 3 p.m. right for you to do a gradient step on each one you just hear it once and you remember it there's been some work on things like self attention and attention-like mechanisms like the neural Turing machine that can write to memory cells and update themselves with facts like that right away but I don't think we've really nailed it yet and that's one area where I'd imagine that new optimization algorithms or different ways of applying existing", "start_timestamp": "00:23:34", "end_timestamp": "00:24:03", "start_second": 1414, "end_second": 1443, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=1414s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "optimization algorithms could give us a way of just lightning-fast updating the state of a machine learning system to contain a specific fact like that without needing to have it presented over and over and over again so some of the success of symbolic systems in the 80s is they were able to assemble these kinds of facts better but there's a lot of expert input required and it's very limited in that sense do you ever look back to that as something that will have to return
to eventually sort of dust off the book from the shelf and", "start_timestamp": "00:24:03", "end_timestamp": "00:24:38", "start_second": 1443, "end_second": 1478, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=1443s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "think about how we build knowledge representation knowledge bases well we have to use graph searches right and like first-order logic and entailment and things like that yeah exactly in my particular line of work which has mostly been machine learning security and also generative modeling I haven't usually found myself moving in that direction for generative models I could see a little bit of it could be useful if you had something like a differentiable knowledge base or some other kind of knowledge base where it's", "start_timestamp": "00:24:38", "end_timestamp": "00:25:11", "start_second": 1478, "end_second": 1511, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=1478s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "possible for some of our fuzzier machine learning algorithms to interact with the knowledge base a memory network is kind of like that it's a differentiable knowledge base of sorts yeah but if we had a really easy way of giving feedback to machine learning models that would clearly help a lot with generative models and so you could imagine one way of getting there would be get a lot better at natural language processing but another way of getting there would be take some kind of knowledge base and figure out a way for", "start_timestamp": "00:25:11", "end_timestamp": "00:25:40", "start_second": 1511, "end_second": 1540, "url":
"https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=1511s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "it to actually interact with a neural network being able to have a chat with a neural network yes so like one thing in generative models we see a lot today is you'll get things like faces that are not symmetrical like people that have two eyes that are different colors and I mean there are people with eyes that are different colors in real life but not nearly as many of them as you tend to see in the machine learning generated data so if you had either a knowledge base that could contain the fact people's faces are generally", "start_timestamp": "00:25:40", "end_timestamp": "00:26:11", "start_second": 1540, "end_second": 1571, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=1540s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "approximately symmetric and eye color is especially likely to be the same on both sides being able to just inject that hint into the machine learning model without it having to discover that itself after studying a lot of data it would be a really useful feature I could see a lot of ways of getting there without bringing back some of the 1980s technology but I also see some ways that you could imagine extending the 1980s technology to play nice with neural nets and have it help get there awesome so you talked about the story of", "start_timestamp": "00:26:11", "end_timestamp": "00:26:42", "start_second": 1571, "end_second": 1602, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=1571s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail":
"https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "you coming up with the idea of GANs at a bar with some friends you were arguing that this you know GANs would work generative adversarial networks and the others didn't think so then you went home at midnight coded it up and it worked so if I was a friend of yours at the bar I would also have doubts it's a really nice idea but I'm very skeptical that it would work what was the basis of their skepticism what was the basis of your intuition why it should work I don't want to be someone who goes around promoting alcohol for the science in", "start_timestamp": "00:26:42", "end_timestamp": "00:27:19", "start_second": 1602, "end_second": 1639, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=1602s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "this case I do actually think that drinking helped a little bit mm-hmm when your inhibitions are lowered you're more willing to try out things that you wouldn't try out otherwise so I have noticed it in general that I'm less prone to shooting down some of my own ideas when I have had a little bit to drink I think if I had had that idea at lunch time yeah I probably would have thought it's hard enough to train one neural net you can't train a second neural net in the inner loop of the outer neural net that was basically my", "start_timestamp": "00:27:19", "end_timestamp": "00:27:48", "start_second": 1639, "end_second": 1668, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=1639s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "friend's reaction was that trying to train two neural nets at the same time would be too hard so it was
more about the training process and so my skepticism would be you know I'm sure you could train it but the thing it would converge to would not be able to generate anything reasonable and any kind of reasonable realism yeah so part of what all of us were thinking about when we had this conversation was deep Boltzmann machines which a lot of us in the lab including me were a big fan of deep Boltzmann machines at the time they involved two", "start_timestamp": "00:27:48", "end_timestamp": "00:28:21", "start_second": 1668, "end_second": 1701, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=1668s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "separate processes running at the same time one of them is called the positive phase where you load data into the model and tell the model to make the data more likely the other is called the negative phase where you draw samples from the model and tell the model to make those samples less likely in a deep Boltzmann machine it's not trivial to generate a sample you have to actually run an iterative process that gets better and better samples coming closer and closer to the distribution the model represents so during the training process you're", "start_timestamp": "00:28:21", "end_timestamp": "00:28:53", "start_second": 1701, "end_second": 1733, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=1701s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "always running these two systems at the same time one that's updating the parameters of the model and another one that's trying to generate samples from the model and they worked really well on things like MNIST a lot of us in the lab including me had tried to get the Boltzmann
machines to scale past MNIST to things like generating color photos and we just couldn't get the two processes to stay synchronized so when I had the idea for GANs a lot of people thought that the discriminator would have more or less the same problem as", "start_timestamp": "00:28:53", "end_timestamp": "00:29:22", "start_second": 1733, "end_second": 1762, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=1733s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "the negative phase in the Boltzmann machine that trying to train the discriminator in the inner loop you just couldn't get it to keep up with the generator in the outer loop and that would prevent it from converging to anything useful yeah I share that intuition yeah but that turns out to not be the case a lot of the time with machine learning algorithms it's really hard to predict ahead of time how well they'll actually perform you have to just run the experiment and see what happens and I would say I still today don't have", "start_timestamp": "00:29:22", "end_timestamp": "00:29:51", "start_second": 1762, "end_second": 1791, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=1762s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "like one factor I can put my finger on and say this is why GANs worked for photo generation and deep Boltzmann machines don't there are a lot of theory papers showing that under some theoretical settings the GAN algorithm does actually converge but those settings are restricted enough that they don't necessarily explain the whole picture in terms of all the results that we see in practice so taking a step back can you in the same way as we talked about deep learning can you
tell me what generative adversarial", "start_timestamp": "00:29:51", "end_timestamp": "00:30:27", "start_second": 1791, "end_second": 1827, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=1791s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "networks are yeah so generative adversarial networks are a particular kind of generative model a generative model is a machine learning model that can train on some set of data like so you have a collection of photos of cats and you want to generate more photos of cats or you want to estimate a probability distribution over cats so you can ask how likely it is that some new image is a photo of a cat GANs are one way of doing this some generative models are good at creating new data other generative models are good at estimating that", "start_timestamp": "00:30:27", "end_timestamp": "00:31:01", "start_second": 1827, "end_second": 1861, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=1827s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "density function and telling you how likely particular pieces of data are to come from the same distribution as the training data GANs are more focused on generating samples rather than estimating the density function there are some kinds of GANs like Flow-GAN that can do both but mostly GANs are about generating samples of generating new photos of cats that look realistic and they do that completely from scratch it's analogous to human imagination when a GAN creates a new image of a cat it's using a neural network to produce a cat", "start_timestamp": "00:31:01", "end_timestamp": "00:31:39", "start_second": 1861, "end_second": 1899, "url":
"https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=1861s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "that has not existed before it isn't doing something like compositing photos together you're not literally taking the eye off of one cat and the ear off of another cat it's more of this digestive process where the neural net trains on a lot of data and comes up with some representation of the probability distribution and generates entirely new cats there are a lot of different ways of building a generative model what's specific to GANs is that we have a two-player game in the game theoretic sense and as the players in", "start_timestamp": "00:31:39", "end_timestamp": "00:32:08", "start_second": 1899, "end_second": 1928, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=1899s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "this game compete one of them becomes able to generate realistic data the first player is called the generator it produces output data such as just images for example and at the start of the learning process it'll just produce completely random images the other player is called the discriminator the discriminator takes images as input and guesses whether they're real or fake you train it both on real data so photos that come from your training set actual photos of cats and you train it to say that those are real you also train it on images that come", "start_timestamp": "00:32:08", "end_timestamp": "00:32:42", "start_second": 1928, "end_second": 1962, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=1928s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail":
"https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "from the generator network and you train it to say that those are fake as the two players compete in this game the discriminator tries to become better at recognizing whether images are real or fake and the generator becomes better at fooling the discriminator into thinking that its outputs are real and you can analyze this through the language of game theory and find that there's a Nash equilibrium where the generator has captured the correct probability distribution so in the cat example it makes perfectly realistic cat", "start_timestamp": "00:32:42", "end_timestamp": "00:33:13", "start_second": 1962, "end_second": 1993, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=1962s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "photos and the discriminator is unable to do better than random guessing because all the samples coming from both the data and the generator look equally likely to have come from either source so do you ever sit back and does it just blow your mind that this thing works so it's able to estimate that density function enough to generate realistic images I mean do you ever sit back and think how does this even work this is quite incredible especially where GANs have gone in terms of", "start_timestamp": "00:33:13", "end_timestamp": "00:33:48", "start_second": 1993, "end_second": 2028, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=1993s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "realism yeah and not just to flatter my own work but generative models all of them have this property that if
they really did what we asked them to do they would do nothing but memorize the training data right some models that are based on maximizing the likelihood the way that you obtain the maximum likelihood for a specific training set is you assign all of your probability mass to the training examples and nowhere else for GANs the game is played using a training set so the way that you become unbeatable in the game is you literally", "start_timestamp": "00:33:48", "end_timestamp": "00:34:21", "start_second": 2028, "end_second": 2061, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=2028s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "memorize training examples one of my former interns wrote a paper his name is Vaishnav Nagarajan and he showed that it's actually hard for the generator to memorize the training data hard in a statistical learning theory sense that you can actually create reasons for why it would require quite a lot of learning steps and a lot of observations of different latent variables before you could memorize the training data that still doesn't really explain why when you produce samples that are new why do you get compelling", "start_timestamp": "00:34:21", "end_timestamp": "00:34:59", "start_second": 2061, "end_second": 2099, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=2061s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "images rather than you know just garbage that's different from the training set and I don't think we really have a good answer for that especially if you think about how many possible images are out there and how few images the generative model sees during training it seems just unreasonable that generative
models create new images as well as they do especially considering that we're basically training them to memorize rather than generalize I think part of the answer is there's a paper called deep image prior where they show that", "start_timestamp": "00:34:59", "end_timestamp": "00:35:31", "start_second": 2099, "end_second": 2131, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=2099s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "you can take a convolutional net and you don't even need to learn the parameters of it at all you just use the model architecture and it's already useful for things like inpainting images I think that shows us that the convolutional network architecture captures something really important about the structure of images and we don't need to actually use learning to capture all the information coming out of the convolutional net that would imply that it would be much harder to make generative models in", "start_timestamp": "00:35:31", "end_timestamp": "00:35:59", "start_second": 2131, "end_second": 2159, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=2131s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "other domains so far we're able to make reasonable speech models and things like that but to be honest we haven't actually explored a whole lot of different data sets all that much we don't for example see a lot of deep learning models of like biology datasets where you have lots of microarrays measuring the amount of different enzymes and things like that so we may find that some of the progress that we've seen for images and speech turns out to really rely heavily on the model architecture and we were able to do
what", "start_timestamp": "00:35:59", "end_timestamp": "00:36:32", "start_second": 2159, "end_second": 2192, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=2159s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "we did for vision by trying to reverse-engineer the human visual system and maybe it'll turn out that we can't just use that same trick for arbitrary kinds of data all right so there's aspects of the human vision system the hardware of it that makes it without learning without cognition just makes it really effective at detecting the patterns we see in the visual world yeah that's really interesting in a big quick overview in your view what types of GANs are there and what other generative models besides", "start_timestamp": "00:36:32", "end_timestamp": "00:37:08", "start_second": 2192, "end_second": 2228, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=2192s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "GANs are there yeah so it's maybe a little bit easier to start with what kinds of generative models are there other than GANs so most generative models are likelihood based where to train them you have a model that tells you how much probability it assigns to a particular example and you just maximize the probability assigned to all the training examples it turns out that it's hard to design a model that can create really complicated images or really complicated audio waveforms and still have it be possible to estimate the likelihood", "start_timestamp": "00:37:08", "end_timestamp": "00:37:47", "start_second": 2228, "end_second": 2267, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=2228s", "title":
"Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "function from a computational point of view most interesting models that you would just write down intuitively it turns out that it's almost impossible to calculate the amount of probability they assign to a particular point so there's a few different schools of generative models in the likelihood family one approach is to very carefully design the model so that it is computationally tractable to measure the density it assigns to a particular point so there are things like autoregressive models like PixelCNN those basically break", "start_timestamp": "00:37:47", "end_timestamp": "00:38:23", "start_second": 2267, "end_second": 2303, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=2267s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "down the probability distribution into a product over every single feature so for an image you estimate the probability of each pixel given all of the pixels that came before it hmm there's tricks where if you want to measure the density function you can actually calculate the density for all these pixels more or less in parallel generating the image still tends to require you to go one pixel at a time and that can be very slow but there are again tricks for doing this in a hierarchical pattern where you can keep the runtime under control or the", "start_timestamp": "00:38:23", "end_timestamp": "00:38:56", "start_second": 2303, "end_second": 2336, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=2303s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0",
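The chain-rule factorization behind autoregressive models like PixelCNN can be sketched in a few lines. The toy below is illustrative, not from any paper: the uniform "conditional" is a placeholder for the masked-convolution network a real PixelCNN would use, and it just shows how the joint log-likelihood decomposes into a sum of per-pixel conditional log-probabilities:

```python
import math

# Toy sketch of the autoregressive factorization used by PixelCNN-style
# models: p(x) = prod_i p(x_i | x_1, ..., x_{i-1}), so the joint
# log-likelihood is a sum of conditional log-probs. A real PixelCNN
# computes each conditional with masked convolutions; here the
# "conditional" is a placeholder uniform distribution.

NUM_VALUES = 4  # possible intensity values per pixel (toy setting)

def toy_conditional(prev_pixels):
    """Placeholder for p(x_i | x_<i): uniform over pixel values."""
    return [1.0 / NUM_VALUES] * NUM_VALUES

def log_likelihood(pixels):
    """log p(x) = sum_i log p(x_i | x_<i)."""
    total = 0.0
    for i, value in enumerate(pixels):
        probs = toy_conditional(pixels[:i])
        total += math.log(probs[value])
    return total

# A 4-pixel "image": joint probability is (1/4)^4 under the toy model.
ll = log_likelihood([0, 3, 1, 2])
```

Because every conditional here is uniform, `ll` equals `4 * log(1/4)`; swapping in a learned conditional is what makes the density both tractable to evaluate and expressive, at the cost of pixel-at-a-time sampling.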
"text": "quality of the images it generates putting runtime aside pretty good they're reasonable yeah I would say a lot of the best results are from GANs these days but it can be hard to tell how much of that is based on who's studying which type of algorithm if that makes sense the amount of effort invested in it but yeah or like the kind of expertise so a lot of people who've traditionally been excited about graphics or art and things like that have gotten interested in GANs and to some extent it's hard to tell are GANs", "start_timestamp": "00:38:56", "end_timestamp": "00:39:29", "start_second": 2336, "end_second": 2369, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=2336s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "doing better because they have a lot of graphics and art experts behind them or are GANs doing better because they're more computationally efficient or are GANs doing better because they prioritize the realism of samples over the accuracy of the density function I think all of those are potentially valid explanations and it's hard to tell so can you give a brief history of GANs from the 2014 paper yeah so a few highlights in the first paper we just showed that GANs basically work if you look back at the", "start_timestamp": "00:39:29", "end_timestamp": "00:40:05", "start_second": 2369, "end_second": 2405, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=2369s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "samples we had now they looked terrible on the CIFAR-10 dataset you can't even recognize objects in them your paper I believe used CIFAR-10 we used MNIST which is little handwritten digits we used the Toronto face
database which is small grayscale photos of faces we did have recognizable faces my colleague Bing Xu put together the first GAN face model for that paper we also had the CIFAR-10 dataset which is things like very small 32 by 32 pixels of cars and cats and dogs for that we didn't get recognizable objects but all the deep", "start_timestamp": "00:40:05", "end_timestamp": "00:40:44", "start_second": 2405, "end_second": 2444, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=2405s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "learning people back then were really used to looking at these failed samples and kind of reading them like tea leaves right and people who are used to reading the tea leaves recognize that our tea leaves at least look different right maybe not necessarily better but there was something unusual about them and that got a lot of us excited one of the next really big steps was LAPGAN by Emily Denton and Soumith Chintala at Facebook AI research where they actually got really good high-resolution photos working with GANs for the first time", "start_timestamp": "00:40:44", "end_timestamp": "00:41:15", "start_second": 2444, "end_second": 2475, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=2444s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "they had a complicated system where they generated the image starting at low res and then scaling up to high res but they were able to get it to work and then in 2015 I believe later that same year Alec Radford and Soumith Chintala and Luke Metz published the DCGAN paper which stands for deep convolutional GAN it's kind of a non-unique name because these days basically all GANs and even some
before that were deep and convolutional but they just kind of picked a name for a really great recipe where they were able to actually using", "start_timestamp": "00:41:15", "end_timestamp": "00:41:54", "start_second": 2475, "end_second": 2514, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=2475s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "only one model instead of a multi-step process actually generate realistic images of faces and things like that that was sort of like the beginning of the Cambrian explosion of GANs like you know once you got animals that had a backbone you suddenly got lots of different versions of you know like fish and four-legged animals and things like that so DCGAN became kind of the backbone for many different models that came out used as a baseline even still yeah and so from there I would say some interesting", "start_timestamp": "00:41:54", "end_timestamp": "00:42:25", "start_second": 2514, "end_second": 2545, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=2514s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "things we've seen are there's a lot you can say about how just the quality of standard image generation GANs has increased but what's also maybe more interesting on an intellectual level is how the things you can use GANs for has also changed one thing is that you can use them to learn classifiers without having to have class labels for every example in your training set so that's called semi-supervised learning my colleague at OpenAI Tim Salimans who's at Brain now wrote a paper called improved techniques for training", "start_timestamp": "00:42:25", "end_timestamp":
"00:42:58", "start_second": 2545, "end_second": 2578, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=2545s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "GANs I'm a co-author on this paper but I can't claim any credit for this particular part one thing he showed in the paper is that you can take the GAN discriminator and use it as a classifier that actually tells you you know this image is a cat this image is a dog this image is a car this image is a truck and so on and not just to say whether the image is real or fake but if it is real to say specifically what kind of object it is and he found that you can train these classifiers with far fewer labeled examples than traditional classifiers", "start_timestamp": "00:42:58", "end_timestamp": "00:43:29", "start_second": 2578, "end_second": 2609, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=2578s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "so semi-supervised based on not just your discrimination ability but your ability to classify you're going to converge much faster to being effective at being a discriminator yeah so for example for the MNIST dataset you want to look at an image of a handwritten digit and say whether it's a 0 a 1 or 2 and so on to get down to less than 1% error required around 60,000 examples until maybe about 2014 or so in 2016 with this semi-supervised GAN project Tim was able to get below 1% error using only a", "start_timestamp": "00:43:29", "end_timestamp": "00:44:11", "start_second": 2609, "end_second": 2651, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=2609s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex
Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "hundred labeled examples so that was about a 600 X decrease in the amount of labels that he needed he's still using more images than that but he doesn't need to have each of them labeled as you know this one's a 1 this one's a 2 this one's a 0 and so on then for GANs to be able to generate recognizable objects so objects for a particular class you still need labeled data because you need to know what it means to be a particular class cat dog how do you think we can move away from that yeah some researchers at Brain Zurich", "start_timestamp": "00:44:11", "end_timestamp": "00:44:46", "start_second": 2651, "end_second": 2686, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=2651s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "actually just released a really great paper on semi-supervised GANs where their goal isn't to classify it's to make recognizable objects despite not having a lot of labeled data they were working off of DeepMind's BigGAN project and they showed that they can match the performance of BigGAN using only 10% I believe of the labels BigGAN was trained on the ImageNet dataset which is about 1.2 million images and had all of them labeled this latest project from Brain Zurich shows that they're able to get away with only having about", "start_timestamp": "00:44:46", "end_timestamp": "00:45:21", "start_second": 2686, "end_second": 2721, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=2686s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "10% of the images labeled and they do that
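The K+1-class discriminator trick described above can be sketched with a toy softmax. This is a hedged illustration of the idea in Improved Techniques for Training GANs, not the paper's code: the logits are made-up numbers standing in for a trained network's output, where the last class means "fake" and the first K classes are the real categories (e.g. the ten MNIST digits):

```python
import math

# Hedged sketch of the semi-supervised discriminator: instead of a
# binary real/fake output, the discriminator has K + 1 classes --
# K real classes plus one extra "fake" class. P(real) is the total
# probability mass on the K real classes. Logits are illustrative.

K = 10  # number of real classes (e.g. MNIST digits 0-9)

def softmax(logits):
    m = max(logits)                       # subtract max for stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

logits = [2.0] + [0.1] * (K - 1) + [-1.0]  # final logit = "fake" class
probs = softmax(logits)                    # length K + 1

p_real = sum(probs[:K])                    # probability the input is real
digit = max(range(K), key=lambda i: probs[i])  # most likely digit, if real
```

Training the same head on both labeled real images and generated fakes is what lets the classifier part benefit from the unlabeled data.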
essentially using a clustering algorithm where the discriminator learns to assign the objects to groups and then this understanding that objects can be grouped into you know similar types helps it to form more realistic ideas of what should be appearing in the image because it knows that every image it creates has to come from one of these archetypal groups rather than just being some arbitrary image if you train a GAN with no class labels you tend to get things that look sort of like grass or", "start_timestamp": "00:45:21", "end_timestamp": "00:45:57", "start_second": 2721, "end_second": 2757, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=2721s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "water or brick or dirt but without necessarily a lot going on in them and I think that's partly because if you look at a large ImageNet image the object doesn't necessarily occupy the whole image and so you learn to create realistic sets of pixels but you don't necessarily learn that the object is the star of the show and you want it to be in every image you make yeah I've heard you talk about the horse to zebra CycleGAN mapping and how it turns out again thought-provoking that horses are usually on grass and zebras", "start_timestamp": "00:45:57", "end_timestamp": "00:46:34", "start_second": 2757, "end_second": 2794, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=2757s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "are usually on drier terrain so when you're doing that kind of generation you're going to end up generating greener horses or whatever so those are connected together it's not just yeah you're not able to segment yeah
it's generating the segments away so are there other types of games you've come across in your mind that neural networks can play with each other to be able to solve problems yeah the one that I spend most of my time on is in security you can model most interactions as a game where there's", "start_timestamp": "00:46:34", "end_timestamp": "00:47:13", "start_second": 2794, "end_second": 2833, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=2794s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "attackers trying to break your system and you or the defender trying to build a resilient system there's also domain adversarial learning which is an approach to domain adaptation that looks really a lot like GANs the authors had the idea before the GAN paper came out their paper came out a little bit later and you know they're very nice and cited the GAN paper but I know that they actually had the idea before it came out domain adaptation is when you want to train a machine learning model in one setting called a", "start_timestamp": "00:47:13", "end_timestamp": "00:47:47", "start_second": 2833, "end_second": 2867, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=2833s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "domain and then deploy it in another domain later and you would like it to perform well in the new domain even though the new domain is different from how it was trained so for example you might want to train on a really clean image dataset like ImageNet but then deploy on users' phones where the user is taking you know pictures in the dark or pictures while moving quickly and just pictures that aren't really centered or composed all
that well when you take a normal machine learning model it often degrades really badly", "start_timestamp": "00:47:47", "end_timestamp": "00:48:17", "start_second": 2867, "end_second": 2897, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=2867s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "when you move to the new domain because it looks so different from what the model was trained on domain adaptation algorithms try to smooth out that gap and the domain adverse oral approach is based on training a feature extractor where the features have the same statistics regardless of which domain you extracted them on so in the domain adversarial game you have one player that's a feature extractor and another player that's a domain recognizer the domain recognizer wants to look at the output of the feature extractor and", "start_timestamp": "00:48:17", "end_timestamp": "00:48:45", "start_second": 2897, "end_second": 2925, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=2897s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "guess which of the two domains oh the features came from so it's a lot like the real versus fake discriminator and ends and then the feature extractor you can think of as loosely analogous to the generator in games except what's trying to do here is both fool the domain recognizer and two not knowing which domain the data came from and also extract features that are good for classification so at the end of the day you can in in the cases where it works out you can actually get features that work about the same in both domains", "start_timestamp": "00:48:45", "end_timestamp": "00:49:20", "start_second": 2925, "end_second": 2960, "url": 
"https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=2925s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "sometimes this has a drawback where in order to make things work the same in both domains it just gets worse at the first one but there are a lot of cases where it actually works out well on both do you think GANs can be useful in the context of data augmentation yeah one thing you could hope for with GANs is you could imagine I've got a limited training set and I'd like to make more training data to train something else like a classifier you could train a GAN on the training set and then create more data and then maybe the classifier would", "start_timestamp": "00:49:20", "end_timestamp": "00:49:54", "start_second": 2960, "end_second": 2994, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=2960s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "perform better on the test set after training on that bigger GAN-generated dataset so that's the simplest version of something you might hope would work I've never heard of that particular approach working but I think there's some closely related things that I think could work in the future and some that actually already have worked so if you think a little bit about what we'd be hoping for if we use the GAN to make more training data we're hoping that the GAN will generalize to new examples better than the classifier", "start_timestamp": "00:49:54", "end_timestamp": "00:50:23", "start_second": 2994, "end_second": 3023, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=2994s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail":
"https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "would have generalized if it was trained on the same data and I don't know of any reason to believe that the GAN would generalize better than the classifier would but what we might hope for is that the GAN could generalize differently from a specific classifier so one thing I think is worth trying that I haven't personally tried but someone could try is what if you trained a whole lot of different generative models on the same training set create samples from all of them and then train a classifier on that", "start_timestamp": "00:50:23", "end_timestamp": "00:50:49", "start_second": 3023, "end_second": 3049, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3023s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "because each of the generative models might generalize in a slightly different way they might capture many different axes of variation that one individual model wouldn't and then the classifier can capture all of those ideas by training on all of their data so it'd be a little bit like making an ensemble of classifiers an ensemble of GANs yeah in a way I think that could generalize better the other thing that GANs are really good for is not necessarily generating new data that's exactly like what you already have but", "start_timestamp": "00:50:49", "end_timestamp": "00:51:19", "start_second": 3049, "end_second": 3079, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3049s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "by generating new data that has different properties from the data you already had one thing that you can do is you can create
differentially private data so suppose that you have something like medical records and you don't want to train a classifier on the medical records and then publish the classifier because someone might be able to reverse-engineer some of the medical records you trained on there's a paper from Casey Greene's lab that shows how you can train a GAN using differential privacy and then the samples from the GAN", "start_timestamp": "00:51:19", "end_timestamp": "00:51:48", "start_second": 3079, "end_second": 3108, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3079s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "still have the same differential privacy guarantees as the parameters of the GAN so you can make fake patient data for other researchers to use and they can do almost anything they want with that data because it doesn't come from real people and the differential privacy mechanism gives you clear guarantees on how much the original people's data has been protected that's really interesting actually I haven't heard you talk about that before in terms of fairness I've seen from your AAAI talk how can adversarial machine learning", "start_timestamp": "00:51:48", "end_timestamp": "00:52:21", "start_second": 3108, "end_second": 3141, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3108s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "help models be more fair with respect to sensitive variables yeah there was a paper from Amos Storkey's lab about how to learn machine learning models that are incapable of using specific variables so say for example you wanted to make predictions that are not affected by gender it isn't enough to just leave gender out
of the input to the model you can often infer gender from a lot of other characteristics like say that you have the person's name but you're not told their gender well right if their name", "start_timestamp": "00:52:21", "end_timestamp": "00:52:50", "start_second": 3141, "end_second": 3170, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3141s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "is Ian they're kind of obviously a man so what you'd like to do is make a machine learning model that can still take in a lot of different attributes and make a really accurate informed prediction but be confident that it isn't reverse engineering gender or another sensitive variable internally you can do that using something very similar to the domain adversarial approach where you have one player that's a feature extractor and another player that's a feature analyzer and you want to make sure that the feature", "start_timestamp": "00:52:50", "end_timestamp": "00:53:20", "start_second": 3170, "end_second": 3200, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3170s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "analyzer is not able to guess the value of the sensitive variable that you're trying to keep private right that's yeah I love this approach so with the features you're not able to infer these sensitive variables yeah brilliant it's quite brilliant and simple actually another way I think that GANs in particular could be used for fairness would be to make something like a CycleGAN where you can take data from one domain and convert it into another we've seen CycleGAN turning horses into zebras we've", "start_timestamp":
"00:53:20", "end_timestamp": "00:53:54", "start_second": 3200, "end_second": 3234, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3200s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "seen other unsupervised GANs made by Ming-Yu Liu doing things like turning day photos into night photos I think for fairness you could imagine taking records for people in one group and transforming them into analogous people in another group and testing to see if they're treated equitably across those two groups there's a lot of things that'd be hard to get right to make sure that the conversion process itself is fair and I don't think it's anywhere near something that we could actually use yet but if you could design that", "start_timestamp": "00:53:54", "end_timestamp": "00:54:26", "start_second": 3234, "end_second": 3266, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3234s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "conversion process very carefully it might give you a way of doing audits where you say what if we took people from this group converted them into equivalent people in another group does the system actually treat them how it ought to that's also really interesting you know in popular press and in general in our imagination you think well GANs are able to generate data and you start to think about deepfakes or being able to sort of maliciously generate data that fakes the identity of other people is this something of a concern to you is", "start_timestamp": "00:54:26", "end_timestamp": "00:55:03", "start_second": 3266, "end_second": 3303, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3266s", "title": "Ian Goodfellow: Generative
Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "this something if you look 10 20 years into the future is that something that pops up in your work in the work of the community that's working on generative models I'm a lot less concerned about 20 years from now than the next few years I think there will be a kind of bumpy cultural transition as people encounter this idea that there can be very realistic videos and audio that aren't real I think 20 years from now people will mostly understand that you shouldn't believe something is real just because you saw a video of it people", "start_timestamp": "00:55:03", "end_timestamp": "00:55:34", "start_second": 3303, "end_second": 3334, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3303s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "will expect to see that it's been cryptographically signed or have some other mechanism to make them believe the content is real there's already people working on this like there's a startup called Truepic that provides a lot of mechanisms for authenticating that an image is real they're maybe not quite up to having a state actor try to evade their verification techniques but it's something people are already working on and I think we'll get it right eventually so you think authentication will", "start_timestamp": "00:55:34", "end_timestamp": "00:56:06", "start_second": 3334, "end_second": 3366, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3334s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "eventually win out so being able to
authenticate that this is real and this is not yeah as opposed to GANs just getting better and better or generative models being able to get better and better to where the nature of what is real I don't think we'll ever be able to look at the pixels of a photo and tell you for sure that it's real or not real and I think it would actually be somewhat dangerous to rely on that approach too much if you make a really good fake detector and then someone's able to fool your fake detector and your", "start_timestamp": "00:56:06", "end_timestamp": "00:56:39", "start_second": 3366, "end_second": 3399, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3366s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "fake detector says this image is not fake then it's even more credible than if you've never made a fake detector in the first place what I do think we'll get to is systems that we can kind of use behind the scenes to make estimates of what's going on and maybe not like use them in court for a definitive analysis I also think we will likely get better authentication systems where you know imagine every phone cryptographically signs everything that comes out of it you wouldn't be able to conclusively tell that an", "start_timestamp": "00:56:39", "end_timestamp": "00:57:13", "start_second": 3399, "end_second": 3433, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3399s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "image was real but you would be able to tell somebody who knew the appropriate private key for this phone was actually able to sign this image and upload it to this server at this timestamp so you could imagine maybe you make phones that have the private
keys hardware embedded in them if like a state security agency really wants to infiltrate the company they could probably you know plant a private key of their choice or break open the chip and learn the private key or something like that but it would make it a lot harder for an adversary with", "start_timestamp": "00:57:13", "end_timestamp": "00:57:49", "start_second": 3433, "end_second": 3469, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3433s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "fewer resources to fake things for most of us yeah okay so you mentioned the beer and the bar and the new ideas you were able to come up with this new idea pretty quickly and implement it pretty quickly do you think there are still many such groundbreaking ideas in deep learning that could be developed so quickly yeah I do think that there are a lot of ideas that can be developed really quickly GANs were probably a little bit of an outlier on the whole like one-hour timescale right but just in terms of like low-resource", "start_timestamp": "00:57:49", "end_timestamp": "00:58:23", "start_second": 3469, "end_second": 3503, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3469s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "ideas where you do something really different on the algorithm scale and get a big payback I think it's not as likely that you'll see that in terms of things like core machine learning technologies like a better classifier or a better reinforcement learning algorithm or a better generative model if I had the GAN idea today it would be a lot harder to prove that it was useful than it was back in 2014 because I would need to
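The sign-at-capture scheme Goodfellow describes a moment earlier can be illustrated with Python's standard library. This is a hedged toy: real camera attestation would use an asymmetric signature (e.g. Ed25519) with a hardware-held private key, but the stdlib only offers symmetric HMAC, so a shared secret stands in here and the key value is a made-up placeholder:

```python
import hashlib
import hmac

# Toy sketch of device-side image authentication: the device computes a
# tag over the image bytes plus a timestamp, and a verifier holding the
# key checks it. HMAC is a symmetric stand-in for a real asymmetric
# signature scheme; DEVICE_KEY is a hypothetical placeholder value.

DEVICE_KEY = b"hypothetical-hardware-embedded-key"

def sign_image(image_bytes: bytes, timestamp: str) -> str:
    """Tag binds the image content to the capture timestamp."""
    msg = image_bytes + timestamp.encode("utf-8")
    return hmac.new(DEVICE_KEY, msg, hashlib.sha256).hexdigest()

def verify_image(image_bytes: bytes, timestamp: str, tag: str) -> bool:
    expected = sign_image(image_bytes, timestamp)
    return hmac.compare_digest(expected, tag)  # constant-time compare

photo = b"raw image bytes"
ts = "2019-04-18T12:00:00Z"
tag = sign_image(photo, ts)
authentic = verify_image(photo, ts, tag)        # unmodified image verifies
tampered = verify_image(photo + b"!", ts, tag)  # any edit breaks the tag
```

As the conversation notes, this proves who held the key and when the bytes were signed, not that the pixels depict a real scene; that distinction is why signing complements rather than replaces fake detection.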
get it running on something like ImageNet or CelebA at high resolution you know those take a while to train you couldn't", "start_timestamp": "00:58:23", "end_timestamp": "00:58:55", "start_second": 3503, "end_second": 3535, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3503s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "you couldn't train it in an hour and know that it was something really new and exciting back in 2014 training on MNIST was enough but there are other areas of machine learning where I think a new idea could actually be developed really quickly with low resources what's your intuition about what areas of machine learning are ripe for this yeah so I think fairness and interpretability are areas where we just really don't have any idea how anything should be done yet like for interpretability I don't think we even have the right definitions and", "start_timestamp": "00:58:55", "end_timestamp": "00:59:32", "start_second": 3535, "end_second": 3572, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3535s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "even just defining a really useful concept you don't even need to run any experiments could have a huge impact on the field we've seen that for example in differential privacy that uh Cynthia Dwork and her collaborators made this technical definition of privacy where before a lot of things were really mushy and then with that definition you could actually design randomized algorithms for accessing databases and guarantee that they preserved individual people's privacy in like a mathematical quantitative sense right now we all talk", "start_timestamp": "00:59:32", "end_timestamp":
"01:00:04", "start_second": 3572, "end_second": 3604, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3572s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "a lot about how interpretable different machine learning algorithms are but it's really just people's opinion and everybody probably has a different idea of what interpretability means in their head if we could define some concept related to interpretability that's actually measurable that would be a huge leap forward even without a new algorithm that increases that quantity and also once once we had the definition of differential privacy it was fast to get the algorithms that guaranteed it so you could imagine once we have", "start_timestamp": "01:00:04", "end_timestamp": "01:00:32", "start_second": 3604, "end_second": 3632, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3604s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "definitions of good concepts and interpretability we might be able to provide the algorithms that have the interpretability guarantees quickly to what do you think it takes to build a system with human level intelligence as we quickly venture into the philosophical so artificial general intelligence what do you think I I think that it definitely takes better environments than we currently have for training agents that we want them to have a really wide diversity of experiences I also think it's going to take really a lot of computation it's", "start_timestamp": "01:00:32", "end_timestamp": "01:01:11", "start_second": 3632, "end_second": 3671, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3632s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex 
Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "hard to imagine exactly how much so you're optimistic about simulation simulating a variety of environments is the path forward I think it's a necessary ingredient yeah I don't think that we're going to get to artificial general intelligence by training on fixed datasets or by thinking really hard about the problem I think that the the agent really needs to interact and have a variety of experiences within the same lifespan and today we have many different models that can each do one thing and we tend to train them on one data set or one RL", "start_timestamp": "01:01:11", "end_timestamp": "01:01:48", "start_second": 3671, "end_second": 3708, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3671s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "environment sometimes they're actually papers about getting one set of parameters to perform well in many different RL environments but we don't really have anything like an agent that goes seamlessly from one type of experience to another and and really integrates all the different things that it does over the course of its life when we do see multi agent environments they tend to be there are so many multi environment agents they tend to be similar environments like all of them are playing like an action based video", "start_timestamp": "01:01:48", "end_timestamp": "01:02:19", "start_second": 3708, "end_second": 3739, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3708s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "game we don't really have an agent that goes from you know playing a 
video game to like reading The Wall Street Journal to predicting how effective a molecule will be as a drug or something like that what do you think is a good test for intelligence in your view there have been a lot of benchmarks starting with Alan Turing a natural conversation being a good benchmark for intelligence what would make you Ian Goodfellow sit back and be really damn impressed if a system was able to accomplish something that doesn't take a", "start_timestamp": "01:02:19", "end_timestamp": "01:02:57", "start_second": 3739, "end_second": 3777, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3739s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "lot of glue from human engineers so imagine that instead of having to go to the CIFAR website and download CIFAR-10 and then write a Python script to parse it and all that you could just point an agent at the CIFAR-10 problem and it downloads and extracts the data and trains a model and starts giving you predictions I feel like something that doesn't need to have every step of the pipeline assembled for it it definitely understands what it's doing is AutoML moving in that direction or are you thinking way bigger AutoML has", "start_timestamp": "01:02:57", "end_timestamp": "01:03:35", "start_second": 3777, "end_second": 3815, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3777s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "mostly been moving toward once we've built all the glue can the machine learning system design the architecture really well so we're saying like if something knows how to pre-process the data so that it successfully accomplishes the task then it would
be very hard to argue that it doesn't truly understand the task in some fundamental sense and I don't necessarily know that that's like the philosophical definition of intelligence but that's something that would be really cool to build that would be really useful and would impress me", "start_timestamp": "01:03:35", "end_timestamp": "01:04:05", "start_second": 3815, "end_second": 3845, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3815s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "and would convince me that we've made a step forward in real AI so you give it like the URL for Wikipedia and then the next day expect it to be able to solve CIFAR-10 or like you type in a paragraph explaining what you want it to do and it figures out what web searches it should run and downloads all the necessary ingredients so you have a very clear calm way of speaking no ums easy to edit I've seen comments for both you and I have been identified as both potentially being robots if you have to prove to the world that you are indeed", "start_timestamp": "01:04:05", "end_timestamp": "01:04:46", "start_second": 3845, "end_second": 3886, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3845s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "human how would you do it but I can understand thinking that I'm a robot it's the flip side yeah Turing test I think yeah yeah the prove you're human test I mean I lecture so you have to is there something that's truly unique in your mind I suppose it doesn't go back to just natural language again just being able to so proving that I'm not a robot with today's technology yeah that's pretty straightforward too like my
conversation today hasn't veered off into you know talking about the stock market or something because in my", "start_timestamp": "01:04:46", "end_timestamp": "01:05:24", "start_second": 3886, "end_second": 3924, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3886s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "training data but I think it's more generally trying to prove that something is real from the content alone is incredibly hard that's one of the main things I've gotten out of my GAN research that you can simulate almost anything and so you have to really step back to a separate channel to prove that something is real so like I guess I should have had myself stamped on a blockchain when I was born or something but I didn't do that so according to my own research methodology there's just no way to know at this point so", "start_timestamp": "01:05:24", "end_timestamp": "01:05:53", "start_second": 3924, "end_second": 3953, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3924s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "last question what problem stands out for you that you're really excited about challenging in the near future so I think resistance to adversarial examples figuring out how to make machine learning secure against an adversary who wants to interfere with it and control it is one of the most important things researchers today could solve in all domains in image language driving I guess I'm most concerned about domains we haven't really encountered yet like imagine twenty years from now when we're using advanced AIs to do", "start_timestamp": "01:05:53", "end_timestamp": "01:06:26", "start_second": 3953, "end_second":
3986, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3953s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "things we haven't even thought of yet like if you ask people what are the important problems in security of phones in in like 2002 I don't think we would have anticipated that we're using them for you know nearly as many things as we're using them for today I think it's going to be like that with AI that you can kind of try to speculate about where it's going but really the business opportunities that end up taking off would be hard to predict ahead of time well you can predict ahead of time is that almost anything you can do with", "start_timestamp": "01:06:26", "end_timestamp": "01:06:57", "start_second": 3986, "end_second": 4017, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3986s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "machine learning you would like to make sure that people can't get it to do what they want rather than what you want just by showing it a funny QR code or a funny input pattern and you think that the set of methodology to do that can be bigger than you want domain and that's I think so yeah yeah like one methodology that I think is not not a specific methodology but like a category of solutions that I'm excited about today is making dynamic models that change every time they make a prediction so right now we tend to train models and", "start_timestamp": "01:06:57", "end_timestamp": "01:07:31", "start_second": 4017, "end_second": 4051, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=4017s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": 
"https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "Z6rxFNMGdn0", "text": "then after they're trained we freeze them and we just use the same rule to classify everything that comes in from then on that's really a sitting duck from a security point of view if you always output the same answer for the same input then people can just run inputs through until they find a mistake that benefits them and then they use the same mistake over and over and over again I think having a model that updates its predictions so that it's harder to predict what you're going to get will make it harder for the for an", "start_timestamp": "01:07:31", "end_timestamp": "01:08:02", "start_second": 4051, "end_second": 4082, "url": "https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=4051s", "title": "Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19", "thumbnail": "https://i.ytimg.com/vi/Z6rxFNMGdn0/maxresdefault.jpg"} {"video_id": "n1SXlK5rhR8", "text": "hi there so you may have seen this already there's a cvpr paper called pulse and what it does is it's a method to up sample a pixelated image in a way that makes it look realistic but also that the again down sampled variant matches the original down sampled image so it's kind of a cycle consistency loss together with a again and all in all it's a method to demonstrate how you could do this now this has been trained on this face dataset among others there it was a user Bomb Z that made this into a collapse of", "start_timestamp": "00:00:00", "end_timestamp": "00:00:37", "start_second": 0, "end_second": 37, "url": "https://www.youtube.com/watch?v=n1SXlK5rhR8&t=0s", "title": "[Drama] Yann LeCun against Twitter on Dataset Bias", "thumbnail": "https://i.ytimg.com/vi/n1SXlK5rhR8/maxresdefault.jpg"} {"video_id": "n1SXlK5rhR8", "text": "people could try it out and tweet it this out and as you can see it works pretty nicely it gives pretty nice results on this particular data set but of course people 
started playing around with it and gave fairly funny results like this or that that gets more into the horrible category so you can see these ones I particularly liked being made into the little child so you can see as soon as you get away from the original kind of dataset modality you are going to get these results that are off and people started to notice that so", "start_timestamp": "00:00:37", "end_timestamp": "00:01:21", "start_second": 37, "end_second": 81, "url": "https://www.youtube.com/watch?v=n1SXlK5rhR8&t=37s", "title": "[Drama] Yann LeCun against Twitter on Dataset Bias", "thumbnail": "https://i.ytimg.com/vi/n1SXlK5rhR8/maxresdefault.jpg"} {"video_id": "n1SXlK5rhR8", "text": "here you input Barack Obama and what comes out is a fairly standard Caucasian person someone tweeted out saying this image speaks volumes about the dangers of bias in AI I guess here is where the entire story starts so Yann LeCun weighs in and says ML systems are biased when data is biased this face upsampling system makes everyone look white because the network was pretrained on FlickFaceHQ which mainly contains white people pics train the exact same system on a dataset from Senegal and everyone will look African so this is pointing", "start_timestamp": "00:01:21", "end_timestamp": "00:01:59", "start_second": 81, "end_second": 119, "url": "https://www.youtube.com/watch?v=n1SXlK5rhR8&t=81s", "title": "[Drama] Yann LeCun against Twitter on Dataset Bias", "thumbnail": "https://i.ytimg.com/vi/n1SXlK5rhR8/maxresdefault.jpg"} {"video_id": "n1SXlK5rhR8", "text": "out why this happens namely because the data set is mainly Caucasian people so the results of upsampling are going to be mainly Caucasian people I mean this is like a straightforward explanation of why we're seeing what we're seeing but of course this was not okay and here is where the piling starts as an interjection we have to talk about bias in machine learning technically there is a statistical notion of
bias which has a very rigorous definition and there is the societal definition of bias and these two things even though they're the", "start_timestamp": "00:01:59", "end_timestamp": "00:02:31", "start_second": 119, "end_second": 151, "url": "https://www.youtube.com/watch?v=n1SXlK5rhR8&t=119s", "title": "[Drama] Yann LeCun against Twitter on Dataset Bias", "thumbnail": "https://i.ytimg.com/vi/n1SXlK5rhR8/maxresdefault.jpg"} {"video_id": "n1SXlK5rhR8", "text": "same word they're totally different a machine learning system mainly consists of four different parts there is a data set the model the loss function and the optimization procedure statistical bias means whenever the model the loss or the optimization procedure leads to a situation where the outcome doesn't reflect the distribution of the data that you input this for example is achieved when you regularize your model which means that you put some prior knowledge onto the model you introduce bias and therefore you choose to not", "start_timestamp": "00:02:31", "end_timestamp": "00:03:04", "start_second": 151, "end_second": 184, "url": "https://www.youtube.com/watch?v=n1SXlK5rhR8&t=151s", "title": "[Drama] Yann LeCun against Twitter on Dataset Bias", "thumbnail": "https://i.ytimg.com/vi/n1SXlK5rhR8/maxresdefault.jpg"} {"video_id": "n1SXlK5rhR8", "text": "accurately represent your data distribution regularize it to a more biased distribution that in turn has lower variance we know this as the bias-variance tradeoff it's actually very simple right you have the Ferraris and the Lamborghinis and you want to make a model that predicts the accident probability now it just so happens that the Ferrari drivers are a bit more reckless and they have slightly more accidents and now I train my logistic regression and it tells me okay 60/40 cool but now I train my logistic regression with an l1 penalty and I say", "start_timestamp": "00:03:04", "end_timestamp": "00:03:37", "start_second": 184, "end_second": 217, "url":
"https://www.youtube.com/watch?v=n1SXlK5rhR8&t=184s", "title": "[Drama] Yann LeCun against Twitter on Dataset Bias", "thumbnail": "https://i.ytimg.com/vi/n1SXlK5rhR8/maxresdefault.jpg"} {"video_id": "n1SXlK5rhR8", "text": "I want my model to be you know explainable so I wanted to be sparse I want the least amount of variables to be contributing to it what's the model gonna say the models gonna say Ferrari drivers add Lamborghini drivers good societal bias and machine learning is way different an example for this is when face detection systems work well on Caucasian people but don't work so well faced with people from other Heritage's and these societal biases are in the dataset as young account points out here if you change the dataset", "start_timestamp": "00:03:37", "end_timestamp": "00:04:07", "start_second": 217, "end_second": 247, "url": "https://www.youtube.com/watch?v=n1SXlK5rhR8&t=217s", "title": "[Drama] Yann LeCun against Twitter on Dataset Bias", "thumbnail": "https://i.ytimg.com/vi/n1SXlK5rhR8/maxresdefault.jpg"} {"video_id": "n1SXlK5rhR8", "text": "you'll change these biases notably the societal biases can only be in the data set otherwise you'd have to argue something like logistic regression itself has a preference for white people or something like this now there is a considerable interaction effect between the two but as Jung Lacan points out the actual societal bias of the final system is a direct result of the bias in the dataset and he is very correct if you train that system on a different data set it will exhibit different biases societal bias cannot be", "start_timestamp": "00:04:07", "end_timestamp": "00:04:40", "start_second": 247, "end_second": 280, "url": "https://www.youtube.com/watch?v=n1SXlK5rhR8&t=247s", "title": "[Drama] Yann LeCun against Twitter on Dataset Bias", "thumbnail": "https://i.ytimg.com/vi/n1SXlK5rhR8/maxresdefault.jpg"} {"video_id": "n1SXlK5rhR8", "text": "in the other parts of the machine learning pipeline they can 
serve to exaggerate or mitigate that bias in the data set but they themselves can only be statistically biased and not societally biased but Yann LeCun made the terrible mistake of pinpointing the exact root cause of this problem and not addressing the I guess wider ranging problems in the field as some people perceive it and he shouldn't have to right he pretty clearly says this is why it happens we can solve it by swapping the dataset he doesn't say anything about anything else", "start_timestamp": "00:04:40", "end_timestamp": "00:05:15", "start_second": 280, "end_second": 315, "url": "https://www.youtube.com/watch?v=n1SXlK5rhR8&t=280s", "title": "[Drama] Yann LeCun against Twitter on Dataset Bias", "thumbnail": "https://i.ytimg.com/vi/n1SXlK5rhR8/maxresdefault.jpg"} {"video_id": "n1SXlK5rhR8", "text": "namely he doesn't say that general bias in the field is not a problem he doesn't say that this doesn't harm anyone none of that he simply suggests a solution Jonathan Peck says well yes that's the point ml researchers need to be more careful selecting their data so that they don't encode biases like this and LeCun responds with not so much ml researchers but ml engineers the consequences of bias are considerably more dire in a deployed product than in an academic paper which is also correct this paper was about the method showing", "start_timestamp": "00:05:15", "end_timestamp": "00:05:49", "start_second": 315, "end_second": 349, "url": "https://www.youtube.com/watch?v=n1SXlK5rhR8&t=315s", "title": "[Drama] Yann LeCun against Twitter on Dataset Bias", "thumbnail": "https://i.ytimg.com/vi/n1SXlK5rhR8/maxresdefault.jpg"} {"video_id": "n1SXlK5rhR8", "text": "that this method works on this dataset now Soumith here makes an interesting point which I agree with saying that today ml researchers are inadvertently powering products of a lot of non-AI companies who ignorantly start with a pretrained BERT or ResNet or YOLO from the internet probably ignoring the license
readme and so on which is a valid point right there are going to be people that take this and think oh this is a face upsampler cool I can use that without noting that this is simply an example implementation", "start_timestamp": "00:05:49", "end_timestamp": "00:06:21", "start_second": 349, "end_second": 381, "url": "https://www.youtube.com/watch?v=n1SXlK5rhR8&t=349s", "title": "[Drama] Yann LeCun against Twitter on Dataset Bias", "thumbnail": "https://i.ytimg.com/vi/n1SXlK5rhR8/maxresdefault.jpg"} {"video_id": "n1SXlK5rhR8", "text": "on an example data set so you can argue that there might be some responsibilities of the researchers right here that doesn't make Yann LeCun not correct but I still consider this to be like a fruitful discussion between individuals right here but now we go on this person saying train it on the whole American population with an l2 loss and almost everyone will look white or train it on the whole American population with an l1 loss and more people might look black stop pretending that bias does not also come from algorithmic choices Yann", "start_timestamp": "00:06:21", "end_timestamp": "00:06:52", "start_second": 381, "end_second": 412, "url": "https://www.youtube.com/watch?v=n1SXlK5rhR8&t=381s", "title": "[Drama] Yann LeCun against Twitter on Dataset Bias", "thumbnail": "https://i.ytimg.com/vi/n1SXlK5rhR8/maxresdefault.jpg"} {"video_id": "n1SXlK5rhR8", "text": "LeCun never says it doesn't right LeCun responds now saying the most efficient way to do it though is to equalize the frequencies of the categories of samples during training this forces the network to pay attention to all the relevant features for all the sample categories and training with an l1 instead of an l2 will not even begin to solve the problem I would pretty much argue training with an l1 loss here would exacerbate the problem because the l2 loss is much more sensitive to outliers drawl Sutton says serious", "start_timestamp": "00:06:52", "end_timestamp":
"00:07:21", "start_second": 412, "end_second": 441, "url": "https://www.youtube.com/watch?v=n1SXlK5rhR8&t=412s", "title": "[Drama] Yann LeCun against Twitter on Dataset Bias", "thumbnail": "https://i.ytimg.com/vi/n1SXlK5rhR8/maxresdefault.jpg"} {"video_id": "n1SXlK5rhR8", "text": "question why do you feel that it's important to make this point are you worried that people are going to start suing cycle gang and Lacan says because people should be aware of this problem and know its cause so they can fix it how terrible yawn how terrible you dare pinpoint the exact cause of the problem so that people can fix it the correct thing to do is to point out that everything is problematic so Tim the giver says Jung I suggest you watch me and Emily's tutorial or a number of scholars who are expert in", "start_timestamp": "00:07:21", "end_timestamp": "00:07:53", "start_second": 441, "end_second": 473, "url": "https://www.youtube.com/watch?v=n1SXlK5rhR8&t=441s", "title": "[Drama] Yann LeCun against Twitter on Dataset Bias", "thumbnail": "https://i.ytimg.com/vi/n1SXlK5rhR8/maxresdefault.jpg"} {"video_id": "n1SXlK5rhR8", "text": "this area you can't just reduce harms to dataset bias for once listen to us people from marginalized communities and what we tell you if not now during worldwide protests not sure when so again I feel the argument here is that you can't simply point out that it's the data set bias you must point out the bigger problems which the on account does not ever deny he simply says this particular problem can be solved by switching the data set Nikola LaRue says Jung was in my PhD jury I am indebted for him for everything he taught me but this", "start_timestamp": "00:07:53", "end_timestamp": "00:08:27", "start_second": 473, "end_second": 507, "url": "https://www.youtube.com/watch?v=n1SXlK5rhR8&t=473s", "title": "[Drama] Yann LeCun against Twitter on Dataset Bias", "thumbnail": "https://i.ytimg.com/vi/n1SXlK5rhR8/maxresdefault.jpg"} {"video_id": "n1SXlK5rhR8", 
"text": "constant dismissal of the harms caused directly or indirectly by the m/l community is highly problematic where or when have I dismissed the harm caused by the m/l community I'm pointing out the cause of the harm so it can be fixed you can't fix the harm unless you know what causes it know the roux says causes of the biases are numerous only pointing out data set bias deflects the attention away from the other more pervasive ones that make the whole field of bias in ml many people try to your attention about these issues but", "start_timestamp": "00:08:27", "end_timestamp": "00:08:56", "start_second": 507, "end_second": 536, "url": "https://www.youtube.com/watch?v=n1SXlK5rhR8&t=507s", "title": "[Drama] Yann LeCun against Twitter on Dataset Bias", "thumbnail": "https://i.ytimg.com/vi/n1SXlK5rhR8/maxresdefault.jpg"} {"video_id": "n1SXlK5rhR8", "text": "you kept focus on the data set because the dataset is the problem right here he doesn't dismiss any of the other things he simply says here the data set is the problem if your problem is that it doesn't work as well for non-caucasian people which was never the intent of this the intent of this was to showcase the method I mean imagenet is like 60% dog species and still people trained on it to showcase their image recognition techniques no one training on image net makes a claim that they have solved computer vision for all the", "start_timestamp": "00:08:56", "end_timestamp": "00:09:30", "start_second": 536, "end_second": 570, "url": "https://www.youtube.com/watch?v=n1SXlK5rhR8&t=536s", "title": "[Drama] Yann LeCun against Twitter on Dataset Bias", "thumbnail": "https://i.ytimg.com/vi/n1SXlK5rhR8/maxresdefault.jpg"} {"video_id": "n1SXlK5rhR8", "text": "classes in the world in a fair manner Tim McGee Bru goes on saying I'm sick of this framing tired of it many people have tried to explain many scholars listen to us you can't just reduce the harms caused by ML to dataset bias doesn't do that doesn't do it so 
someone asks her is he engaging in any way with you it's appalling to see that he answers to everybody but you yet maybe there is a conversation going on in private and I don't want to jeopardize it note that Yann LeCun's tweet has 500 retweets 1.9 K likes and comments as far", "start_timestamp": "00:09:30", "end_timestamp": "00:10:06", "start_second": 570, "end_second": 606, "url": "https://www.youtube.com/watch?v=n1SXlK5rhR8&t=570s", "title": "[Drama] Yann LeCun against Twitter on Dataset Bias", "thumbnail": "https://i.ytimg.com/vi/n1SXlK5rhR8/maxresdefault.jpg"} {"video_id": "n1SXlK5rhR8", "text": "as you can scroll to what she responds to with yep but I'm used to white men refusing to engage with black and brown women even on issues of bias that mostly affect us I mean he literally has ignored a whole body of work by people from that demographic hence the statement so not surprised I mean in absence of the fact that an argument should be independent of the person making the argument that is a low blow hardmaru says I respectfully disagree with Yann here as long as progress is benchmarked on biased data such biases", "start_timestamp": "00:10:06", "end_timestamp": "00:10:43", "start_second": 606, "end_second": 643, "url": "https://www.youtube.com/watch?v=n1SXlK5rhR8&t=606s", "title": "[Drama] Yann LeCun against Twitter on Dataset Bias", "thumbnail": "https://i.ytimg.com/vi/n1SXlK5rhR8/maxresdefault.jpg"} {"video_id": "n1SXlK5rhR8", "text": "will also be reflected in the inductive biases of ML systems advancing ml with biased benchmarks and asking engineers to simply retrain models with unbiased data is not helpful I don't disagree with you here I don't think my tweet contradicts your statement which it doesn't people are reading into this because he doesn't conform to the orthodoxy of pointing out that anything and everything is problematic and he pinpoints a particular problem he must be thinking all the wrong things Jeff Dean says this is a clear
example", "start_timestamp": "00:10:43", "end_timestamp": "00:11:15", "start_second": 643, "end_second": 675, "url": "https://www.youtube.com/watch?v=n1SXlK5rhR8&t=643s", "title": "[Drama] Yann LeCun against Twitter on Dataset Bias", "thumbnail": "https://i.ytimg.com/vi/n1SXlK5rhR8/maxresdefault.jpg"} {"video_id": "n1SXlK5rhR8", "text": "here is an illustration that seemingly minor choices in learning algorithms or loss can have significant effects so bias in ML systems is about much more than just avoid data bias ml researchers and practitioners must pay attention to these issues and I think they are and Lacan doesn't say anything against that he says as I point out in my comment to this tweet is much more efficient to correct this kind of bias note that Yann Lacan actually differentiates between the different kinds of biases by equalizing the", "start_timestamp": "00:11:15", "end_timestamp": "00:11:45", "start_second": 675, "end_second": 705, "url": "https://www.youtube.com/watch?v=n1SXlK5rhR8&t=675s", "title": "[Drama] Yann LeCun against Twitter on Dataset Bias", "thumbnail": "https://i.ytimg.com/vi/n1SXlK5rhR8/maxresdefault.jpg"} {"video_id": "n1SXlK5rhR8", "text": "frequencies of categories of samples during training than be hacking the loss function correct because if you hack the loss function you're trying to counter one kind of bias by another kind of bias Meredith Whittaker says this is very racist and even if it recognized non-white people it would be very racist this is Coptic it's designed to allow those with power to surveil and control those with less power diverse training sets aren't going to fix it advocating that we should never build these systems and that's a discussion to be had but", "start_timestamp": "00:11:45", "end_timestamp": "00:12:19", "start_second": 705, "end_second": 739, "url": "https://www.youtube.com/watch?v=n1SXlK5rhR8&t=705s", "title": "[Drama] Yann LeCun against Twitter on Dataset Bias", "thumbnail": 
"https://i.ytimg.com/vi/n1SXlK5rhR8/maxresdefault.jpg"} {"video_id": "n1SXlK5rhR8", "text": "let me break this to you this isn't going to help the cops this isn't actually giving you the face of the person that was downsampled this is simply going to give you the most likely face associated with that downsampled picture given the dataset the algorithm was trained on I don't get this whenever any machine learning algorithm does anything with faces at all people jump up going like this is cop technology well in line with all the broader impact statement advice can't it also be used to find lost children from", "start_timestamp": "00:12:19", "end_timestamp": "00:12:50", "start_second": 739, "end_second": 770, "url": "https://www.youtube.com/watch?v=n1SXlK5rhR8&t=739s", "title": "[Drama] Yann LeCun against Twitter on Dataset Bias", "thumbnail": "https://i.ytimg.com/vi/n1SXlK5rhR8/maxresdefault.jpg"} {"video_id": "n1SXlK5rhR8", "text": "very very bad security camera footage and as I already mentioned this doesn't actually give you back the person on the downsampled image it will give you back the most likely person given the data set so with that I want to conclude this section please stop the witch hunting Yann LeCun made a completely fine tweet here and there's no reason why people should pile on him this hard he doesn't dismiss any of the other problems just because he doesn't mention them and while we all enjoy a good discussion", "start_timestamp": "00:12:50", "end_timestamp": "00:13:22", "start_second": 770, "end_second": 802, "url": "https://www.youtube.com/watch?v=n1SXlK5rhR8&t=770s", "title": "[Drama] Yann LeCun against Twitter on Dataset Bias", "thumbnail": "https://i.ytimg.com/vi/n1SXlK5rhR8/maxresdefault.jpg"} {"video_id": "n1SXlK5rhR8", "text": "where people disagree genuinely is not helpful to accuse him of things he never said or meant I mean where does this all lead the result of this is going to be that small labs that don't have the
resources to collect their own data sets or check for all the possible biases in their models, that are reliant on the data sets that we do have even if they are biased and flawed, will just be disincentivized from publishing their code or actually doing research at all so this as every other additional constraint on research is going to help", "start_timestamp": "00:13:22", "end_timestamp": "00:13:53", "start_second": 802, "end_second": 833, "url": "https://www.youtube.com/watch?v=n1SXlK5rhR8&t=802s", "title": "[Drama] Yann LeCun against Twitter on Dataset Bias", "thumbnail": "https://i.ytimg.com/vi/n1SXlK5rhR8/maxresdefault.jpg"} {"video_id": "n1SXlK5rhR8", "text": "the large corporations with lots of money and maybe that's just my opinion but we should be able to just talk about a problem and the solution to it without always having to make sure that we rattle off all the different things that are and might be wrong according to the canon and big props to Yann LeCun here for holding his own 90% of people by now would probably be like oh yes I'm so sorry I made a not so thoughtful comment blah blah blah props to Yann keep going and with that I conclude this section let me know what", "start_timestamp": "00:13:53", "end_timestamp": "00:14:25", "start_second": 833, "end_second": 865, "url": "https://www.youtube.com/watch?v=n1SXlK5rhR8&t=833s", "title": "[Drama] Yann LeCun against Twitter on Dataset Bias", "thumbnail": "https://i.ytimg.com/vi/n1SXlK5rhR8/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "It's a scientific fact that the hormones of stress downregulate genes and create disease. Long-term effects.
Human beings because of the size of the neocortex, we can turn on the stress response just by thought alone. As we think about our problems we turn on those chemicals. That means then our thoughts could make us sick. So if it's possible that our thoughts could make us sick, then is it possible that our thoughts could make us well? The answer is absolutely yes Everybody welcome to Impact Theory our goal with this show and company is to introduce you to the people and ideas that will help you", "start_timestamp": "00:00:00", "end_timestamp": "00:00:38", "start_second": 0, "end_second": 38, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=0s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "Actually execute on your dreams Alright today's guest is a New York Times bestselling author and one of the most sought-after speakers in the world He's lectured and given advanced workshops in more than 30 countries Across five continents all with the aim of helping people better understand and unlock the power of their mind His expertise is the intersection of the fields of neuroscience Epigenetics and quantum physics and he's partnered with other scientists across multiple disciplines to perform extensive research on the effects of meditation", "start_timestamp": "00:00:38", "end_timestamp": "00:01:09", "start_second": 38, "end_second": 69, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=38s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "Using advanced technologies such as epigenetic testing brain mapping with EEGs and gas-discharge visualization technology.
Through his work He is endeavouring to help advance both the scientific community's and the public at large's understanding of mind-derived health optimization, a topic he covered extensively in his groundbreaking book, You Are the Placebo. His teaching has had such a profound impact on the way that people perceive a wide range of brain related topics around Mindfulness and well-being that he's a faculty member at Quantum University in Hawaii the Omega Institute for Holistic Studies in New York", "start_timestamp": "00:01:09", "end_timestamp": "00:01:44", "start_second": 69, "end_second": 104, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=69s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "And the Kripalu Center for Yoga and Health in Stockbridge, Massachusetts He's also an invited chair of the research committee at Life University in Atlanta As well as a corporate consultant where he delivers his lectures and workshops for businesses So, please help me in welcoming the man who has appeared in such films as Heal, People v. The State of Illusion and Unleashing Creativity The author of the recent book Becoming Supernatural. Dr. Joe Dispenza Thanks for being here So, diving into your world and how you perceive the sense of self and", "start_timestamp": "00:01:44", "end_timestamp": "00:02:23", "start_second": 104, "end_second": 143, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=104s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. 
Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "the way that you marry science to the way that we form memories the way that we live in a perpetual state of reliving our past and things like that It's really, really incredible and I want to dive into the whole notion of you sort of being a habitual construct like what? What is that? What is the habit of you? Well a habit is a redundant set of automatic unconscious thoughts, behaviors and emotions that's acquired through repetition The habit is when you've done something so many times that your body now knows how to do it better than your mind", "start_timestamp": "00:02:23", "end_timestamp": "00:02:55", "start_second": 143, "end_second": 175, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=143s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "So if you think about it people wake up in the morning they Begin to think about their problems Those problems are circuits, memories in the brain, each one of those memories is connected to people and things at certain times and places and if the brain is a record of the past The moment they start their day, they're already thinking in the past. Each one of those memories has an emotion Emotions are the end product of past experiences So the moment they recall those memories of their problems, they all of a sudden feel unhappy, they feel sad, they feel pain", "start_timestamp": "00:02:55", "end_timestamp": "00:03:30", "start_second": 175, "end_second": 210, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=175s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. 
Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "Now how you think and how you feel creates your state of being. So the person's entire state of being when they start their day is in the past. So what does that mean? The familiar past will sooner or later be the predictable future so if you believe that your thoughts have something to do with your destiny and you can't think greater than how you feel, or feelings have become the means of thinking, by the very definition of emotions you're thinking in the past And for the most part you're going to keep creating the same life, so then people grab their cell phone", "start_timestamp": "00:03:30", "end_timestamp": "00:04:03", "start_second": 210, "end_second": 243, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=210s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "They check their WhatsApp. They check their texts. They check their emails. They check Facebook They take a picture of their feet. They post it on Facebook. They tweet something, they do Instagram they check the news and now they feel really connected to everything that's known in their life And then they go through a series of routine behaviors They get out of bed on the same side. They go to the toilet. They get a cup of coffee They take a shower, they get dressed, they drive to work the same way. They do the same things", "start_timestamp": "00:04:03", "end_timestamp": "00:04:28", "start_second": 243, "end_second": 268, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=243s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. 
Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "They see the same people that push the same emotional buttons and that becomes the routine and it becomes like a program So now they've lost their free will to a program and there's no unseen hand doing it to them. So when it comes time to change, the redundancy of that cycle has become a subconscious program. So now 95% of who we are by the time we're 35 years old is a memorized set of behaviors, emotional reactions, unconscious habits, hardwired attitudes, beliefs and perceptions that function like a computer program", "start_timestamp": "00:04:28", "end_timestamp": "00:05:03", "start_second": 268, "end_second": 303, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=268s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "So then a person can say with the five percent of their conscious mind, I want to be healthy I want to be happy. I want to be free but the body's on a whole different program So then how do you begin to make those changes? Well, you have to get beyond the analytical mind because what separates the conscious mind from the subconscious mind is the analytical mind and that's where meditation comes in because you can teach people through practice how to change their brainwaves, slow them down and when they do that", "start_timestamp": "00:05:03", "end_timestamp": "00:05:33", "start_second": 303, "end_second": 333, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=303s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "Properly they do enter the operating system where they can begin to make some really important changes.
So Most people then wait for crisis or trauma or disease or diagnosis, you know, they wait for loss, some tragedy to make up their mind to change and my message is why wait? And you can learn and change in a state of pain and suffering or you can learn and change in a state of joy and inspiration I think right now the cool thing is that people are waking up that's really interesting and where I found the", "start_timestamp": "00:05:33", "end_timestamp": "00:06:03", "start_second": 333, "end_second": 363, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=333s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "the deepest hooks into how powerful this can be for somebody is when you talk about trauma and you've talked about how People experience a traumatic event, but they then basically rehearse it and how that then has this knock-on effect. So, what is that? Why do people find it so hard to get past trauma? Well, the stronger the emotional reaction you have to some experience in your life the higher the emotional quotient, the more you pay attention to the cause and the moment the brain puts all of its attention on the cause", "start_timestamp": "00:06:03", "end_timestamp": "00:06:34", "start_second": 363, "end_second": 394, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=363s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "It takes a snapshot and that's called a memory. So long-term memories are created from very highly emotional experiences.
So what happens then is that people think neurologically within the circuitry of that experience and they feel chemically within the boundaries of those emotions and So when you have an emotional reaction to someone or something most people think that they can't control their emotional reaction Well, it turns out if you allow that emotional reaction, it's called a refractory period, to last for hours or days", "start_timestamp": "00:06:34", "end_timestamp": "00:07:08", "start_second": 394, "end_second": 428, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=394s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "That's called a mood. I say to someone, hey, what's up? They say, I'm in a mood. Well, why are you in a mood? well I had this thing happen to me five days ago and I'm having one long emotional reaction if you keep that same emotional reaction going on for weeks or months That's called temperament. Why is he so bitter? I don't know. Let's ask him. Why is he so bitter? Why are you bitter? Well, I had this thing happen to me nine months ago And if you keep that same emotional reaction going on for years on end that's called a personality trait", "start_timestamp": "00:07:08", "end_timestamp": "00:07:38", "start_second": 428, "end_second": 458, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=428s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. 
Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "And so learning how to shorten your refractory period of emotional reactions is really where that work starts So then people when they have an event what they do is they keep recalling the event because the emotions, the stress hormones, the survival emotions are saying pay attention to what happened Because you want to be prepared if it happens again Turns out most people spend 70% of their life living in survival and living in stress. So they're always anticipating the worst-case scenario based on a past experience and they're literally out of the infinite", "start_timestamp": "00:07:38", "end_timestamp": "00:08:17", "start_second": 458, "end_second": 497, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=458s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "potentials in the quantum field they're selecting the worst possible outcome and they're beginning to emotionally embrace it with fear and they're conditioning their body into a state of fear do that enough times the body has a panic attack without you, you can't even predict it because it's programmed subconsciously So then you say to the person why are you this way? And they'll say I am this way because of this event that happened to me 15 or 20 years ago and what that means from a biological standpoint is that they haven't been able to change since that event", "start_timestamp": "00:08:17", "end_timestamp": "00:08:50", "start_second": 497, "end_second": 530, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=497s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. 
Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "So then the emotions from the experience tend to give the body and the brain a rush of energy So people become addicted to the rush of those emotions and they use the problems and conditions in their life to reaffirm their limitation So at least they can feel something. So now when it comes time to change you say to the person why are you this way? Well, every time they recall the event they're producing the same chemistry in their brain and body as if the event is occurring, firing and wiring the same circuits and", "start_timestamp": "00:08:50", "end_timestamp": "00:09:21", "start_second": 530, "end_second": 561, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=530s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "sending the same emotional signature to the body. Well, what's the relevance behind that? Well your body is the unconscious mind It doesn't know the difference between the experience that's creating the emotion and the emotion that you're creating by thought alone So the body's believing it's living in the same past experience 24 hours a day seven days a week 365 days a year and so then when those emotions influence certain thoughts, and they do, and then those thoughts create the same emotions and those same emotions influence the same thoughts", "start_timestamp": "00:09:21", "end_timestamp": "00:09:52", "start_second": 561, "end_second": 592, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=561s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "Now the entire person's state of being is in the past.
So then the hardest part about change is not making the same choice as you did the day before, period. And the moment you decide to make a different choice get ready because it's going to feel uncomfortable It's going to feel unfamiliar; there's gonna be something. So why does it feel so uncomfortable? Is it because the neurons that fire together wire together? So there's like an easiness to that loop just because literally, and you've talked very eloquently", "start_timestamp": "00:09:52", "end_timestamp": "00:10:23", "start_second": 592, "end_second": 623, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=592s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "about this the way that the neurons connect in the brain how rapidly I've seen you show footage of how rapidly those connections happen, which is pretty incredible Is that what makes it so discomforting for people? I think that the bigger thing is that we keep firing and wiring those circuits they become more hardwired. So there you have a thought and then the program runs but it's the emotion that follows the thought if you have a fearful thought you're gonna feel anxiety the moment you feel anxiety your brain's checking in with your body and saying yeah, you're pretty anxious", "start_timestamp": "00:10:23", "end_timestamp": "00:10:59", "start_second": 623, "end_second": 659, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=623s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "so then you start thinking more corresponding thoughts equal to how you feel, while the redundancy of that cycle conditions the body to become the mind.
So now when it comes time to change the person steps into that river of change and they make a different choice and all of a sudden they don't feel the same way So the body says well you've been doing this for 35 years Well, you're gonna just stop feeling suffering and stop feeling guilty and stop feeling shameful and you're not gonna complain or blame or make excuses", "start_timestamp": "00:10:59", "end_timestamp": "00:11:31", "start_second": 659, "end_second": 691, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=659s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "Or feel sorry for yourself for long? The body's in the unknown so the body says I want to return back to familiar territory so the body starts influencing the mind, then it says start tomorrow, you're too much like your mother. You'll never change. This isn't gonna work for you. This doesn't feel right And so if you respond to that thought as if it's true that same thought will lead to the same choice which will lead to the same behavior, which will create the same experience which will produce the same emotion I want to talk about that notion of", "start_timestamp": "00:11:31", "end_timestamp": "00:12:05", "start_second": 691, "end_second": 725, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=691s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "Give me a little more detail. What do we mean by the body becomes the mind or the unconscious mind? What do you mean by that exactly?
Well, those are two different things your body is your unconscious mind in a sense if you're sitting down and you start thinking about some future worst-case scenario that you're conjuring up in your mind and you begin to feel the emotion of that event your body doesn't know the difference between the event that's taking place in your outer world and what you're creating by emotion or thought alone. So most people then", "start_timestamp": "00:12:05", "end_timestamp": "00:12:41", "start_second": 725, "end_second": 761, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=725s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "They're constantly reaffirming their emotional states So when it comes time to give up that emotion they can say I really want to do it but really the body is stronger than the mind because it's been conditioned that way so the servant now has become the master and the person all of a sudden once they step into that unknown They'd rather feel guilt and suffering because at least they can predict it being in the unknown is a scary place for most people because the unknown is uncertain people say to me. Well, I can't predict my future", "start_timestamp": "00:12:41", "end_timestamp": "00:13:14", "start_second": 761, "end_second": 794, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=761s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "I'm in the unknown and I always say the best way to predict your future is to create it, not from the known but from the unknown. What thoughts do you want to fire and wire in your brain?
what behaviors do you want to demonstrate in one day? The act of mentally rehearsing, closing your eyes and rehearsing the action of what you want, the reaction of what you want: if you're truly present, the brain does not know the difference between what you're imagining and what you're experiencing in the 3D world", "start_timestamp": "00:13:14", "end_timestamp": "00:13:46", "start_second": 794, "end_second": 826, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=794s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "so then you begin to install the neurological hardware in your brain to look like the event has already occurred Now your brain is no longer a record of the past now It's a map to the future and if you keep doing it priming it that way the hardware becomes a software program and who knows you just may start acting like a happy person and then I think the hardest part is to teach our body emotionally what the future will feel like ahead of the actual experience. So, what does that mean? You can't wait for your success to feel empowered. You can't wait for your wealth to feel abundant", "start_timestamp": "00:13:46", "end_timestamp": "00:14:20", "start_second": 826, "end_second": 860, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=826s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. 
Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "you can't wait for your new relationship to feel love or your healing to feel whole I mean that's the old model of reality of cause and effect, you know waiting for something outside of us to change how we feel inside of us and when we feel better inside of us we pay attention to whoever or whatever caused it But what that means then is that in the Newtonian world most people spend their whole life living in lack, waiting for something out there to change. What do you mean, the Newtonian world?", "start_timestamp": "00:14:20", "end_timestamp": "00:14:48", "start_second": 860, "end_second": 888, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=860s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "The Newtonian world is all about the predictable It's all about predicting the future But the quantum model of reality is about causing an effect the moment you start feeling abundant and worthy you are generating wealth the moment you're empowered and feel it you're beginning to step towards your success the moment you start feeling whole your healing begins and when you love yourself and you love all of life you'll create an equal and now you're causing an effect and I think that's the difference between living as a victim in", "start_timestamp": "00:14:48", "end_timestamp": "00:15:20", "start_second": 888, "end_second": 920, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=888s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. 
Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "your world saying I am this way because of this person or that thing or this experience They made me think and feel this way when you switch that around you become a creator of your world and you start saying my thinking and my feeling is changing an outcome in my life And now that's a whole different game and we start believing more that we're creators of reality. So, how do we go from, okay, I have this negative emotion. It's controlling my life. It's got me in this cycle of I think about this emotion which triggers a chemical reaction which trains my body to feel that way which makes it easier and more likely I will do it again and", "start_timestamp": "00:15:20", "end_timestamp": "00:15:53", "start_second": 920, "end_second": 953, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=920s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "so now I'm in this vicious cycle and it's unconscious right and you said does your thinking create your environment or does your environment create your thinking which I thought was really really interesting. So how do we then go from that like mechanistically to begin this visualization process of something that's empowering, it's me in a different state. It's my future self. Is it meditation? What does that look like? If you're not being defined by a vision of the future then you're left with the old memories of the past and you will be predictable in your life", "start_timestamp": "00:15:53", "end_timestamp": "00:16:29", "start_second": 953, "end_second": 989, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=953s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. 
Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "and If you wake up in the morning and you're not being defined by a vision in the future as you see the same people and you go to the same places and you do the exact same thing at the exact same time It's no longer that your personality is creating your personal reality Now your personal reality is affecting or creating your personality Your environment is really controlling how you think and feel, unconsciously, because every person every thing every place every experience has a neurological network in your brain every", "start_timestamp": "00:16:29", "end_timestamp": "00:16:59", "start_second": 989, "end_second": 1019, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=989s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "experience that you have with every person produces an emotion. So some people will use their boss to reaffirm their addiction to judgment They'll use their enemy to reaffirm their addiction to hatred they'll use their friends to reaffirm their addiction to suffering So now they need the outer world to feel something. So to change is to be greater than your environment, to be greater than the conditions in your world, and the environment is that seductive. So then why is meditation the tool? Well, let's sit down. Let's close our eyes. Let's disconnect", "start_timestamp": "00:16:59", "end_timestamp": "00:17:32", "start_second": 1019, "end_second": 1052, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1019s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. 
Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "from your outer environment So if you're seeing less things is less stimulation going to your brain if you're playing soft music or you have earplugs in Less sensory information coming to your brain. So you're disconnecting from environment if you can sit your body down and Tell it to stay like an animal stay right here. I'm gonna feed you when we're done You can get up and check your emails You can do all your texts, but right now you're gonna sit there and obey me So then when you do that properly and the you're not eating anything or smelling anything or tasting anything?", "start_timestamp": "00:17:32", "end_timestamp": "00:18:05", "start_second": 1052, "end_second": 1085, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1052s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "You're not up experiencing and feeling anything. You would have to agree with me that you're being defined by a thought, right? So when the body wants to go back to its emotional past And you become aware that your attention is on that emotion And where you place your attention is where you place your energy? you're siphoning your energy out of the present moment into the past and you become aware of that and You settle your body back down in the present moment because it's saying well, it's eight o'clock You normally get upset because you're in traffic around this time and here you are sitting and we're used to feeling anger and you're off", "start_timestamp": "00:18:05", "end_timestamp": "00:18:40", "start_second": 1085, "end_second": 1120, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1085s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. 
Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "Schedule. Oh, it's 11 o'clock and usually check your emails and judge everybody. Well, the body is looking for that that predictable chemical state every time you become aware that you're doing that and your body is craving those emotions and You settle it back down into the present moment. You're telling the body it's no longer the mind that you're the mind and now your will is Getting greater than the program and if you keep doing this over and over again over and over again over and over again Just like training a stallion or a dog. It's just gonna say", "start_timestamp": "00:18:40", "end_timestamp": "00:19:12", "start_second": 1120, "end_second": 1152, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1120s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "I'm gonna sit and the moment that happens when the body's no longer the mind when it finally surrenders There's a liberation of energy We go from particle to wave from matter to energy and we free ourselves from the chains Of those emotions that keep us in the in the familiar past and we've seen this Thousands of times. In fact, we can actually predict it now on a brain scan. That's so interesting Let's go a little bit harder on Metacognition the notion that you don't have to believe everything you think I love the way that you talk about that. Hmm", "start_timestamp": "00:19:12", "end_timestamp": "00:19:47", "start_second": 1152, "end_second": 1187, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1152s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. 
Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "Yeah, and we have a huge frontal lobe. It's 40% of our entire brain and most people when they have a thought they just think that that's the truth and I think one of my greatest Realizations in my own journey was just because you have a thought it doesn't necessarily mean it's true so if you think 60 to 70 thousand thoughts in one day and we do and 90% of those thoughts are the same thoughts as The day before and you believe that your thoughts have something to do with your destiny Your life's not gonna change very much", "start_timestamp": "00:19:47", "end_timestamp": "00:20:15", "start_second": 1187, "end_second": 1215, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1187s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "Because the same thought leads to the same choice the same choice leads to the same behavior the same behavior creates the same experience and the same experience produces the same motion and so then the act of becoming conscious of this process to to begin to become more aware of How you think how you act in how you feel? It's called metacognition and so then why is that important because the more conscious you become of those unconscious states of mind and body the Less likely you're gonna go unconscious during the day and that thought is not gonna slip by your awareness unchecked", "start_timestamp": "00:20:15", "end_timestamp": "00:20:53", "start_second": 1215, "end_second": 1253, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1215s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. 
Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "because you're aware. It means to know thyself, and the word meditation means to become familiar with So as you become familiar with the thoughts the behaviors and the emotions of the old self You're retiring that old self as you fire and wire new thoughts and condition the body into a new emotional state if you do that Enough times it'll begin to become familiar to you. So it's so important Just like a garden if you're planting a garden, you've got to get rid of the weeds You got to take the plants from the past year and you got to pull them out the rocks that sift to the top that", "start_timestamp": "00:20:53", "end_timestamp": "00:21:29", "start_second": 1253, "end_second": 1289, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1253s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "Are like our emotional blocks they have to be removed that soil has to be tenderized and broken down We have to make room to plant the new garden So primarily we learn the most about ourselves and others when we're uncomfortable because the moment you move into that uncomfortable state normally a program jumps in When that program jumps in, it's because the person doesn't want to be in the present moment and engage it consciously So when you teach people how to do that with a meditative process Turns out that when they're in their life", "start_timestamp": "00:21:29", "end_timestamp": "00:22:01", "start_second": 1289, "end_second": 1321, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1289s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. 
Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "They're less likely to emotionally react they're less likely to be so rigid and believe the thoughts they were thinking they're more aware of when they go unconscious back into a habit and that is what starts the process of change and So we have to unlearn Before we relearn we have to break the habit of the old self before we reinvent a new self We have to prune synaptic connections and sprout new connections. We have to unfire and unwire, and refire and rewire. We have to unmemorize the body and rememorize it to a new mind and a new emotion, deprogram and reprogram that's the act and it's a two-step process", "start_timestamp": "00:22:01", "end_timestamp": "00:22:35", "start_second": 1321, "end_second": 1355, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1321s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "Yeah, I like the way that you call that out as an action There was another thing that you said that I thought was really powerful about how insights themselves are essentially inert. They don't do anything What then do we do with an insight? How do we take a breakthrough moment and make sure that it's not just a breakthrough moment Like I guarantee people watching right now are having like a hundred aha moments for sure That was definitely the case for me as I was researching you and when you said that I was like and that's the danger that", "start_timestamp": "00:22:35", "end_timestamp": "00:23:03", "start_second": 1355, "end_second": 1383, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1355s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. 
Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "You have the AHA and then nothing. Yeah, and it is a danger because then people will shrink back into mediocrity and they'll use the insight to Excuse them from taking a leap. They'll say yeah, you know, I have a chemical imbalance in my brain. Yeah, my father was Really overbearing he was a perfectionist. That's why I am the way I am you know, people come up with stuff to excuse themselves. The insight is Actually giving them permission to stay limited and it's an amazing idea because they'll say to you", "start_timestamp": "00:23:03", "end_timestamp": "00:23:37", "start_second": 1383, "end_second": 1417, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1383s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "And they really want to get over their anxiety. But okay, let's take your ex-husband. Let's put him in a straitjacket Let's duct tape him and shoot him to the moon, you know what I mean. What are you gonna do now? You still have to make those changes. And so then the person's enemy dies or something shifts in their life And that person's gone. They'll find another person to hate. This is just how we function as human beings. We just slide in another reason to feel those emotions. So I think when people start to understand this, you know,", "start_timestamp": "00:23:37", "end_timestamp": "00:24:09", "start_second": 1417, "end_second": 1449, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1417s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. 
Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "I think knowledge is power but knowledge about yourself is self empowerment. So how much of this is really learning to? just bifurcate the world into there's negative emotions that have negative neuro chemistry associated with and you said that in those states if you're living in a perpetual state of stress hormones and things like that illness is like a step away and Then just the other side of that is understanding but there's this whole other side of positive energy which happiness joy Empowerment whatever that you know neurochemical cocktail is but that when you're on that side", "start_timestamp": "00:24:09", "end_timestamp": "00:24:45", "start_second": 1449, "end_second": 1485, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1449s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "Your immune system is more likely to function. Well, like is that Just sort of bringing it down to like a really basal. Yeah, that's sort of one of the biggies Well, let's talk about it in terms of survival or creation As I said 70% of the time people live in stress and living in stress is living in survival now All organisms in nature can tolerate short-term stress, you know a deer gets chased by a pack of coyotes when it out runs the Coyotes it goes back to grazing and the event is over and The definition of stress is when your brain and body are knocked out of balance out of homeostasis", "start_timestamp": "00:24:45", "end_timestamp": "00:25:22", "start_second": 1485, "end_second": 1522, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1485s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. 
Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "The stress response is what the body innately does to return itself back to order. So you're driving down the road Someone cuts you off you jam on the brakes You may give them the finger and then you settle back down and the event is over and boom, now everything's back to normal But what if it's not a predator that's waiting for you outside the cave, but what if it's your coworker? Sitting right next to you and all day long you're turning on those chemicals because they're pushing all your emotional buttons", "start_timestamp": "00:25:22", "end_timestamp": "00:25:53", "start_second": 1522, "end_second": 1553, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1522s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "When you turn on the stress response, and you can't turn it off Now you're headed for a disease because no organism in nature can live in emergency mode for that extended period of time It's a scientific fact that the hormones of stress down regulate genes and create disease long term. It affects human beings because of the size of the neocortex: we can turn on the stress response just by thought alone. We can think about our problems and turn on those chemicals That means then our thoughts Could make us sick So if it's possible that our thoughts could make us sick, is it possible that our thoughts could make us well? The answer is absolutely", "start_timestamp": "00:25:53", "end_timestamp": "00:26:32", "start_second": 1553, "end_second": 1592, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1553s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. 
Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "Yes So then what are the emotions that are connected to survival? Let's name them: anger, aggression, hostility, hatred, competition, fear, anxiety, worry, pain, suffering, guilt, shame, unworthiness, envy, jealousy. Those are all Created by the hormones of stress, and psychology calls them normal human states of consciousness I call those altered states of consciousness So then we tend to remember those traumatic events more because in survival, you better be ready if it happens again, and when one's survival gene is switched on you could have ten really great things that happen to you in your day and", "start_timestamp": "00:26:32", "end_timestamp": "00:27:15", "start_second": 1592, "end_second": 1635, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1592s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "you just have one bad thing that happens and you cannot take your attention off that unhappy thing because The survival gene is switched on it's really interesting How does epigenetics come into play in all this, like what's actually happening? You've talked pretty profoundly about Proteins and like really at a deep level how we're signalling to our genetics to create these kinds of changes What does that actually look like? Well epigenetics, epi means above the gene and Many years ago after the DNA helix was discovered by Watson and Crick", "start_timestamp": "00:27:15", "end_timestamp": "00:27:51", "start_second": 1635, "end_second": 1671, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1635s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. 
Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "They said the blueprints of life, you know, all diseases are created from genes it turns out less than 5% more like 1% of people on the planet are born with a genetic condition like type 1 diabetes or Tay-sachs disease or sickle cell anemia the other 95 to 99 percent Are created by lifestyle and by choices you can take to identical twins Exact same genome one dies at 51. The other one dies at 85 same gene different environment, so All of a sudden they said we lied That was wrong. It's not genes that create disease. It's the environment that signals the gene that creates disease. Well, ok, but", "start_timestamp": "00:27:51", "end_timestamp": "00:28:32", "start_second": 1671, "end_second": 1712, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1671s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "That's not the whole truth too because you could have two people working side by side in the same factory One gets cancer after being exposed to a carcinogenic for 25 years both working for 25 years The other one has no cancer at all. So there must be some internal order That would cause one person to not get it while another one does So is it possible then if? The environment signals the gene and it does and the end product of an experience in the environment is called an emotion Can you signal the gene ahead of the environment by embracing an elevated emotion?", "start_timestamp": "00:28:32", "end_timestamp": "00:29:08", "start_second": 1712, "end_second": 1748, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1712s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. 
Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "We've done the research on this where we measured 7,500 different gene expressions in a group of people it came to an advanced event for four days and we Had them doing a seated meditation a walking meditation a laying down meditation a standing meditation and at the end of four days Just four days The common eight genes that were upregulated two genes to suppress cancer cells and tumor growth Two genes for neurogenesis the growth of new neurons in response to novel experiences and learning the gene that signals stem cells", "start_timestamp": "00:29:08", "end_timestamp": "00:29:44", "start_second": 1748, "end_second": 1784, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1748s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "To go to damaged areas and repair them the gene for oxidative stress was upregulated We started seeing all these genes that are very very healthy to cause the body to flourish Imagine if people were doing that for three months. We also measured telomeres the little Shoestrings on the end of DNA that tell us our biological age. We asked people to Do the work meditation five out of seven days for 60 days Measure their telomeres that determine their biological age sixty days later seventy four percent of the people lengthen their telomeres 40 percent", "start_timestamp": "00:29:44", "end_timestamp": "00:30:20", "start_second": 1784, "end_second": 1820, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1784s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. 
Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "significant change, twenty percent a very remarkable change That means that they got a little bit of their life back if it lengthened by ten percent They got 10% of their life back. That's incredible Before I ask my last question tell these guys where they can find you online Sure. My website is just dr. Joe Dispenza dot-com. You can follow us on Facebook Twitter Instagram We're all over and then my final question. What's the impact that you want to have on the world? I think that the end game for me is to empower people to", "start_timestamp": "00:30:20", "end_timestamp": "00:30:55", "start_second": 1820, "end_second": 1855, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1820s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "Such a degree that they realize that they need less things outside of them to make them happy, less things outside of them to regulate their moods and their behaviors, and that they begin to use the kind of power that we All have access to, to really change the world, to make a difference so that there's more peace, there's more wholeness, there's more connection, that we support and love each other and we serve better. And I think that we have to start, for the most part, if everybody's working on themselves and", "start_timestamp": "00:30:55", "end_timestamp": "00:31:26", "start_second": 1855, "end_second": 1886, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1855s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. 
Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "and Trying doing their best to present the greatest ideal of themselves to the world. I think the world would be a better place. And so That's my passion and I'm witnessing it happening now The more than I ever thought I would was incredible Joe. Thank you so much for being here and amazing having you Guys Go watch this man's videos They are some of the best explanations of what's going on inside the mind that I've ever come across There were of several that I literally have people in my life that I'm going to force to sit down and watch these things", "start_timestamp": "00:31:26", "end_timestamp": "00:32:02", "start_second": 1886, "end_second": 1922, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1886s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "It's just incredible explanations of how you create yourself out of the things you do Habitually the way that you think creates a feeling the way that you feel creates thinking that matches that and then you get in this cycle and that coming down to that personality ultimately being a finite set of patterns in your brain I think is really really illuminating in terms of how we actually experience the world and I think when people understand that that it's within your control that you don't have to believe every thought that you think that you can", "start_timestamp": "00:32:02", "end_timestamp": "00:32:31", "start_second": 1922, "end_second": 1951, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1922s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. 
Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "La9oLLoI5Rc", "text": "step outside of that, that you can leverage metacognition to think about your thinking and Deconstruct and decide what you want to think about and start focusing on that and create an entirely different version of yourself that has new Elevated feelings that's over on the side of positivity, empowering yourself I think it's really incredible and he gets deep into the mechanistic stuff Which I love, you guys will not regret diving deep into this man's world. I think you will get some incredible revelations All right, if you haven't already be sure to subscribe and until next time my friends be legendary. Take care", "start_timestamp": "00:32:31", "end_timestamp": "00:33:03", "start_second": 1951, "end_second": 1983, "url": "https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1951s", "title": "How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/La9oLLoI5Rc/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "[Music] oh hello everybody um I'm really excited to host this panel about the extent to which mobile apps can contain the COVID-19 crisis as you all know WeAreDevelopers is the largest community of software developers in Europe and in doing so we like to produce exciting content for developers which is not just tech content but also content that affects us as a society in a political context and from an economic standpoint at this point I would like to thank fit4internet, an Austrian organization that aims to digitalize the", "start_timestamp": "00:00:00", "end_timestamp": "00:00:48", "start_second": 0, "end_second": 48, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=0s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "Austrian population for 
collaborating with us and making this panel possible and I'm excited to introduce to you the three guests of honor we have today joining us in this panel first and foremost I would like to say hello to Antonella my Hoffler special advisor to the federal Chancellor of Austria head of the strategy unit think Austria co-head of the future operations clearing word that has been set up to plan the host Perona era she's a long-standing senior partner and managing director and now senior advisor", "start_timestamp": "00:00:48", "end_timestamp": "00:01:23", "start_second": 48, "end_second": 83, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=48s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "of the Boston Consulting Group and a member of various supervisory boards welcome Antonella thank you next up I'd like to say hi to Tomas inertness he's the CEO of CEO of ions Telecom our group the largest telecommunications provider in Austria and one of the largest in the sea region hello thanks for having me I told us and last but not least the person I'm going to begin this talk with me a secular country managing director of Accenture Austria it's great to have you here hi and thanks for the invitation so Michael and", "start_timestamp": "00:01:23", "end_timestamp": "00:02:05", "start_second": 83, "end_second": 125, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=83s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "jump right into the topic so Accenture developed the Austrian Corona app how did this project come into being when we received the car back area phytic at the head of the Austrian red course in regards to fight in Kuwait on March 9th and we thought back then together that first of all technology is an ally in 
our fight against this new disease so we thought it's a goodness is to use an app in order to help fight the spread of Corona and therefore we quickly set up a team and within roughly two and a half weeks we were able to design test and", "start_timestamp": "00:02:05", "end_timestamp": "00:02:43", "start_second": 125, "end_second": 163, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=125s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "ROI and f2 the App Store's first with say a very limited set of features but already productive in order to get first insights from production and usage and then we continually improve the app up to the status where we are now where the automatic handshake is working and where we are waiting for Google and Apple to provide further features in order to further improve i'b what were the challenges in developing this app and how are you able to develop it so quickly seeing that Germany is still trying to launch their up quality the challenge is", "start_timestamp": "00:02:43", "end_timestamp": "00:03:21", "start_second": 163, "end_second": 201, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=163s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "better more or less overwhelming first of all the technology we have mobile phones actually not suited for this purpose so bluetooth is only able to mesh you're only able to measure the signal strength for Bluetooth you're not able to measure distance so first of all we had to overcome the technical problems associated and they are still not solve to be very clear I mean the technically say workaround is first of all when measure or at the contact which is relevant for this disease is combined of two parameters which is time and", 
"start_timestamp": "00:03:21", "end_timestamp": "00:03:57", "start_second": 201, "end_second": 237, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=201s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "distance and time is even more than distance if you listen to what scientists in the medical field say and the recommendations by the World Health Organization so time is an important parameter distance can only be approached by the signal strengths of Pluto that's the first issue the second point is that especially with the iOS operating system so with our ever iPhones bluetooth low in low energy mode cannot receive blue other Bluetooth signals in the background so the app has to be in the foreground which is then of", "start_timestamp": "00:03:57", "end_timestamp": "00:04:40", "start_second": 237, "end_second": 280, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=237s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "course very battery consuming and which naturally leads to a limitation of in the usage with iPhones in the current state now as a plan could have made that alliance has built have built an alliance that problem shall be overcome with in the next weeks we hope so that by June we will have the major technology limitations currently in the operating systems we have we are able to cope with them so that's the that's one of the that's a technology view and of course we had significant obstacles as well in saying everything around so", "start_timestamp": "00:04:40", "end_timestamp": "00:05:22", "start_second": 280, "end_second": 322, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=280s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": 
"https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "first of all data protection was a very important goal from the very beginning and of course the inter protection also is a challenge from a technology point of view so we had to implement a lot of additional actions and a lot of mechanisms in order to to ensure that the privacy is is guaranteed within the app and that's an of course an ongoing process as it is with every security measure you take you always learn you always improve and that's where we invest a lot as well so that's the second point the third", "start_timestamp": "00:05:22", "end_timestamp": "00:05:57", "start_second": 322, "end_second": 357, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=322s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "point is then of course we want to have the app work as widely as possible so ideally within the whole world at least within Europe which is currently not possible as there is no standard protocol available for the European Union for example so we need to find standard first which can template be deployed within the whole European Union which is currently not the case so we are working on that standard as well we try to bring our own expertise but also talking and listening to others and their expertise to find a common standard across Europe and", "start_timestamp": "00:05:57", "end_timestamp": "00:06:33", "start_second": 357, "end_second": 393, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=357s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "the last point of course we got a lot of attention actually we were a bit overwhelmed by the press and and media response to that app we were truly surprised that 
it's such a, I would say, politically sensitive topic; we thought it's totally natural that we use technology in order to fight the disease. Okay, you talked about the topic of data protection, which of course is a huge issue for us, so where's the data stored, and how do you guarantee that the data is safe and secure? Well, first of all in regards to data protection we had a lot", "start_timestamp": "00:06:33", "end_timestamp": "00:07:15", "start_second": 393, "end_second": 435, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=393s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "of external reviews, very thorough reviews by NGOs. For example, we have one of the leading NGOs for data protection, noyb (none of your business); Max Schrems and his team is one of the leading data protection organizations in Europe, if not in the whole world. We gave those guys at a very early stage a look at the full source code without any limitations, and they made a review independently, of course, so they weren't paid by us, they did it by themselves on their own, and they came up with a lot of", "start_timestamp": "00:07:15", "end_timestamp": "00:07:56", "start_second": 435, "end_second": 476, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=435s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "recommendations which we implemented, and we are still in exchange with them, so they have access to the source code, we inform them on the changes we do and keep them posted, to ensure that third parties can take a look at it. In the meantime we've also open sourced the whole code to the public, so everybody can take a look at it on GitHub. From a data perspective that's the most important point, because it was so 
intensively and publicly discussed: the contact data is only stored on the", "start_timestamp": "00:07:56", "end_timestamp": "00:08:31", "start_second": 476, "end_second": 511, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=476s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "mobile phone itself, so nobody can access the contact data by any means, and it is stored anonymously, or, technically correct, pseudonymously, so you can't access any personal data in this app, we can exclude that. There is one point where we need to exchange information through a server, which is when we notify people: if somebody wants to issue a warning or report a positive result of a COVID test, then in this case he has to give his telephone number in order to receive a transaction number, to have", "start_timestamp": "00:08:31", "end_timestamp": "00:09:16", "start_second": 511, "end_second": 556, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=511s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "a security mechanism to make sure that it's your mobile phone from which the information was sent. And also when issuing this warning, of course the notification needs to be sent to a server, but again this is done without the detailed contact data, only using anonymous data. How many users have downloaded the app so far? We have roughly 600 K, so 600,000 downloads, and that's where we are. Of course we hope for more, and we hope that the usage will significantly increase once the", "start_timestamp": "00:09:16", "end_timestamp": "00:09:59", "start_second": 556, "end_second": 599, "url": 
"https://www.youtube.com/watch?v=QhjJFUmDX4E&t=556s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "improvements by a cookie and April are available within our app ok and one one more question on that have you considered incentivizing the use of the app so for example the EU politician axial force he suggested some ways how you could positively incentivize people downloading and using the app and be sure that we have tons of ideas on that but it's a very sensitive topic I'll ask Antonella my ideas and how to incentivize the use of the app but there are many good ideas and for example I mean to give you two small ones at least", "start_timestamp": "00:09:59", "end_timestamp": "00:10:37", "start_second": 599, "end_second": 637, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=599s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "we implement the recommendation function so actually this week's of just one or two days ago with the new version we implemented a couple of additional security mechanisms and now you can also recommend the app to your friends so that's a very small feature and there are first to answer vada yes one is to have a campaign start more or less the whole society and that is something we hope that the Austrian Red Cross will put further effort into this okay well thanks that's that's really interesting information I'd like to continue with", "start_timestamp": "00:10:37", "end_timestamp": "00:11:19", "start_second": 637, "end_second": 679, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=637s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "you Antonella could you 
tell us about the work that you're currently doing with Think Austria and with the future operations clearing board with regards to, on the one hand, containing the coronavirus, and on the other hand, ensuring that the Austrian economy hits the ground running once we're in the post-COVID-19 period? Thank you for that. I will focus on the future operations clearing board, because that is looking at exactly the topic that you are asking for. As you can imagine, for us it is paramount to be able to manage", "start_timestamp": "00:11:19", "end_timestamp": "00:11:52", "start_second": 679, "end_second": 712, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=679s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "the so-called dance phase, as we all know in the meantime, after the hammer phase; to manage the dance phase in the best possible way so that we can get to a quick restart. And we are all very much aware that this has an extremely strong impact on the economy overall and on the life of everybody. What we look at in the future operations clearing board are four different areas which we try to triage and to combine. The first area is looking at the management of the health system, and for", "start_timestamp": "00:11:52", "end_timestamp": "00:12:30", "start_second": 712, "end_second": 750, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=712s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "that I will come to. For that, the app is a part of the solution, it's not the whole solution; it would be nice if we just had one app that can solve everything, but we need to find a way to track, trace and test in significant numbers to be able to manage this 
containment. So the first area is managing the infection, and managing it in a very differentiated way, and we'll come to that in more depth. The second area, just for your information, is what we call the provisioning of basic", "start_timestamp": "00:12:30", "end_timestamp": "00:13:11", "start_second": 750, "end_second": 791, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=750s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "services, so looking at the logistics chains, looking at the food chain and at other critical areas like medicines: how are they impacted by possible lockdowns in different countries and in different companies? That is something that we are analytically trying to capture, to see whether we have significant trigger points and breaking points in the overall supply chains. That's the second area. The third area is the economy at large, and that's the economy, it's the financial system, it's the tourism system, it's anything which", "start_timestamp": "00:13:11", "end_timestamp": "00:13:57", "start_second": 791, "end_second": 837, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=791s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "has to do with the workforce, with the impact that it has on economic activity. That is where the focus over time will certainly become more important, because that is where the full impact is visible already now, and we need to create the prerequisites for a quick restart and also for, I would call it, a reset, because we will have certain areas which will become more important and certain areas which will be impacted in a sustained way, let me put it this way, and it's a", "start_timestamp": "00:13:57", 
"end_timestamp": "00:14:33", "start_second": 837, "end_second": 873, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=837s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "certainly we can go in more detail and the last area is the psychosocial impact needless to say people are impacted by what is going on there is a great angst in people there is a lack of there was a lack of security at the beginning particularly the more we manage the particular health element properly the more we will have you know the less you know the burden is on on people and particularly there let me highlight one point what we see is that certain groups of the population are more impacted than others and particularly women and women", "start_timestamp": "00:14:33", "end_timestamp": "00:15:19", "start_second": 873, "end_second": 919, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=873s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "with small children of much impacted by that so we are tracking that and trying to make it possible to take the burden from this from these groups and that many others were really impacted just let me come to the first area because that is where we feel that technology is definitely part of the solution actually apps can help in many areas but particularly you know the the t-t-t-that testing tracking and tracing is one of the critical areas so making sure that we have enough capacity to track and trace when we open the borders when we", "start_timestamp": "00:15:19", "end_timestamp": "00:16:03", "start_second": 919, "end_second": 963, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=919s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": 
"https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "have more tourists where tourists coming in where we have a stronger mixing of population I think is very important and that is why we feel that it's it's critical to have an app that is very much accepted by the population so what do we mean by they're much accepted it has to fulfill three three key criteria it needs to fully comply with the privacy expectations of the population and that we are very clear and I think you know Michael was very explicit you know we have very clear guidelines with it needs to be privacy compliant and it", "start_timestamp": "00:16:03", "end_timestamp": "00:16:44", "start_second": 963, "end_second": 1004, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=963s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "needs to be accepted second thing it's usability now on the usability we all agree that it was a minimal Viable Product what was introduced and I think it's a it's a it was a great proof of speed of bringing it to the market it's still not as function as you know as performing as as we would love it to be the more performing it gets and more usable it gets the higher the acceptance and I think that then people will like to to use it the third thing and that is also mission critical is interoperability because clearly why do we need it we", "start_timestamp": "00:16:44", "end_timestamp": "00:17:26", "start_second": 1004, "end_second": 1046, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=1004s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "need it because we want to make sure that we are able to act once we open the borders so these are three key elements but let me come to the 
fourth essential element: it's individual responsibility. We will never get control of this health crisis, of this infection, if people don't take the individual responsibility of making sure that we manage this in the proper way, and I do not think that we can manage this top-down forever. I don't think that's what we want, and it also makes no sense, it's not what we", "start_timestamp": "00:17:26", "end_timestamp": "00:18:08", "start_second": 1046, "end_second": 1088, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=1046s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "were used to, and it's also not going to work. In general we need to enforce a sense of responsibility if anything, but not enforce tools or other stuff, and that is what we totally think and what we are clearly factoring into our considerations in the future operations board. Under which preconditions, taking us back to the topic of the app, will the corona app have fulfilled its purpose? I think it will fulfill its purpose if it really works in a", "start_timestamp": "00:18:08", "end_timestamp": "00:18:48", "start_second": 1088, "end_second": 1128, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=1088s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "hassle-free mode. I mean usability, as I said, is essential, and I think there are non-debatables; privacy clearly is a non-debatable, we don't need to discuss that. But the usability is the essential thing: it needs to be working in an easy way, and clearly I think we could start to see a broader usage 
once the usability gets better, but then interoperability is the other key problem; that I", "start_timestamp": "00:18:48", "end_timestamp": "00:19:22", "start_second": 1128, "end_second": 1162, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=1128s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "think is really the maker or breaker, because then we can make sure that we move around and we have not only a sense but a real tool for individual security, and for individual control, for self-control; I think this move to responsible behavior through self-control is essential. Okay, what percentage of the Austrian population has to install and use the app in order for it to be effective? I don't think that it's about broad percentages of the total population. I", "start_timestamp": "00:19:22", "end_timestamp": "00:20:03", "start_second": 1162, "end_second": 1203, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=1162s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "think that we need to have this brought into broader usage particularly where the risk of superspreading is highest. We need to see that the risk of being infected, of encountering an infected person, is very different if I'm somewhere in Burgenland, or if I'm at the German-Austrian border in Tyrol, or in Vienna, which has a very high confluence of a lot of people, so I don't think that we should", "start_timestamp": "00:20:03", "end_timestamp": "00:20:49", "start_second": 1203, "end_second": 1249, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=1203s", "title": 
"What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "aim at broad percentages let us try to look at how if it's used where its most needed and that is where the encounters of different of groups from different people more more expected to happen mm-hmm but I mean I did a little bit of research and there's like experts or experts suggesting that 60 to 70 percent of the population should be should have installed and be using that prefer to be effective from what Michael just said were right now around 7% a little bit a little bit below 7% don't you feel that we need to get a", "start_timestamp": "00:20:49", "end_timestamp": "00:21:28", "start_second": 1249, "end_second": 1288, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=1249s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "much higher rate of usage in the next month in order for the project to work no absolutely I think that we need to increase the use of the users rate we need to increase it and and if you and it were clear you know about that it is about making sure that for example companies start to introduce you know to recommend the usage of an app and the people in a company which are usually working together that they also ask their the people that they are working with and and there you know other people that they are interacting", "start_timestamp": "00:21:28", "end_timestamp": "00:22:07", "start_second": 1288, "end_second": 1327, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=1288s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "with to also use it so that there is a bit of a control but I think you know it's not abroad I mean if 
60 to 70 percent of the Austrian population is expected to use it, then we can forget about it. I think we should not define irrationally high bars; we should introduce sensible, de-averaged targets and make sure that it's used where the risk is highest. Okay, now obviously the whole topic of the app has created quite a media frenzy, a bit of a heated discussion on relevant or less relevant topics, and might have been", "start_timestamp": "00:22:07", "end_timestamp": "00:22:53", "start_second": 1327, "end_second": 1373, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=1327s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "taken out of context. One of the topics that the whole discussion focuses on, at least from what I've been reading, is that there are kind of two different aspects of the whole situation. The one aspect is saving lives, because let's face it, it is at the end of the day a virus that causes a lot of deaths, and saving lives is the priority and a responsibility that we need to take care of; but on the other hand we have the topic of freedom of choice, of democracy. What do you think is more important? I mean, freedom of choice,", "start_timestamp": "00:22:53", "end_timestamp": "00:23:38", "start_second": 1373, "end_second": 1418, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=1373s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "democracy is always more important than anything else, that's more important than anything else. But if we look at the discussion, I think that we are mixing a lot of things. You rightly said it is a matter of balancing the needs for health control with other needs, and given that we 
have a very performing data privacy regime in Europe, we are in a good situation, because we have the possibility to create things that are totally compliant with this data regime", "start_timestamp": "00:23:38", "end_timestamp": "00:24:23", "start_second": 1418, "end_second": 1463, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=1418s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "which is extremely essential; it's totally fitting the belief in democracy and in individual free choice that is a pillar in Europe. So as I said, that's non-debatable. But then we need to be also very clear about what it means if people willingly undermine the discussion about these apps because of I don't know what reasons, because it makes no sense, while at the same time the same people broadly use commercial apps with profit-sharing agreements, and then", "start_timestamp": "00:24:23", "end_timestamp": "00:25:11", "start_second": 1463, "end_second": 1511, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=1463s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "which the users of these services don't even know about, and these commercial companies fully capture the data, not for protecting the health of anybody apart from the economic health of the company. So we need to have a serious discussion about what is the right governance for this kind of health and life-saving apps. Clearly, compliance with the privacy rules is essential, and then there are some elements that I feel are very important, like making sure that", "start_timestamp": "00:25:11", "end_timestamp": 
"00:25:56", "start_second": 1511, "end_second": 1556, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=1511s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "open technology and also that you make sure that that they are used in the spirit of individual responsibility I think that's essential okay you mentioned previously that one of the four pillars that the future operations clearing board is working on is the topic of the economy the financial system and tourism so since Austria is heavily dependent on tourism as a pillar of our economic prosperity how do we handle how do we tackle the challenge that tourists are entering the country and they're not using the corona app how do", "start_timestamp": "00:25:56", "end_timestamp": "00:26:34", "start_second": 1556, "end_second": 1594, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=1556s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "we what we can do if we have look I think you know the tourists the individual tourists comes to Austria because he wants to have a fantastic time he wants to be nature wants to experience our beautiful country so that is what what they want they don't come primarily to either to get into quarantine you know or to get or to infect others so I think that if we have if we make sure that we get what Michael has promised to deliver which is a very usable very performing app that helps them control I think that everybody will", "start_timestamp": "00:26:34", "end_timestamp": "00:27:19", "start_second": 1594, "end_second": 1639, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=1594s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} 
{"video_id": "QhjJFUmDX4E", "text": "happily download it and make sure that that he or she is neither risk to others but also that others are not a risk for him or her and I think you know that is what we feel is very important and clearly one can it's our duty to make them aware I think that Michael was mentioning the fact that we need perhaps to to have a more stronger communication campaign at least we need to make them aware that there is a stop Karuna app you know if they don't know about it it's difficult they come from countries where the the app is not yet there so", "start_timestamp": "00:27:19", "end_timestamp": "00:27:57", "start_second": 1639, "end_second": 1677, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=1639s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "how should they know that we have an app I think that we should make them aware we should invite them to use it I think that's that's quite important and and then make sure that people really take their individual responsibility seriously so are you also working on measures to communicate the availability of the app to tourists coming to Austria not not directly I mean this is something that red cross has on top of the mind and and is working on what we are looking at is looking at the processes and looking at", "start_timestamp": "00:27:57", "end_timestamp": "00:28:37", "start_second": 1677, "end_second": 1717, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=1677s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "the prerequisites for TTT for tracking testing tracing so that that is in that is really available in the right way in Austria so that when people come in they can experience experience a really very safe country if we're not 
one of the safest countries in Europe. I would like to touch upon something that Michael said in the beginning: obviously COVID-19 is causing a global crisis, it's affecting nearly every country in the world, so do you think it makes sense to treat a global problem on a local basis? I think you need to treat", "start_timestamp": "00:28:37", "end_timestamp": "00:29:21", "start_second": 1717, "end_second": 1761, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=1717s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "the global problem at a global and at a local level if you want to tackle it, and I think that's true for most of the huge problems that we are facing, not only for COVID but also for the environmental problems. We do not have the right governance for many of these problems, because in a sense you need the global concepts to fight the problem, but then you really need to break it down to an individual and to a local level; you cannot enforce it top-down, it needs to come from the bottom up, and the better the concepts that are developed", "start_timestamp": "00:29:21", "end_timestamp": "00:30:03", "start_second": 1761, "end_second": 1803, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=1761s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "on a global level, the more they will be taken up. And as we know, we live in very different types of countries, and every country will have their own way of communicating with their citizens and making sure that their citizens are made aware about what solutions are best. So clearly we saw the example of China, we saw the example of Singapore, we see the example of Taiwan, and we see the example of Austria or Italy; they are totally different, so we need 
to be mindful about what is working in our specific cultural and political context", "start_timestamp": "00:30:03", "end_timestamp": "00:30:42", "start_second": 1803, "end_second": 1842, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=1803s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "and I think that that's very important. So yes, it's a 'sowohl als auch', as we say in German: clearly you need to work on a global level, but you also need to work on a local level. And are there activities to work on a Europe-wide corona app? Because Germany is working on one, Austria already published its app; are there activities going in this direction? There is a lot of activity on a European level, but I think that at the moment, and Michael can probably say more to this, working bilaterally is", "start_timestamp": "00:30:42", "end_timestamp": "00:31:19", "start_second": 1842, "end_second": 1879, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=1842s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "probably one of the other routes, but I think that we need to start somewhere, and having a working app somewhere is a good starting point. Okay, thank you very much. Thomas, I would like to continue with you, also focusing on a topic that has caused heated discussion. As CEO of A1 Telekom Austria, which is one of the largest telco providers in the CEE region and in Austria, you've provided the government with anonymized data on the movement of groups and individuals, in collaboration with Invenium, which is a spinoff from", "start_timestamp": "00:31:19", "end_timestamp": "00:32:02", "start_second": 1879, "end_second": 1922, "url": 
"https://www.youtube.com/watch?v=QhjJFUmDX4E&t=1879s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "the Technical University of god\u00b4s some insights on how this initiative has contributed to the management and the containment of the current crisis well for sure thanks for the invitation once again first of all let me start with saying that when this crisis started and actually the first lockdown measures were implemented in Austria middle of March we somehow went through a similar experience as marketed because when we were discussing internally how can we sa technology company contribute to the handling of this crisis and when we", "start_timestamp": "00:32:02", "end_timestamp": "00:32:44", "start_second": 1922, "end_second": 1964, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=1922s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "actually went into discussions with the crisis management of the Republic of Austria very soon after we got some significant media backlash where maybe some fair concerns were addressed but much of the issues raised were really based on a fundamental lack of information and when people said we cannot use big data as a means in tackling these crisis I must say I was somehow concerned if not to say shocked to see that the to see the amount of resistance versus using facts data evidence to take valuable decisions in technique this crisis and I", "start_timestamp": "00:32:44", "end_timestamp": "00:33:39", "start_second": 1964, "end_second": 2019, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=1964s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "was 
shocked to see that people don't understand that not every use of data in handling this crisis necessarily implies an infringement of fundamental rights such as privacy let me explain what we do in a mobile network you always need and have so-called signalling information that is the information which you need to set up for example a call of a mobile phone the information you need to know which cell site a mobile phone has to connect itself to it's about controlling the network and in the signaling traffic we", "start_timestamp": "00:33:39", "end_timestamp": "00:34:20", "start_second": 2019, "end_second": 2060, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=2019s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "are already using an ID called the IMSI that's the international mobile subscriber identity which in the systems is attached to the name of a person but in the signalling systems it's only a number of any given cell phone attached to the network we use this information and we immediately anonymize this because we attach to each of these IMSI numbers an anonymized hash which is by the way exchanged every 24 hours so we also don't have any historical individual patterns of customers we collaborate with a spin-off of the Technical University", "start_timestamp": "00:34:20", "end_timestamp": "00:35:12", "start_second": 2060, "end_second": 2112, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=2060s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "of Graz called Invenium which has very sophisticated algorithms using this information to get insights on the mobility of people when I say people I am referring to anonymized aggregated clouds of people with a minimum size of 20 people or more we 
do not have information on any individual movement of people and this is by the way not a use case of this information and that's another learning that people really tend to mix up the different use cases in handling this crisis but what we were interested in and what we're", "start_timestamp": "00:35:12", "end_timestamp": "00:36:01", "start_second": 2112, "end_second": 2161, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=2112s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "providing to the crisis staff of the Republic of Austria is information of the kind of how many people less have moved for example in the inner city of Vienna how many people have not moved beyond a perimeter of 10 kilometers from their assumed home (and we can only assume because we don't have this information) because what we wanted to understand and what at least the crisis management tells us proved to be very helpful to the crisis management is to understand the", "start_timestamp": "00:36:01", "end_timestamp": "00:36:52", "start_second": 2161, "end_second": 2212, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=2161s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "extent of reduction of mobility when the lockdown measures were implemented and the extent of additional movement with every step of lifting these lockdown measures the solution is fully GDPR compliant it has been tested by external auditors even the head of the board of the Austrian so-called Datenschutzbehörde the regulatory body taking care of the implementation and enforcement of the GDPR has confirmed that the solution is GDPR compliant and we believe there is a very valuable piece of information in 
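The anonymization scheme described here (IMSI replaced by a hash that is rotated every 24 hours, plus aggregation to groups of at least 20) can be sketched in code. This is a hypothetical illustration, not the operator's actual implementation: the choice of SHA-256, the salt mechanism, and all function names are assumptions; only the 24-hour rotation and the minimum group size of 20 come from the talk.

```python
import hashlib
import os

# Hypothetical sketch of the scheme described in the talk: each IMSI is
# replaced by a salted hash, and the salt is rotated every 24 hours so no
# long-term individual movement history can be linked. Aggregates below a
# minimum group size (20 in the talk) are suppressed (k-anonymity style).

def daily_pseudonym(imsi: str, daily_salt: bytes) -> str:
    """Pseudonymize an IMSI with a salt that would change every 24 hours."""
    return hashlib.sha256(daily_salt + imsi.encode()).hexdigest()

def aggregate_mobility(events, daily_salt, min_group_size=20):
    """events: iterable of (imsi, cell_id) signalling observations.
    Returns cell_id -> number of distinct devices, dropping any cell
    seen by fewer than min_group_size distinct devices."""
    per_cell = {}
    for imsi, cell in events:
        per_cell.setdefault(cell, set()).add(daily_pseudonym(imsi, daily_salt))
    return {cell: len(ids) for cell, ids in per_cell.items()
            if len(ids) >= min_group_size}

salt = os.urandom(16)  # in the described scheme this would rotate daily
events = [(f"23201{i:010d}", "cell_A") for i in range(25)] + \
         [(f"23201{i:010d}", "cell_B") for i in range(5)]
# cell_B never reaches 20 distinct devices, so it is suppressed
print(aggregate_mobility(events, salt))
```

Because the salt rotates, the same phone hashes to a different pseudonym the next day, which is what prevents building historical per-customer movement patterns while still allowing same-day distinct-device counts.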
assessing the effectiveness of the", "start_timestamp": "00:36:52", "end_timestamp": "00:37:37", "start_second": 2212, "end_second": 2257, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=2212s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "lockdown measures now having said this I fully agree with what Michael and Antonella said before I do believe that especially as technology companies we do have a responsibility to provide technology and we do have a moral obligation to provide technology which helps address this major crisis and even if the solution we are providing would constitute an infringement of fundamental rights I really want to point out the fact that in all these use cases we are using technology to lessen the impact of the infringement on", "start_timestamp": "00:37:37", "end_timestamp": "00:38:21", "start_second": 2257, "end_second": 2301, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=2257s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "fundamental rights because every day we are subject to much more material infringement of our fundamental human rights such as the right to life the right to live a healthy life the right to an adequate standard of living just think about the existential threats to many businesses and many individuals which are at risk every day we are speaking think about the freedom of movement which has been inhibited all across the globe significantly or just the right of peaceful assembly so by using", "start_timestamp": "00:38:21", "end_timestamp": "00:39:02", "start_second": 2301, "end_second": 2342, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=2301s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": 
"https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "technology in order to get back to at least more normal way of life with less infringement of fundamental rights every day and I think it's a moral obligation actually to do this and we should be asking ourselves what we would be asked if he would not prove up I think the last sentence was cut off could you distribute that if you could not provide well if we are asked whether its moral morally defendable to use technology we also have to ask the question the other way around is it more about just morally justifiable not to", "start_timestamp": "00:39:02", "end_timestamp": "00:39:57", "start_second": 2342, "end_second": 2397, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=2342s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "use technology which we have available to take it is crisis and I think that's one of the learnings also of this crisis not only when we talk about apps or mobility insights without technology we would not at all be in a position to handle the crisis we are handling it today think about how we would deal with the situation 10 or 20 years ago or what happened a hundred years ago in the Spanish Flu we would not even understand the DNA of the virus today we will not be able to test it within hours we would not be able to continue working", "start_timestamp": "00:39:57", "end_timestamp": "00:40:40", "start_second": 2397, "end_second": 2440, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=2397s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "in our home offices we would not be able to educate our students and pupils at home he cetera et cetera et cetera so I understand and I take very seriously 
all the concerns also with regards to privacy when we use these pieces of technology but I believe we really need to calibrate our approach to technology and really embrace the fact that with technology we are in much better shape in handling this crisis but this again brings us back to this topic of the ability to save lives but at the same", "start_timestamp": "00:40:40", "end_timestamp": "00:41:25", "start_second": 2440, "end_second": 2485, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=2440s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "time somehow infringing the freedom of choice of the population correct and that's why I believe and I think Antonella has beautifully described the important criteria when using these tools first we need to be clear that we should not mix up use cases and there is a fundamental difference between what we do when we do general assessments on the amount of mobility of large groups of persons versus personalized even if it's anonymous but still personalized contact tracing and when we look into international examples there are other", "start_timestamp": "00:41:25", "end_timestamp": "00:42:17", "start_second": 2485, "end_second": 2537, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=2485s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "examples of voluntary or imposed tracking of quarantine measures which is again another topic but I do believe that in the public debate people tend to put this all in one basket and that's an issue that's to be frank an education issue I don't believe that with these measures you are well advised to take a top-down non-voluntary approach as Antonella has described it 
because especially in crisis mode and we also know it from our businesses it's about changing behaviors", "start_timestamp": "00:42:17", "end_timestamp": "00:43:01", "start_second": 2537, "end_second": 2581, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=2537s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "of large masses of people and that's like a large transformation project in a company it does not work top-down you need some enforcement measures for people who go let's say completely off track and completely disregard all reasonable behavior but for a large amount of people you need to get buy-in and the buy-in requires trust and looking forward trust will be extremely important in handling the crisis also when we talk about handling the economic crisis and we need more trust in", "start_timestamp": "00:43:01", "end_timestamp": "00:43:45", "start_second": 2581, "end_second": 2625, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=2581s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "institutions not less and we will get trust only if we consider design principles like privacy like using anonymized data as much as we can by using decentralized solutions by storing data carefully and especially by relying on the voluntary participation of people and not on obligation and enforcement um I did a bit of research on how you collect the data how you analyze the movements of groups of 20 plus individuals is it correct that you're solely tracking the data of mobile", "start_timestamp": "00:43:45", "end_timestamp": "00:44:35", "start_second": 2625, "end_second": 2675, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=2625s", 
"title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "phones in the telecom master Network yes that's correct however and by the way this use case has been around for some time we also use this data for example to measure how many people are traveling on the train because it's also quite a challenge because train you don't have you know it's not like on the plane where you have dedicated seating or you have physical barriers to enter the Train but as we do know our market share is in given areas and as this model has been calibrated also with external health we", "start_timestamp": "00:44:35", "end_timestamp": "00:45:14", "start_second": 2675, "end_second": 2714, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=2675s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "have at least very high confidence to calculate on the entire basis would it make sense to collaborate with the other two telco operators in Austria to get a more granular they're granular database well we would be open to do so however I do believe that the data we are able to provide the sufficient is sufficiently concise in order to draw similar conclusions today already ok I have a bit of a another example I don't agree with the approach that was taken but if we look at what the Israelis did in mid March of this", "start_timestamp": "00:45:14", "end_timestamp": "00:46:02", "start_second": 2714, "end_second": 2762, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=2714s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "year it became public that Prime Minister Binyamin Netanyahu authorized Israel's Internal Security Agency Shin 
Bet to use the location data of cellphones to help contain the coronavirus so what they did is they actually tracked the movements of individuals who tested positive for Covid-19 and identified the individuals who came in touch with these infected people and these individuals received text messages informing them anonymously of the fact that they had been in touch with an infected person just from a", "start_timestamp": "00:46:02", "end_timestamp": "00:46:34", "start_second": 2762, "end_second": 2794, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=2762s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "technological standpoint would you be able to do this would you be able to build a system that allows us to basically cover the entire user base of the A1 Telekom Austria network anonymously in a safe and private method in order to effectively have the same result as what Michael and his team at Accenture had created with the Corona app no clearly not based on the technology we are using when we do the mobility insights we are using more or less triangulation out of the mobile network we are not using for example any GPS data from the", "start_timestamp": "00:46:34", "end_timestamp": "00:47:13", "start_second": 2794, "end_second": 2833, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=2794s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "cell phones and with that we can see if people move let's say from one community to another one or if they move from Vienna to Linz or whatever but we cannot sufficiently assess whether I would be sitting in the train next to you or you are sitting at the beginning of the train and actually we are not even interested in that information and we don't have 
technologically as I said access to that information I think that's more a topic of the very front okay but combining your data with GPS and Wi-Fi data would", "start_timestamp": "00:47:13", "end_timestamp": "00:48:02", "start_second": 2833, "end_second": 2882, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=2833s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "that be an option and perhaps with Michael's app for us it's not an option because I do believe that the app is a very smart approach I do have the app on my phone myself I agree with Antonella and Michael I think it's quite a long way to go in order to have it let's say fully functional and fully performing but we also have to consider that I think in Austria we have the advantage of having had a very early start we have the advantage of having from my non-expert view the right setup from the beginning", "start_timestamp": "00:48:02", "end_timestamp": "00:48:43", "start_second": 2882, "end_second": 2923, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=2882s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "with a decentralized approach with a voluntary approach with a very transparent approach based on open source so I think we are going in the right direction and I have quite some confidence that at the end of the day it will help us tackle this pandemic and we shouldn't forget this pandemic from all what we hear and know will be around for months if not years and we have now to prepare in order to handle what comes in months and years right so we are not at the end of the tunnel I think we are at the beginning of the tunnel and we", "start_timestamp": "00:48:43", "end_timestamp": "00:49:28", "start_second": 2923, "end_second": 2968, 
"url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=2923s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "have an early start in using technology I think the app and the use case of Traken traces one thing and what we can do in the network in assessing the amount of mobility is another use case but we should mix it - thanks Tomas a question to all of you what are the international kind of best practices that you've seen when it comes to using technology in order to tackle the library if you want I can start because this is what we have been looking into a lot I think it depends you know clearly you can find a lot of interesting", "start_timestamp": "00:49:28", "end_timestamp": "00:50:10", "start_second": 2968, "end_second": 3010, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=2968s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "examples particularly a shoe where they had already the experience the previous Czar's experience and that has has sharpened their view on how to manage pandemics in general and also sharpen the view on how to make the best use of of technology so I think that you know what we saw in South Korea they had both and I think that Thomas was also mentioning it we speaking always of apps but there are so many different apps you know there are symptoms checking apps which are very broadly used in Korea symptom checking apps are very", "start_timestamp": "00:50:10", "end_timestamp": "00:50:53", "start_second": 3010, "end_second": 3053, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=3010s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "interesting because they 
are a first line sort of defense if you want or a first line of information that everyone can take before going into a system that can certainly easily be overburdened by going into full testing and tracking and tracing so symptom tracking apps are broadly used in Korea and I think that is something that is super useful then they have tracking and tracing apps in use but mainly also quarantine surveillance apps which are used not only in Korea but also in Singapore and also in Poland you know that we", "start_timestamp": "00:50:53", "end_timestamp": "00:51:35", "start_second": 3053, "end_second": 3095, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=3053s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "have quite exciting interesting examples you mentioned the Israeli example I think we have to see Israel is a country which is on constant high alert and I think being on constant high alert allows interacting with the population in a very different way you know so we are not on constant high alert as we have seen we need to create the sense of alert so that people can act you know responsibly and as for what is working there there are very interesting tools there but they would not work here so that is the other thing", "start_timestamp": "00:51:35", "end_timestamp": "00:52:16", "start_second": 3095, "end_second": 3136, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=3095s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "what we have been also looking at is the potential use of immunity apps which allow you to fully capture your testing process and the test result on your app so that you have it easily usable I think this would be a very interesting solution 
particularly on a European level or on a global level but particularly you know a European level because then you can really travel and make sure that you you know minimize the risk so that is also something that we've seen and then there are clearly the traffic", "start_timestamp": "00:52:16", "end_timestamp": "00:52:57", "start_second": 3136, "end_second": 3177, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=3136s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "management apps broadly speaking I mean anything which allows you to book things upfront viewing the menus in restaurants without having physical menus be presented I think it's something that is being discussed anything which is sort of touch free touch-free solutions beyond touching your own phone I think that is something that is being used and I think that if we look around we have some countries which have been very active in working with that Australia has introduced a very good tracking and", "start_timestamp": "00:52:57", "end_timestamp": "00:53:39", "start_second": 3177, "end_second": 3219, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=3177s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "tracing app Singapore has used a combination of these apps so these are the countries that we look into but then we need to sort of factor in that it should be used in Europe okay would you like to add anything Tomas or Michael I think there was already a very comprehensive summary of technologies available by Antonella but what I really would like to reiterate is that I think actually with the two solutions used in Austria you have mentioned we are at least one of the leaders in Europe with 
the", "start_timestamp": "00:53:39", "end_timestamp": "00:54:22", "start_second": 3219, "end_second": 3262, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=3219s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "stop Corona app with I think as I said earlier a which was taking technology as well as in timing or those on our side I must say after the first I should have put his surprise in the public attention our initiative has a stake we got a lot of international requests actually also from other operators who by the way have similar solutions in place but many of them do not prove a creators hours to us and here and by the way I was also invited to a court with a commissioner Kato with the ambition to have similar use cases on", "start_timestamp": "00:54:22", "end_timestamp": "00:55:16", "start_second": 3262, "end_second": 3316, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=3262s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "European level because I agree with what was said before interoperability in all what we do will be key because especially in Europe with our small structures we need to open up the borders as soon as possible and if technology can help you it's for sure - so I think we in this respect we are also leading to take you within Europe in let me just add one thing thanks Thomas also for the acknowledgments that we are leading in Europe I think helps in speaking more general is health is one of the most under penetrated areas", "start_timestamp": "00:55:16", "end_timestamp": "00:55:56", "start_second": 3316, "end_second": 3356, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=3316s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": 
"https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "for the use of technology for good so I truly hope that we see a push towards the use of technology in health chain in in a more general way such as using telemedicine for to give one example and and for a couple of other use cases where we could use technology in a far better way than we do today based on say resistance in society where technology really could make a difference and that's what at least I hope that we can catch up on that and help to improve the overall situation of the health systems with technology but", "start_timestamp": "00:55:56", "end_timestamp": "00:56:40", "start_second": 3356, "end_second": 3400, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=3356s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "may I just build on something that told us I said before I think the critical thing is really making sure that the population appreciates truly the value of technology in liberating us in making us freer in giving us our individual freedom of movement freedom of speech freedom of everything you know and the technology can help absolutely but we need to probably invest significantly more educating people because there is a lot of mixing up things you know and in general suspiciousness big suspiciousness towards certain certain", "start_timestamp": "00:56:40", "end_timestamp": "00:57:29", "start_second": 3400, "end_second": 3449, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=3400s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "you know solutions that are very very important for for the individuals and for the society for the collective solution of this of the problems and i think that that's 
where we jointly need to act and make sure that particularly you know the younger generations just have a full understanding of where the risks are and where the risks are not and that they develop a good sense of trust and mistrust and not just you know a generalized mistrust towards certain people and total trust towards others", "start_timestamp": "00:57:29", "end_timestamp": "00:58:13", "start_second": 3449, "end_second": 3493, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=3449s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "QhjJFUmDX4E", "text": "and I think not going down to naivety yeah and I think that's what we need to be careful about okay well I think we're slowly coming to an end I would really like to thank the three of you for your great contributions I think it's great that you're taking such a helicopter perspective and tackling so many topics at the same time also thanks for clarifying the data protection and the data privacy aspects that I think are very important to the population and that you've also demonstrated and we can confirm this", "start_timestamp": "00:58:13", "end_timestamp": "00:58:53", "start_second": 3493, "end_second": 3533, "url": "https://www.youtube.com/watch?v=QhjJFUmDX4E&t=3493s", "title": "What extent can mobile apps solve the COVID-19 crisis?", "thumbnail": "https://i.ytimg.com/vi/QhjJFUmDX4E/maxresdefault.jpg"} {"video_id": "2lkUNDZld-4", "text": "hi there today we'll look at Big Self-Supervised Models are Strong Semi-Supervised Learners by Ting Chen, Simon Kornblith, Kevin Swersky, Mohammad Norouzi and Geoffrey Hinton of Google Brain so this paper on a high level it's also known as SimCLRv2 demonstrates that if you want to do semi-supervised learning you're very well served by starting out with self-supervised learning and then doing fine-tuning much 
like NLP models do rather than the kind of semi-supervised approach that image tasks had so far and they present", "start_timestamp": "00:00:00", "end_timestamp": "00:00:40", "start_second": 0, "end_second": 40, "url": "https://www.youtube.com/watch?v=2lkUNDZld-4&t=0s", "title": "Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/2lkUNDZld-4/maxresdefault.jpg"} {"video_id": "2lkUNDZld-4", "text": "this SimCLRv2 which is an improvement over the SimCLR approach to self-supervised pre-training and they demonstrate it outperforms a lot of the baselines alright so if you like content like this don't forget to share it out and leave a like and tell me what you think in the comments so this paper um it is sort of a clubbing together of different things so they present this new method SimCLRv2 which is a modification of SimCLR and we'll go over that but they also try to", "start_timestamp": "00:00:40", "end_timestamp": "00:01:19", "start_second": 40, "end_second": 79, "url": "https://www.youtube.com/watch?v=2lkUNDZld-4&t=40s", "title": "Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/2lkUNDZld-4/maxresdefault.jpg"} {"video_id": "2lkUNDZld-4", "text": "make a scientific claim namely that somehow bigger models are better for this pathway of learning and we'll try to untangle all of these things so first of all we're in the semi-supervised learning regime right here semi-supervised basically means that you have a data set and you only have labels for a part of that data set so this could be like here at the bottom 10% or so because labels might be expensive to get and so you only have a few of them but you have much more data that's unlabeled now sometimes this problem is formulated as follows", "start_timestamp": "00:01:19", "end_timestamp": "00:02:02", "start_second": 79, "end_second": 122, 
"url": "https://www.youtube.com/watch?v=2lkUNDZld-4&t=79s", "title": "Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/2lkUNDZld-4/maxresdefault.jpg"} {"video_id": "2lkUNDZld-4", "text": "problem is formulated as this here is your data set and then this here is like a different data set but one that's close enough such that you can learn from it and that's usually in NLP you'll have your data set is like a sentiment classification task but you have all of Wikipedia that is not labeled but it's just text so you can sort of pre-train on it in this case we'll be in a situation where we'll artificially construct a small data set so this entire thing here is going to be the image net data set and this right", "start_timestamp": "00:02:02", "end_timestamp": "00:02:38", "start_second": 122, "end_second": 158, "url": "https://www.youtube.com/watch?v=2lkUNDZld-4&t=122s", "title": "Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/2lkUNDZld-4/maxresdefault.jpg"} {"video_id": "2lkUNDZld-4", "text": "here is going to be our labelled portion like we have labels now usually one has labels for image net as well but we artificially restrict ourselves to simulate a situation where we have lots of data and we only have a fixed budget so we can only because to obtain labels often times you have to ask humans right to label images and let's say we're a company and we've collected this big data set but we only have like maybe 500 bucks on Amazon Mechanical Turk and we only managed to get a very small subset labeled now we're in the regime", "start_timestamp": "00:02:38", "end_timestamp": "00:03:20", "start_second": 158, "end_second": 200, "url": "https://www.youtube.com/watch?v=2lkUNDZld-4&t=158s", "title": "Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)", "thumbnail":
"https://i.ytimg.com/vi/2lkUNDZld-4/maxresdefault.jpg"} {"video_id": "2lkUNDZld-4", "text": "of semi-supervised learning ok this is slightly different from what NLP does and as I said in NLP you usually assume you have different data sets the large one being the different distribution and in this semi-supervised regime you often assume that it is actually the same data distribution but you only have labels for some of them but there should be a fair bit of overlap between the two things so I've recently made a video about OpenAI's Image GPT that kind of goes in the same direction as this work right here that basically says", "start_timestamp": "00:03:20", "end_timestamp": "00:03:57", "start_second": 200, "end_second": 237, "url": "https://www.youtube.com/watch?v=2lkUNDZld-4&t=200s", "title": "Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/2lkUNDZld-4/maxresdefault.jpg"} {"video_id": "2lkUNDZld-4", "text": "pre-training on unlabeled data like this whole data set without the labels can be a very good pre-conditioner for fine-tuning later and this paper says the same thing so basically in the good old days what you would do is you would devise a method that takes in a mini batch and in the mini batch you have your data samples and then some of them would be labeled right here you'd have Y and here you'd have a Y but most of them would be not labeled and you'd", "start_timestamp": "00:03:57", "end_timestamp": "00:04:36", "start_second": 237, "end_second": 276, "url": "https://www.youtube.com/watch?v=2lkUNDZld-4&t=237s", "title": "Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/2lkUNDZld-4/maxresdefault.jpg"} {"video_id": "2lkUNDZld-4", "text": "have like some sort of loss function that would put special weight on the ones that are labeled or somehow handle
these ones that are unlabeled in a way you might be doing some sort of a consistency loss such that if they are very near neighbors to these in the feature space they should have similar labels or things like this so these semi-supervised methods they basically try to solve the problem at once while taking data that is labeled and not labeled this paper goes into a different direction this paper", "start_timestamp": "00:04:36", "end_timestamp": "00:05:10", "start_second": 276, "end_second": 310, "url": "https://www.youtube.com/watch?v=2lkUNDZld-4&t=276s", "title": "Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/2lkUNDZld-4/maxresdefault.jpg"} {"video_id": "2lkUNDZld-4", "text": "says first, it's actually three stages right here and they have a diagram so I don't need to draw they have a three stage approach three stages the one on the left is unsupervised pre training so they say let's forget about the labels right now even like your unlabeled data so even the data where we have the labels let's forget about the labels and let's just do unsupervised pre-training now unsupervised pre training in this kind of setting is also known as self supervised pre training and this first stage is done using a", "start_timestamp": "00:05:10", "end_timestamp": "00:05:50", "start_second": 310, "end_second": 350, "url": "https://www.youtube.com/watch?v=2lkUNDZld-4&t=310s", "title": "Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/2lkUNDZld-4/maxresdefault.jpg"} {"video_id": "2lkUNDZld-4", "text": "contrastive loss and that's very similar to SimCLR, to this contrastive loss so what you'll do and they describe it very very well here so what you'll do is given a randomly sampled mini batch of images each image is augmented twice using random crop color distortion and Gaussian blur creating two views of the same
example okay so you have an image in your mini batch each image you take and you make two versions of it and each version you crop, a real random crop, somewhere so version one could be random cropped here version two could be", "start_timestamp": "00:05:50", "end_timestamp": "00:06:26", "start_second": 350, "end_second": 386, "url": "https://www.youtube.com/watch?v=2lkUNDZld-4&t=350s", "title": "Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/2lkUNDZld-4/maxresdefault.jpg"} {"video_id": "2lkUNDZld-4", "text": "random cropped here and then you put some Gaussian blur on it and so on so, as you can see, random crop color distortion Gaussian blur so what you want is two different versions of the same image each of these versions has been augmented in a different way cropped in a different way blurred in a different way such that it's two slightly different versions of the same image and now you want to enforce you want to put this through your network so ultimately as you can see on the right side here what you want", "start_timestamp": "00:06:26", "end_timestamp": "00:07:04", "start_second": 386, "end_second": 424, "url": "https://www.youtube.com/watch?v=2lkUNDZld-4&t=386s", "title": "Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/2lkUNDZld-4/maxresdefault.jpg"} {"video_id": "2lkUNDZld-4", "text": "to end up with is a network and then okay we'll forget about this right now what you want to train is this network right here actually including these projection layers we'll get to them later this is the network that you want to train so you take your unlabeled data you take an image you'd make two versions of it and you put those through the network right until the end right here so you'll get Z 1 Z 2 these are the outputs of the network for the two images and then what you want to do is you
want to take another image that's", "start_timestamp": "00:07:04", "end_timestamp": "00:07:45", "start_second": 424, "end_second": 465, "url": "https://www.youtube.com/watch?v=2lkUNDZld-4&t=424s", "title": "Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/2lkUNDZld-4/maxresdefault.jpg"} {"video_id": "2lkUNDZld-4", "text": "not this image and also put it through the network maybe also augment it first and then you have Z 3 so now you have the outputs of 2 things that are supposed to come from the same image and one thing that's supposed to come from a different image and now your loss is simply going to be make those two things close together and push those two things apart or those 3 actually so the loss and this is the contrastive loss of self supervised learning as you know you don't need any labels right here you simply say the things that come from the", "start_timestamp": "00:07:45", "end_timestamp": "00:08:24", "start_second": 465, "end_second": 504, "url": "https://www.youtube.com/watch?v=2lkUNDZld-4&t=465s", "title": "Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/2lkUNDZld-4/maxresdefault.jpg"} {"video_id": "2lkUNDZld-4", "text": "same image should be close together and the things that come from different images should be far apart and this relies heavily on these data augmentations that you do right here they also employ some other tricks like the momentum encoder from MoCo, from momentum contrast, and so on but this is the main part so you can pull a lot of strings here to get like another percent of performance but ultimately they want the similarity of Zi and ZJ which are the outputs of the same image to be close together and", "start_timestamp": "00:08:24", "end_timestamp": "00:09:03", "start_second": 504, "end_second": 543, "url": "https://www.youtube.com/watch?v=2lkUNDZld-4&t=504s",
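The pull-together/push-apart objective described here is, in SimCLR, the NT-Xent loss. A minimal numpy sketch, assuming a batch where rows 2k and 2k+1 are the two views of image k (the batch layout and temperature are illustrative choices, not the paper's exact setup):

```python
import numpy as np

def nt_xent(z, temperature=0.5):
    """Contrastive (NT-Xent-style) loss: rows 2k and 2k+1 of z are the
    embeddings of two views of the same image; every other row in the
    batch acts as a negative that gets pushed away."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # compare on the unit sphere
    sim = z @ z.T / temperature                       # all pairwise similarities
    np.fill_diagonal(sim, -np.inf)                    # never contrast a view with itself
    n = z.shape[0]
    pos = np.arange(n) ^ 1                            # partner index: 1,0,3,2,...
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(n), pos].mean()        # positive pair should win the softmax

rng = np.random.default_rng(1)
loss_matched = nt_xent(np.repeat(rng.normal(size=(4, 16)), 2, axis=0))  # perfect pairs
loss_random = nt_xent(rng.normal(size=(8, 16)))                         # no pair structure
```

Matched pairs give a visibly lower loss than unrelated embeddings, which is exactly the gradient signal that shapes the representation without any labels.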
"title": "Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/2lkUNDZld-4/maxresdefault.jpg"} {"video_id": "2lkUNDZld-4", "text": "then this down here they want to be far apart Zi with Z K where K is all the other images okay you can do this in a mini-batch fashion so this is self supervised learning and the reason why you do this is you don't need labels and we know it tends to give very very good representations so what this network here will learn will be very good representations with this self supervised loss; why the contrastive loss for example gives such good performance, there have been papers recently that modify the loss and so on", "start_timestamp": "00:09:03", "end_timestamp": "00:09:52", "start_second": 543, "end_second": 592, "url": "https://www.youtube.com/watch?v=2lkUNDZld-4&t=543s", "title": "Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/2lkUNDZld-4/maxresdefault.jpg"} {"video_id": "2lkUNDZld-4", "text": "but it's not super well understood yet but if you do it like this the network here will give you already very very good representations and we know this because we can take a network like this and then simply train a linear classifier on top of that on a data set and achieve very very good performance and mind you you have trained it with unlabeled data right so the network has never been trained to solve like image net classification it has simply been trained to look at the pictures and determine if you know", "start_timestamp": "00:09:52", "end_timestamp": "00:10:26", "start_second": 592, "end_second": 626, "url": "https://www.youtube.com/watch?v=2lkUNDZld-4&t=592s", "title": "Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/2lkUNDZld-4/maxresdefault.jpg"} {"video_id": "2lkUNDZld-4", "text": "two versions
of a picture come from the same picture or from different pictures and now if you simply train a linear classifier on top of these representations you're doing extremely well already so we know these representations they actually learn something about these images so that's the first part then stage 2 let's cancel all of that stage 2 is you want to do supervised fine tuning now you already see that the arrow here is not coming out of this task agnostic big CNN, the arrow is actually coming out of those", "start_timestamp": "00:10:26", "end_timestamp": "00:11:02", "start_second": 626, "end_second": 662, "url": "https://www.youtube.com/watch?v=2lkUNDZld-4&t=626s", "title": "Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/2lkUNDZld-4/maxresdefault.jpg"} {"video_id": "2lkUNDZld-4", "text": "those yellow boxes and the yellow boxes are these projection heads so in the original SimCLR paper what they did was they originally wanted to train this network right here this is like a ResNet 50 it's pretty standard in these kinds of self supervised approaches and so on, or these few label approaches, to train a standardized network and this is like a ResNet 50 so in the original SimCLR paper they said we want to make ResNet 50 as strong as possible but in order to do this loss
these are just fully connected layers that compress the representation down to that and once we're done with the unsupervised returning we're going to throw those away right and this ResNet is the thing that we really care about now here they claim okay it actually works better and", "start_timestamp": "00:11:40", "end_timestamp": "00:12:18", "start_second": 700, "end_second": 738, "url": "https://www.youtube.com/watch?v=2lkUNDZld-4&t=700s", "title": "Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/2lkUNDZld-4/maxresdefault.jpg"} {"video_id": "2lkUNDZld-4", "text": "they have experiments to prove this or to show this if you use one if you actually leave one of these layers here so in the end they I guess they converge on three projection head layers and then they only throw away the top two and like they make this big deal out of the fact where you know I can just call I can just call this part right here now the encoder and I don't so I don't know exact like I don't see the giant deal here like you've just made your network one layer bigger and now you consider that to be your encoder and the", "start_timestamp": "00:12:18", "end_timestamp": "00:12:58", "start_second": 738, "end_second": 778, "url": "https://www.youtube.com/watch?v=2lkUNDZld-4&t=738s", "title": "Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/2lkUNDZld-4/maxresdefault.jpg"} {"video_id": "2lkUNDZld-4", "text": "projection head is now two layers and that will be much easier than calling the projection head three layers but we leave one layer and we train from the middle layers in any case they have this layer additional layer right here compared to the old sim clear and then the representation of that goes into supervised fine-tuning now this is pretty easy this is exactly what it sounds like so now you use only only the dataset that has labels so the part 
of the data set that has labels and you do the fine tuning and fine tuning is", "start_timestamp": "00:12:58", "end_timestamp": "00:13:29", "start_second": 778, "end_second": 809, "url": "https://www.youtube.com/watch?v=2lkUNDZld-4&t=778s", "title": "Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/2lkUNDZld-4/maxresdefault.jpg"} {"video_id": "2lkUNDZld-4", "text": "simply supervised learning you train this network in a supervised fashion on that small fraction of data that has class labels and that already performs pretty well and they show this in experiments but then you can go a step further and do what's known as distillation or self-training and what's distillation or self-training it's so distillation is when you have a network that you call the teacher Network and that network has been trained to do some classification maybe into three classes pretty pretty well okay but now this is", "start_timestamp": "00:13:29", "end_timestamp": "00:14:12", "start_second": 809, "end_second": 852, "url": "https://www.youtube.com/watch?v=2lkUNDZld-4&t=809s", "title": "Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/2lkUNDZld-4/maxresdefault.jpg"} {"video_id": "2lkUNDZld-4", "text": "very large and you want maybe a smaller model so you just want like this tiny model because you want to ship it on a mobile device right but it's also supposed to do this and you know that if you just directly train this which is called the student model it doesn't perform as well as the teacher model there is a better way if you have the teacher model you can sort of transfer the knowledge to the student model you can distill the knowledge and how do you do that you do that by so what would you do in supervised training in supervised", "start_timestamp": "00:14:12", "end_timestamp": "00:14:44", "start_second": 852, "end_second": 884, "url": 
"https://www.youtube.com/watch?v=2lkUNDZld-4&t=852s", "title": "Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/2lkUNDZld-4/maxresdefault.jpg"} {"video_id": "2lkUNDZld-4", "text": "training you would take an image put it in and then put the label that comes along with the image you put it up here and you compare the output to the label and that gives you the loss function right now you do that right here if you distill you put the image into both now the teacher is already trained so its output will be a distribution over classes it won't be a single label it will be like okay 90% class 1 10% class 2 0 % class 3 something like this and now you take this as like a pseudo label this entire distribution and you put it", "start_timestamp": "00:14:44", "end_timestamp": "00:15:25", "start_second": 884, "end_second": 925, "url": "https://www.youtube.com/watch?v=2lkUNDZld-4&t=884s", "title": "Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/2lkUNDZld-4/maxresdefault.jpg"} {"video_id": "2lkUNDZld-4", "text": "here and you compare the output of the student to that of the teacher and that's your loss function so this kind of the teacher might have learned to put some nuance into the classification well I'm pretty sure this is class one but I'm not a hundred percent sure and it can transfer that knowledge to the student and that makes the student better than had you just trained it from the beginning from from with just the labels right so this is distillation and you can do this even what they call self distillation here or self-training so", "start_timestamp": "00:15:25", "end_timestamp": "00:16:01", "start_second": 925, "end_second": 961, "url": "https://www.youtube.com/watch?v=2lkUNDZld-4&t=925s", "title": "Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)", "thumbnail": 
"https://i.ytimg.com/vi/2lkUNDZld-4/maxresdefault.jpg"} {"video_id": "2lkUNDZld-4", "text": "apparently this even helps if the teacher is if the student model is the same as the teacher model now why does it help in this case and I think it is not exactly the case in this case because they always say their teacher model has this extra projection layer right and then the student model doesn't have that even if they do self-training but why does it help in this case I mean it's it's kind of shocking and I'm pretty sure it helps in any case but in this particular case it helps because now you're using the unlabeled data", "start_timestamp": "00:16:01", "end_timestamp": "00:16:36", "start_second": 961, "end_second": 996, "url": "https://www.youtube.com/watch?v=2lkUNDZld-4&t=961s", "title": "Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/2lkUNDZld-4/maxresdefault.jpg"} {"video_id": "2lkUNDZld-4", "text": "again so you have a teacher model and the teacher model is trained first using unsupervised like this is the teacher model right here using unsupervised training then the teacher model is further fine-tuned on the small data right so it is now already pretty good at the task but how can you get a student model that's even better than the teacher model it's by using again this unlabeled that you have this giant amount of data so what you'll do is you take an image from the unlabeled data and you ask the teacher model teacher", "start_timestamp": "00:16:36", "end_timestamp": "00:17:12", "start_second": 996, "end_second": 1032, "url": "https://www.youtube.com/watch?v=2lkUNDZld-4&t=996s", "title": "Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/2lkUNDZld-4/maxresdefault.jpg"} {"video_id": "2lkUNDZld-4", "text": "model what do you think about that image right and the teacher model will give you a prediction like let's say again 
this 90 percent 10% 0% and then you take the student model you input that image and you compare its output to what the teacher said so this combines the teacher model you freeze the teacher model right the teacher model is only trained until here you take it from here the student model is now able to take basically the teacher it takes everything that the teacher model knows not only about this data but about all", "start_timestamp": "00:17:12", "end_timestamp": "00:17:53", "start_second": 1032, "end_second": 1073, "url": "https://www.youtube.com/watch?v=2lkUNDZld-4&t=1032s", "title": "Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/2lkUNDZld-4/maxresdefault.jpg"} {"video_id": "2lkUNDZld-4", "text": "the data so it kind of gets to ask the teacher model what do you think about this what do you think about this what do you think about this and it it can incorporate all that knowledge about all of this unlabeled data and that's why the student model here in the end if it's the same size will probably end up even better than the teacher model right so distillation I think also is still kind of a mystery of why you get a better model or I mean to to make it smaller if you make it a lot smaller usually you don't up end up with a", "start_timestamp": "00:17:53", "end_timestamp": "00:18:26", "start_second": 1073, "end_second": 1106, "url": "https://www.youtube.com/watch?v=2lkUNDZld-4&t=1073s", "title": "Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/2lkUNDZld-4/maxresdefault.jpg"} {"video_id": "2lkUNDZld-4", "text": "better model but you end up with a pretty good model that you couldn't have gotten by just training the small small model but so that's already pretty cool but why you get a better model with when they're the same size that's I don't think that's well understood yet so that's the three-stage approach so recap first 
use all of the data without labels to do unsupervised or self supervised contrastive pre-training second use only the data that has labels to do fine tuning third either distill the learnt classifier to a smaller model or distill", "start_timestamp": "00:18:26", "end_timestamp": "00:19:10", "start_second": 1106, "end_second": 1150, "url": "https://www.youtube.com/watch?v=2lkUNDZld-4&t=1106s", "title": "Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/2lkUNDZld-4/maxresdefault.jpg"} {"video_id": "2lkUNDZld-4", "text": "it to a model of the same size again in both cases you would again use all of the unlabeled data okay and that's the three-step approach that's SimCLR v2 in all of its forms alright so they go into fine tuning right here and they say we elaborate with a three layer projection head so that's the three layer projection head this here is the output of ResNet 50 where sigma is a ReLU non-linearity and we ignore the bias term for brevity blah blah blah so they contrast this here for fine", "start_timestamp": "00:19:10", "end_timestamp": "00:19:54", "start_second": 1150, "end_second": 1194, "url": "https://www.youtube.com/watch?v=2lkUNDZld-4&t=1150s", "title": "Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/2lkUNDZld-4/maxresdefault.jpg"} {"video_id": "2lkUNDZld-4", "text": "tuning SimCLR uses this right here which is basically just a classifier on top of the output of the ResNet 50 okay yada yada yada yada this is fine tuning from the input layer of the projection head to fine tune from the first layer of the projection head we have a new encoder function as this which is ResNet followed by fully connected layers and you see they take the ResNet 50 output and they ship it through the first projection layer and then there is a task specific classifier
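That encoder (backbone, then the first projection layer, then a new task head) can be sketched as follows. A minimal numpy sketch where the backbone is a stand-in for the pretrained ResNet 50 and the 2048/128/10 dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: 2048-d backbone features; the projection head's
# first layer is kept for fine-tuning, deeper layers serve pre-training only.
proj1 = rng.normal(size=(2048, 2048)) * 0.01  # first projection layer: kept
proj2 = rng.normal(size=(2048, 128)) * 0.01   # deeper projection: thrown away later
clf = rng.normal(size=(2048, 10)) * 0.01      # new task-specific classifier

def backbone(x):
    # Stand-in for the pretrained ResNet-50 trunk.
    return np.maximum(x, 0.0)

def pretrain_forward(x):
    # Full projection head, used only for the contrastive objective.
    return np.maximum(backbone(x) @ proj1, 0.0) @ proj2

def finetune_forward(x):
    # Fine-tune from the *first* projection layer, not from the raw backbone output.
    return np.maximum(backbone(x) @ proj1, 0.0) @ clf

x = rng.normal(size=(4, 2048))
logits = finetune_forward(x)   # one logit vector per image
```

The only difference between the two forward passes is which layers sit on top of the shared trunk, which is the point being debated in the narration that follows.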
now again why I", "start_timestamp": "00:19:54", "end_timestamp": "00:20:34", "start_second": 1194, "end_second": 1234, "url": "https://www.youtube.com/watch?v=2lkUNDZld-4&t=1194s", "title": "Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/2lkUNDZld-4/maxresdefault.jpg"} {"video_id": "2lkUNDZld-4", "text": "don't even see why they make like this ginormous deal out of it especially since the last layer of the ResNet 50, okay here I'm not entirely sure, but are they taking the logits no they're probably not taking the logits okay but it's just weird like is there even a non-linearity at the end right here or is this really just like two matrix multiplications in a row which I'm going to guess there's a big chance that that's the case that the last layer of this encoder is actually not even followed by a non-linearity and", "start_timestamp": "00:20:34", "end_timestamp": "00:21:11", "start_second": 1234, "end_second": 1271, "url": "https://www.youtube.com/watch?v=2lkUNDZld-4&t=1234s", "title": "Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/2lkUNDZld-4/maxresdefault.jpg"} {"video_id": "2lkUNDZld-4", "text": "therefore you'll just kind of make the dimension different and I don't see why you can't just incorporate this into the model and have to say it over and over again that this is a new special thing right again this is equivalent to tuning from a middle layer of the projection head instead of the output layer like ok you just make your model a bit bigger yeah so the third step is self training or knowledge distillation and they give two variants right here this variant as you can see here is just the", "start_timestamp": "00:21:11", "end_timestamp": "00:21:41", "start_second": 1271, "end_second": 1301, "url": "https://www.youtube.com/watch?v=2lkUNDZld-4&t=1271s", "title":
"Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/2lkUNDZld-4/maxresdefault.jpg"} {"video_id": "2lkUNDZld-4", "text": "cross-entropy but instead of having labels right here Y, you have the T term, what the teacher model thinks Y is given X okay that's cross-entropy but not with the true labels but with the output of the teacher model and you can even mix that as you can see right here you can mix this with an actual supervised loss so this would be the supervised loss whatever yeah I guess that I was wrong that wasn't I guess P of Y is always in that case but they don't use this particular kind I think except in one of", "start_timestamp": "00:21:41", "end_timestamp": "00:22:24", "start_second": 1301, "end_second": 1344, "url": "https://www.youtube.com/watch?v=2lkUNDZld-4&t=1301s", "title": "Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/2lkUNDZld-4/maxresdefault.jpg"} {"video_id": "2lkUNDZld-4", "text": "the ablations so how does this work it works pretty well and so one of their experiments as you see up here it works pretty well in that if you have 1% of the labels only 1% of imagenet labels which they say is smaller than or equal to 13 images per class so there's a thousand classes and you only have 13 labels per class or less and they differentiate: if your encoder that you train is a ResNet 50 then you get and you can see the dashed line here is a supervised baseline you almost get to the supervised baseline with one percent", "start_timestamp": "00:22:24", "end_timestamp": "00:23:14", "start_second": 1344, "end_second": 1394, "url": "https://www.youtube.com/watch?v=2lkUNDZld-4&t=1344s", "title": "Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/2lkUNDZld-4/maxresdefault.jpg"} {"video_id": "2lkUNDZld-4", "text": "of the
labels and if you actually have a larger ResNet then you get to the supervised performance without 99 percent of the labels and if you have excuse me ten percent of the labels you pass the supervised baseline so the supervised baseline is on 100% of the labels mind you and you only have ten percent and this outperforms the supervised baseline now of course you could have another graphic where you show what if we do the whole procedure with 100 percent of the labels", "start_timestamp": "00:23:14", "end_timestamp": "00:23:51", "start_second": 1394, "end_second": 1431, "url": "https://www.youtube.com/watch?v=2lkUNDZld-4&t=1394s", "title": "Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/2lkUNDZld-4/maxresdefault.jpg"} {"video_id": "2lkUNDZld-4", "text": "so first we ignore the labels and we do self-supervised pre-training then we fine-tune on a hundred percent of the data and then we do this distillation again you would of course be even better and I think they have this somewhere in a table but this is already pretty impressive and another claim they make right here is about the model sizes so and this figure's description now relates to the title they say bigger models yield larger gains when fine-tuning with fewer labeled examples so there", "start_timestamp": "00:23:51", "end_timestamp": "00:24:31", "start_second": 1431, "end_second": 1471, "url": "https://www.youtube.com/watch?v=2lkUNDZld-4&t=1431s", "title": "Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/2lkUNDZld-4/maxresdefault.jpg"} {"video_id": "2lkUNDZld-4", "text": "are three comparative statement words in one sentence let's unpack this bigger models yield larger gains so the bigger the model the better, let's say when fine-tuning with fewer labeled examples let's just look at
the graph it's really clear so here we have a number of parameters going over so these are the different models they look at how many parameters they have to do this whole procedure and here is the relative improvement in percent over the ImageNet top-1 accuracy so if you do this whole thing", "start_timestamp": "00:24:31", "end_timestamp": "00:25:13", "start_second": 1471, "end_second": 1513, "url": "https://www.youtube.com/watch?v=2lkUNDZld-4&t=1471s", "title": "Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/2lkUNDZld-4/maxresdefault.jpg"} {"video_id": "2lkUNDZld-4", "text": "with a hundred percent of the labels right I'm gonna guess this here is where they start out and you can see as you grow your models you grow the performance and this is just by increasing the model size right you have the same data set you have the same amount of labels you have the same number of steps that you train for and so on just by the fact that you make your model bigger you gain in performance okay now you can see that these curves here are above one another and these curves refer to getting", "start_timestamp": "00:25:13", "end_timestamp": "00:25:56", "start_second": 1513, "end_second": 1556, "url": "https://www.youtube.com/watch?v=2lkUNDZld-4&t=1513s", "title": "Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/2lkUNDZld-4/maxresdefault.jpg"} {"video_id": "2lkUNDZld-4", "text": "less and less labels okay so if you only have 10% of the labels your relative gains are larger this doesn't mean that you perform better with 10% of the labels than with a hundred percent of the labels that would be like ridiculous well I guess in this day and age nothing is ridiculous but for now we're still performing better by having more labels if we do the same procedure right it's not like here so here
this baseline the supervised baseline only does supervised training right so that's why we can outperform it with less of", "start_timestamp": "00:25:56", "end_timestamp": "00:26:33", "start_second": 1556, "end_second": 1593, "url": "https://www.youtube.com/watch?v=2lkUNDZld-4&t=1556s", "title": "Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/2lkUNDZld-4/maxresdefault.jpg"} {"video_id": "2lkUNDZld-4", "text": "labels but here we do the same procedure this is relative improvement right so this right here the starting point would be if you had 10 percent of labels and a 25-million-parameter model and this right here for example is if you have the same amount of labels but a 200-million-parameter model and this is relative improvement okay but what the graph says is that the relative improvement is higher the more parameters you have which is the more you go to the right and that effect in", "start_timestamp": "00:26:33", "end_timestamp": "00:27:20", "start_second": 1593, "end_second": 1640, "url": "https://www.youtube.com/watch?v=2lkUNDZld-4&t=1593s", "title": "Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/2lkUNDZld-4/maxresdefault.jpg"} {"video_id": "2lkUNDZld-4", "text": "itself is higher the fewer labels you have which is the different graphs and you can see that right here so if you have fewer and fewer labels it becomes more and more important that you have bigger models and that's really counterintuitive right because you would expect that the bigger models can overfit much more easily to the fewer labels but that doesn't seem the case so this self-supervision really seems to be sort of a counter to this notion of overfitting and if you have larger and larger models that's what they argue in", "start_timestamp": "00:27:20", "end_timestamp": "00:27:56", 
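The "relative improvement in percent" being read off the graph here is just a percent gain over a baseline accuracy; a one-line sketch with made-up numbers (the 65% and 71.5% top-1 figures are hypothetical, not taken from the paper):

```python
# Hypothetical numbers (not from the paper) illustrating the y-axis of the
# figure: percent gain in ImageNet top-1 accuracy relative to a baseline.
def relative_improvement(acc, baseline_acc):
    return (acc - baseline_acc) / baseline_acc * 100.0

# e.g. a made-up 25M-parameter model at 65.0% vs a made-up 200M-parameter
# model at 71.5%, both fine-tuned on the same 10% of labels
gain = relative_improvement(71.5, 65.0)  # a 10 percent relative improvement
```

The curves in the figure plot exactly this quantity against parameter count, one curve per label fraction.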
"start_second": 1640, "end_second": 1676, "url": "https://www.youtube.com/watch?v=2lkUNDZld-4&t=1640s", "title": "Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/2lkUNDZld-4/maxresdefault.jpg"} {"video_id": "2lkUNDZld-4", "text": "the paper you might be able to learn more and more features that might be useful for classification so if you have a larger model you're gonna learn more kinds of features and then you're going to outperform because you have more chance that these features are going to be useful for classification and I don't think they really make a statement as to why that happens more when you have fewer labels so let's think about this if I have very few labels very very few labels why does it help me even more if I have a big", "start_timestamp": "00:27:56", "end_timestamp": "00:28:32", "start_second": 1676, "end_second": 1712, "url": "https://www.youtube.com/watch?v=2lkUNDZld-4&t=1676s", "title": "Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/2lkUNDZld-4/maxresdefault.jpg"} {"video_id": "2lkUNDZld-4", "text": "model well with the same argumentation we could say and maybe they actually say this already so I might be copying them involuntarily maybe with fewer and fewer labels like let's say we have all the labels that's probably too many right if we can learn a task with some accuracy we probably had too many labels okay and likewise if we can't learn a task we know we have too few somewhere there is a border where we have enough but that's like one number and everything else is too many technically speaking like", "start_timestamp": "00:28:32", "end_timestamp": "00:29:09", "start_second": 1712, "end_second": 1749, "url": "https://www.youtube.com/watch?v=2lkUNDZld-4&t=1712s", "title": "Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)", 
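The "larger model learns more kinds of features, so there is more chance some of them are useful" argument can be made concrete with a toy simulation of my own (all the counts are invented, this is not from the paper): if pre-training discovers `k` of `total` candidate features and only a handful are genuinely useful for the downstream classification, the chance of having covered at least one useful feature grows with `k`.

```python
import random

random.seed(0)

# Toy simulation (my own illustration, numbers invented): pre-training
# discovers `k` of `total` candidate features; fine-tuning benefits when at
# least one of a few genuinely useful features was among them.
def p_covers_useful(k, total=1000, useful=5, trials=10000):
    hits = 0
    for _ in range(trials):
        learned = set(random.sample(range(total), k))
        if learned & set(range(useful)):  # any useful feature discovered?
            hits += 1
    return hits / trials

small = p_covers_useful(50)    # "small model" discovers 50 features
big = p_covers_useful(400)     # "big model" discovers 400 features
# the estimates land near the hypergeometric values, roughly 0.23 vs 0.92
```

This is only a cartoon of the intuition, but it shows why sheer feature coverage could matter more as the labeled set shrinks toward a single exploitable feature.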
"thumbnail": "https://i.ytimg.com/vi/2lkUNDZld-4/maxresdefault.jpg"} {"video_id": "2lkUNDZld-4", "text": "learning theoretically speaking so usually we have too many labels and what does that mean that probably means that there are multiple ways like if we have too many labels there are multiple different features we can pick up on and there are multiple different paths to learn our goals so if we have ImageNet and there's this weird task to recognize a three and we get lots and lots and lots of examples of threes right we can decide on a feature we can say oh all the threes that I see they have this bow down here", "start_timestamp": "00:29:09", "end_timestamp": "00:29:43", "start_second": 1749, "end_second": 1783, "url": "https://www.youtube.com/watch?v=2lkUNDZld-4&t=1749s", "title": "Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/2lkUNDZld-4/maxresdefault.jpg"} {"video_id": "2lkUNDZld-4", "text": "or all the threes that I see they have this bend here and so on but if I only have very few labels there might only be like a single feature that is even theoretically possible to learn from the labels I'm given and therefore if I have a bigger model in self-supervised pre-training because the pre-training happens with the same amount of data right if I have a bigger model that does the self-supervised pre-training it is going to learn more features and then there's a higher chance that that one feature that these very few labels that I am", "start_timestamp": "00:29:43", "end_timestamp": "00:30:21", "start_second": 1783, "end_second": 1821, "url": "https://www.youtube.com/watch?v=2lkUNDZld-4&t=1783s", "title": "Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/2lkUNDZld-4/maxresdefault.jpg"} {"video_id": "2lkUNDZld-4", "text": "able to learn something from is going to be in these features so that's 
kind of how I make sense of it in combination with what they're saying right here okay so these were the main points they do a lot of empirical studies showing the effects of these sizes they've stressed that it's important to have both deep and wide networks and they also do this additional attention mechanism over the convolution filters I don't want to go into that particularly but they also do linear evaluation compared to", "start_timestamp": "00:30:21", "end_timestamp": "00:31:01", "start_second": 1821, "end_second": 1861, "url": "https://www.youtube.com/watch?v=2lkUNDZld-4&t=1821s", "title": "Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/2lkUNDZld-4/maxresdefault.jpg"} {"video_id": "2lkUNDZld-4", "text": "supervised compared to fine-tuning with 100% of the labels so they do a very thorough empirical investigation and yeah I do appreciate that and they kind of show the same things and here they show the number of layers in the projection head so as you increase the number of layers in the projection head and train from the optimal layer in the middle your performance goes up as you can see but this effect is also stronger when you have fewer labels right you can see the differences here are greater than the differences here or", "start_timestamp": "00:31:01", "end_timestamp": "00:31:43", "start_second": 1861, "end_second": 1903, "url": "https://www.youtube.com/watch?v=2lkUNDZld-4&t=1861s", "title": "Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/2lkUNDZld-4/maxresdefault.jpg"} {"video_id": "2lkUNDZld-4", "text": "even here when you have a hundred percent of the labels so the fewer the labels the more benefit you have from the architecture right here and here they show that it's not always optimal to train from the last projection layer but from the first one so they I guess 
they converge on three projection layers and you always want to keep the first one around after self-supervised training as we mentioned before okay they investigate different distillation losses and show that it is actually important that you", "start_timestamp": "00:31:43", "end_timestamp": "00:32:18", "start_second": 1903, "end_second": 1938, "url": "https://www.youtube.com/watch?v=2lkUNDZld-4&t=1903s", "title": "Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/2lkUNDZld-4/maxresdefault.jpg"} {"video_id": "2lkUNDZld-4", "text": "do the distillation loss on labeled and unlabeled sets you can see here if you only train with the labels after fine-tuning you get poor performance if you do the label and distillation loss but only do it on the data set where you have labels then you get more performance if you do label and distillation loss but also include your unlabeled data you get even more performance and then if you do that but you don't do the label loss so before we've seen you can mix the distillation loss with the label loss if you have", "start_timestamp": "00:32:18", "end_timestamp": "00:33:00", "start_second": 1938, "end_second": 1980, "url": "https://www.youtube.com/watch?v=2lkUNDZld-4&t=1938s", "title": "Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/2lkUNDZld-4/maxresdefault.jpg"} {"video_id": "2lkUNDZld-4", "text": "lots of labels then you drop in performance again and you can see right here the drop in performance is proportional to how many labeled examples you have and that's natural right if you have the labels you can actually mix that information in with the distillation loss and that makes you better and here they drop point one percent and here they drop less than one percent by leaving away the label loss but their point basically is that it is more important to 
distill using also unlabeled data than it is to", "start_timestamp": "00:33:00", "end_timestamp": "00:33:36", "start_second": 1980, "end_second": 2016, "url": "https://www.youtube.com/watch?v=2lkUNDZld-4&t=1980s", "title": "Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/2lkUNDZld-4/maxresdefault.jpg"} {"video_id": "2lkUNDZld-4", "text": "distill including the label loss and it's much easier to not include the label loss so they don't do it I guess alright so I think that was it they compared as I said self-distillation where you distill into an equally sized model and down-distillation where you distill into a smaller model maybe that's vice-versa and they do a lot of comparison to other methods so this is a very thorough work I feel and yeah if you want more about the exact experiments I invite you to look at the paper and let's just have a final look", "start_timestamp": "00:33:36", "end_timestamp": "00:34:19", "start_second": 2016, "end_second": 2059, "url": "https://www.youtube.com/watch?v=2lkUNDZld-4&t=2016s", "title": "Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/2lkUNDZld-4/maxresdefault.jpg"} {"video_id": "2lkUNDZld-4", "text": "at the broader impact statement right here so remember the broader impact statement is supposed to force you to think about how society might be impacted at large by your work so it says the findings described in this paper can potentially be harnessed to improve accuracy in any application of computer vision where it is more expensive or difficult to label additional data than to train larger models such applications are clearly beneficial to society for example in medical applications where acquiring high quality labels requires", "start_timestamp": "00:34:19", "end_timestamp": "00:34:58", "start_second": 2059, "end_second": 2098, "url": 
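The ablation being described, a cross-entropy label loss on the labeled subset plus a distillation loss against the teacher's soft targets on labeled and unlabeled data, can be sketched in plain Python (a minimal illustration of my own with toy probability vectors, not the paper's code; `alpha` and the helper names are made up):

```python
import math

# Sketch of the loss mix from the ablation above (my own minimal version).
# Each example carries student/teacher probability vectors and an optional
# label; distillation applies to every example, the label loss only to
# labeled ones.
def cross_entropy(probs, label):
    return -math.log(probs[label])

def distill_loss(student_probs, teacher_probs):
    # cross-entropy against the teacher's soft targets
    return -sum(t * math.log(s) for t, s in zip(teacher_probs, student_probs))

def total_loss(batch, alpha=0.5):
    """batch: list of (student_probs, teacher_probs, label_or_None)."""
    loss = 0.0
    for student_probs, teacher_probs, label in batch:
        loss += alpha * distill_loss(student_probs, teacher_probs)
        if label is not None:  # label loss only where labels exist
            loss += (1 - alpha) * cross_entropy(student_probs, label)
    return loss / len(batch)
```

With `alpha=1` the label term disappears entirely, which per the ablation costs well under a percent even with many labels, so the distillation-only variant is the simpler choice.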
"https://www.youtube.com/watch?v=2lkUNDZld-4&t=2059s", "title": "Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/2lkUNDZld-4/maxresdefault.jpg"} {"video_id": "2lkUNDZld-4", "text": "careful annotation by clinicians better semi-supervised learning approaches can potentially help save lives application of computer vision to agriculture can increase crop yields which may help to improve availability of food however we also recognize their approach could become a potential component of harmful surveillance systems moreover there is an entire industry built around human labeling services and technology that reduces the need for these services could lead to short-term loss of income for some of those currently employed or", "start_timestamp": "00:34:58", "end_timestamp": "00:35:29", "start_second": 2098, "end_second": 2129, "url": "https://www.youtube.com/watch?v=2lkUNDZld-4&t=2098s", "title": "Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/2lkUNDZld-4/maxresdefault.jpg"} {"video_id": "2lkUNDZld-4", "text": "contracted to provide labels so ask yourself how much of that statement has to do with the actual novelty of this paper and the answer is of course zero right like you can replace our method in this thing with machine learning or computer vision in general like oh really SimCLRv2 specifically can increase crop yields like that specific invention of this paper will lead to higher crop yields will lead to surveillance systems so yeah you know I think I'm not gonna get too upset about this I mean I", "start_timestamp": "00:35:29", "end_timestamp": "00:36:15", "start_second": 2129, "end_second": 2175, "url": "https://www.youtube.com/watch?v=2lkUNDZld-4&t=2129s", "title": "Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)", "thumbnail": 
"https://i.ytimg.com/vi/2lkUNDZld-4/maxresdefault.jpg"} {"video_id": "2lkUNDZld-4", "text": "think it's quite funny but just again I wonder whether the people advocating for these things are happy with these statements because clearly this is just a template that you copy/paste from paper to paper replacing like a few words and if it's computer vision you're like oh my, deepfakes and if it's NLP it's like ah fake news and yeah I wonder if anything in particular has changed I wonder whether these people are happy now yeah I just wonder and if they are I wonder whether it's really for the reason that", "start_timestamp": "00:36:15", "end_timestamp": "00:37:05", "start_second": 2175, "end_second": 2225, "url": "https://www.youtube.com/watch?v=2lkUNDZld-4&t=2175s", "title": "Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/2lkUNDZld-4/maxresdefault.jpg"} {"video_id": "vzBpSlexTVY", "text": "today I want to talk to you guys about how to get machine learning onto your physical devices when there is no network connectivity and convince you why you would want to do that why you would actually prefer even with network connectivity to leave it aside perhaps in certain situations but to get there I want to start back we're gonna go back in time back to when the computer first came about right that started this computing revolution that we're in today but today we're in what we call an AI-first world how did we get here well", "start_timestamp": "00:00:00", "end_timestamp": "00:00:42", "start_second": 0, "end_second": 42, "url": "https://www.youtube.com/watch?v=vzBpSlexTVY&t=0s", "title": "On-device Machine Learning With TensorFlow", "thumbnail": "https://i.ytimg.com/vi/vzBpSlexTVY/maxresdefault.jpg"} {"video_id": "vzBpSlexTVY", "text": "first we had the computer and that gave us computing then the internet came right the internet connected all the computers together and I want 
to point out that the internet did not wipe out the computer right the computer is still used today then came mobile the mobile revolution brought the Internet to everyone it brought the connected computers into our pockets and again we see that mobile did not wipe out the Internet we still use the Internet it's still strong with IP version 6 we're running out of IP", "start_timestamp": "00:00:42", "end_timestamp": "00:01:18", "start_second": 42, "end_second": 78, "url": "https://www.youtube.com/watch?v=vzBpSlexTVY&t=42s", "title": "On-device Machine Learning With TensorFlow", "thumbnail": "https://i.ytimg.com/vi/vzBpSlexTVY/maxresdefault.jpg"} {"video_id": "vzBpSlexTVY", "text": "addresses and so too will AI not replace mobile but build on top of it and so I see mobile as a key foundation for the core of how we will interact with AI in the years to come and so to that end I want to show you guys one approach for integrating AI into mobile devices and bring amazing user experiences today over half of the fortune 500 globally have disappeared the companies kaput gone since 2000 and so how can we not have that situation first of all but also what will happen to the other half right it's the", "start_timestamp": "00:01:18", "end_timestamp": "00:02:12", "start_second": 78, "end_second": 132, "url": "https://www.youtube.com/watch?v=vzBpSlexTVY&t=78s", "title": "On-device Machine Learning With TensorFlow", "thumbnail": "https://i.ytimg.com/vi/vzBpSlexTVY/maxresdefault.jpg"} {"video_id": "vzBpSlexTVY", "text": "companies who embrace AI these startups who are well nowadays all the startups are saying we are an AI startup everyone is a start-up but a few short years ago machine learning and AI was something that companies would add on to their product they would say oh yeah we do a little machine learning here on the side right but now everyone's doing it it's super popular and for the most part people trained on the server right the server has a lot of
compute power it makes sense and I'm now going to refute that because it", "start_timestamp": "00:02:12", "end_timestamp": "00:02:46", "start_second": 132, "end_second": 166, "url": "https://www.youtube.com/watch?v=vzBpSlexTVY&t=132s", "title": "On-device Machine Learning With TensorFlow", "thumbnail": "https://i.ytimg.com/vi/vzBpSlexTVY/maxresdefault.jpg"} {"video_id": "vzBpSlexTVY", "text": "makes sense mobile is not a compute-powerful platform so we train on the server that's fine but what if we did the predictions on mobile what if we did inference there instead of serving them from a web server and in particular this approach can lead to amazing user experiences I want to just show with one illustrative example this is Google Translate and it has the ability to overlay the translation directly in the image that you see now you can tell just from the speed that it can do this that it is", "start_timestamp": "00:02:46", "end_timestamp": "00:03:31", "start_second": 166, "end_second": 211, "url": "https://www.youtube.com/watch?v=vzBpSlexTVY&t=166s", "title": "On-device Machine Learning With TensorFlow", "thumbnail": "https://i.ytimg.com/vi/vzBpSlexTVY/maxresdefault.jpg"} {"video_id": "vzBpSlexTVY", "text": "definitely not happening over the network right the video is not being streamed to Google servers and then sent back that's why it works when you are offline it works on a boat it works underwater well if your phone can be underwater it would probably even work in space and having access to AI on your phone wherever you go that is responsive and accurate that can really change things on a global scale for mobile for the internet and for computing so how can we make something like that right machine learning is hard", "start_timestamp": "00:03:31", "end_timestamp": "00:04:12", "start_second": 211, "end_second": 252, "url": "https://www.youtube.com/watch?v=vzBpSlexTVY&t=211s", "title": "On-device Machine Learning With 
TensorFlow", "thumbnail": "https://i.ytimg.com/vi/vzBpSlexTVY/maxresdefault.jpg"} {"video_id": "vzBpSlexTVY", "text": "enough by itself then put it on mobile I mean mobile apps aren't easy either and so combining those that can be a real challenge so let me start by showing you guys a little demo of what I've kind of put together and then we'll talk through how we might build something like this it's a simple demo just mainly just to demonstrate the core kind of functionality and what I've got here is a phone and I have a little app if we switch over to the camera here I have on the table a few different candies let's see what we see here oh great and so", "start_timestamp": "00:04:12", "end_timestamp": "00:04:53", "start_second": 252, "end_second": 293, "url": "https://www.youtube.com/watch?v=vzBpSlexTVY&t=252s", "title": "On-device Machine Learning With TensorFlow", "thumbnail": "https://i.ytimg.com/vi/vzBpSlexTVY/maxresdefault.jpg"} {"video_id": "vzBpSlexTVY", "text": "let's see if you guys can see this okay so what I'm gonna do is we'll take some of these away and if the lighting is good you know here we have a Reese's Cup right and we can see here now that as the image kind of isolates down it recognizes that now I also have a smaller Reese's Cup which just fell on the floor okay so we have a smaller Reese's Cup here and you know it will switch over and recognize that now you might say well how do I know that he's not cheating by sending all these images over the web right well let me first of", "start_timestamp": "00:04:53", "end_timestamp": "00:05:32", "start_second": 293, "end_second": 332, "url": "https://www.youtube.com/watch?v=vzBpSlexTVY&t=293s", "title": "On-device Machine Learning With TensorFlow", "thumbnail": "https://i.ytimg.com/vi/vzBpSlexTVY/maxresdefault.jpg"} {"video_id": "vzBpSlexTVY", "text": "all there's no Wi-Fi here secondly let's just go ahead and hit airplane mode right as well and we'll see here that you know 
everything continues to work just fine these are some more peanut butter cups I'm a big fan of peanut butter and so let's push these guys away and so here we have a shot of the Justin's see here it says white chocolate peanut butter cups and you know when my hand enters the frame it may get upset the candy is a little bit bent out of shape from its days inside the suitcase but you know you can see that", "start_timestamp": "00:05:32", "end_timestamp": "00:06:07", "start_second": 332, "end_second": 367, "url": "https://www.youtube.com/watch?v=vzBpSlexTVY&t=332s", "title": "On-device Machine Learning With TensorFlow", "thumbnail": "https://i.ytimg.com/vi/vzBpSlexTVY/maxresdefault.jpg"} {"video_id": "vzBpSlexTVY", "text": "it clearly recognizes that versus a very similar packaging right but this is milk chocolate this is a milk chocolate peanut butter cup and this one also updates you know it updates right away you can see the confidence and then here we have some Juicy Fruit gum just for variety you know some people don't like peanut butter I understand that so that's the gist of the demo here and so if we switch back to the slides we can think about how we might build something like this how do we go from collecting data to having an app that", "start_timestamp": "00:06:07", "end_timestamp": "00:06:42", "start_second": 367, "end_second": 402, "url": "https://www.youtube.com/watch?v=vzBpSlexTVY&t=367s", "title": "On-device Machine Learning With TensorFlow", "thumbnail": "https://i.ytimg.com/vi/vzBpSlexTVY/maxresdefault.jpg"} {"video_id": "vzBpSlexTVY", "text": "can recognize images in real time custom images right I chose these pretty arbitrarily if you were to take any generic machine learning visual model and pointed it at this it might say candy candy bar maybe it even says chocolate maybe it just says yellow but how can you get something to recognize something that's specific you know imagine having something recognize your particular 
products whether it's your brand things in your home and so these are kind of our guidelines here this is what we'll follow this is our", "start_timestamp": "00:06:42", "end_timestamp": "00:07:21", "start_second": 402, "end_second": 441, "url": "https://www.youtube.com/watch?v=vzBpSlexTVY&t=402s", "title": "On-device Machine Learning With TensorFlow", "thumbnail": "https://i.ytimg.com/vi/vzBpSlexTVY/maxresdefault.jpg"} {"video_id": "vzBpSlexTVY", "text": "little map this is our road map and what we're gonna do is we're gonna try to fill in each of these blocks and go from gathering data to having an application so the first thing you got to do is collect data and data collection is as we all know the most fun step of machine learning now I've found a bit of a shortcut for you instead of you know collecting lots of pictures and then trimming them down and then labeling them and it would just be a lot of work right it's a lot of pictures so maybe we can shoot some video", "start_timestamp": "00:07:21", "end_timestamp": "00:08:00", "start_second": 441, "end_second": 480, "url": "https://www.youtube.com/watch?v=vzBpSlexTVY&t=441s", "title": "On-device Machine Learning With TensorFlow", "thumbnail": "https://i.ytimg.com/vi/vzBpSlexTVY/maxresdefault.jpg"} {"video_id": "vzBpSlexTVY", "text": "and we only take the video of that particular object so I go through each one now you can see there I have one of the peanut butter cups and we go through each one and we capture a video and what's nice about that is that entire video every single frame is a picture of that object hopefully from a different angle so keep that camera moving and then what we can do is we can chop that up right there's a command line tool called ffmpeg and there's lots of tools and ways you can chop up a video that's a solved problem and", "start_timestamp": "00:08:00", "end_timestamp": "00:08:34", "start_second": 480, "end_second": 514, "url": 
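The chop-up step just mentioned, one short video per object turned into a folder of frames, could look like the following sketch, which only builds the ffmpeg command strings (the filenames and the 2-frames-per-second rate are my own choices; `-i` and the `-vf fps=...` filter are standard ffmpeg usage):

```python
import os
import shlex

# Sketch: build one ffmpeg invocation per class video, writing frames into
# a folder named after the class (hypothetical filenames).
def frame_extraction_commands(videos, fps=2):
    """videos: list of paths like 'reeses_cup.mp4'; returns shell commands."""
    commands = []
    for video in videos:
        label = os.path.splitext(os.path.basename(video))[0]
        out_pattern = os.path.join(label, "frame_%04d.jpg")
        commands.append(
            "ffmpeg -i %s -vf fps=%d %s"
            % (shlex.quote(video), fps, shlex.quote(out_pattern))
        )
    return commands

cmds = frame_extraction_commands(["reeses_cup.mp4", "juicy_fruit.mp4"])
# each command drops frames into a per-class folder, which doubles as the label
```

Naming the output folder after the video is what makes the next step, one folder per object, fall out automatically.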
"https://www.youtube.com/watch?v=vzBpSlexTVY&t=480s", "title": "On-device Machine Learning With TensorFlow", "thumbnail": "https://i.ytimg.com/vi/vzBpSlexTVY/maxresdefault.jpg"} {"video_id": "vzBpSlexTVY", "text": "we put those pictures all into one place all together in one folder for each of the objects you want to recognize so one folder for your Juicy Fruit one folder for your milk chocolate peanut butter cups one folder for the white chocolate ones and so on and so now we've effectively just labeled all of the images right we didn't have to come up with any sophisticated way to label it for us with some system so that's great so we have folders of images what's next well we take these pictures and we send them to training", "start_timestamp": "00:08:34", "end_timestamp": "00:09:10", "start_second": 514, "end_second": 550, "url": "https://www.youtube.com/watch?v=vzBpSlexTVY&t=514s", "title": "On-device Machine Learning With TensorFlow", "thumbnail": "https://i.ytimg.com/vi/vzBpSlexTVY/maxresdefault.jpg"} {"video_id": "vzBpSlexTVY", "text": "right so in my particular case I uploaded them to the cloud because my MacBook was running out of space from all the images and pictures and so I zipped them up put them in the cloud and I happen to do my training in the cloud you can do it in your data center you can do it on your local machine if you have a lot of storage and the training we did is using transfer learning now the speaker before me mentioned transfer learning so I won't go too much into detail about it but I do have a kind of little story a little analogy", "start_timestamp": "00:09:10", "end_timestamp": "00:09:38", "start_second": 550, "end_second": 578, "url": "https://www.youtube.com/watch?v=vzBpSlexTVY&t=550s", "title": "On-device Machine Learning With TensorFlow", "thumbnail": "https://i.ytimg.com/vi/vzBpSlexTVY/maxresdefault.jpg"} {"video_id": "vzBpSlexTVY", "text": "that I recently accidentally discovered I was playing with a puzzle this is a 
jigsaw puzzle right lots of pieces and this is the final picture this is the box showing what I was supposed to build and I'm not a very good jigsaw puzzle person I kind of struggle with it but as I was struggling through this I realized you know there's some good tricks in here I could do first of all I could separate the pieces with no images the white pieces from everything else right so I moved all of the white pieces to one side okay so", "start_timestamp": "00:09:38", "end_timestamp": "00:10:12", "start_second": 578, "end_second": 612, "url": "https://www.youtube.com/watch?v=vzBpSlexTVY&t=578s", "title": "On-device Machine Learning With TensorFlow", "thumbnail": "https://i.ytimg.com/vi/vzBpSlexTVY/maxresdefault.jpg"} {"video_id": "vzBpSlexTVY", "text": "that's the obvious first step but then how do we start from there I also noticed that in this image the roofs all the individual roofs are very distinctly patterned and so I said ah I know I can look for pieces with this pattern and with that I can begin to put those together those were easy to find I could recognize those patterns so my brain's neural network through my eyes could find those individual small pieces and combine them together and say ah there's a roof similarly I noticed in the stairs the stairs have a distinct", "start_timestamp": "00:10:12", "end_timestamp": "00:10:52", "start_second": 612, "end_second": 652, "url": "https://www.youtube.com/watch?v=vzBpSlexTVY&t=612s", "title": "On-device Machine Learning With TensorFlow", "thumbnail": "https://i.ytimg.com/vi/vzBpSlexTVY/maxresdefault.jpg"} {"video_id": "vzBpSlexTVY", "text": "pattern they're parallel lines there's some of those X's there and I could find the pieces that kind of looked similar and begin to squeeze those together and eventually get that together and instantly everything else came together well there was a few more steps right but that's as far as I got the edge pieces they're so hard so a 
convolutional neural network kind of works in a similar way I use transfer learning with the Inception model which is a model that the Google Brain team created a few years ago it has 48 layers", "start_timestamp": "00:10:52", "end_timestamp": "00:11:30", "start_second": 652, "end_second": 690, "url": "https://www.youtube.com/watch?v=vzBpSlexTVY&t=652s", "title": "On-device Machine Learning With TensorFlow", "thumbnail": "https://i.ytimg.com/vi/vzBpSlexTVY/maxresdefault.jpg"} {"video_id": "vzBpSlexTVY", "text": "and to give you some perspective on how big or how small that might be just a few years ago I think it was 2011 or 2012 it was impractical to train a neural network that was more than four layers deep the year before Inception v3 came out the model that won the international image recognition competition called ImageNet had 22 layers so the Inception v3 model really was a giant leap ahead right more than double the number of layers it really showcased the improvements in both computational power that was", "start_timestamp": "00:11:30", "end_timestamp": "00:12:07", "start_second": 690, "end_second": 727, "url": 
"https://www.youtube.com/watch?v=vzBpSlexTVY&t=727s", "title": "On-device Machine Learning With TensorFlow", "thumbnail": "https://i.ytimg.com/vi/vzBpSlexTVY/maxresdefault.jpg"} {"video_id": "vzBpSlexTVY", "text": "available as well as network design so we can use that wonderful research and take it to our advantage so we train the last layer of the network and leave everything else intact this means that everything in the visual part of recognizing those pieces recognizing the little bits the principal pieces are already there in place for us so you have a great model you can train it but when you're done you look at your file system and you say wow this model is 84 megabytes I'm trying to put this in a mobile app", "start_timestamp": "00:12:07", "end_timestamp": "00:12:44", "start_second": 727, "end_second": 764, "url": "https://www.youtube.com/watch?v=vzBpSlexTVY&t=727s", "title": "On-device Machine Learning With TensorFlow", "thumbnail": "https://i.ytimg.com/vi/vzBpSlexTVY/maxresdefault.jpg"} {"video_id": "vzBpSlexTVY", "text": "can you help me out sure thing we're gonna optimize it for mobile what can we do to shrink down a model well handily enough there is a graph transform tool and there's a couple of steps in there that we can do the first thing is a technique called quantizing or quantization and the floating-point numbers those 32-bit floating-point numbers that are taking up all this space we're gonna shrink that down to just eight bits how can we get away with this we're gonna lose so much accuracy right", "start_timestamp": "00:12:44", "end_timestamp": "00:13:18", "start_second": 764, "end_second": 798, "url": "https://www.youtube.com/watch?v=vzBpSlexTVY&t=764s", "title": "On-device Machine Learning With TensorFlow", "thumbnail": "https://i.ytimg.com/vi/vzBpSlexTVY/maxresdefault.jpg"} {"video_id": "vzBpSlexTVY", "text": "well not necessarily luckily neural networks are designed for fuzziness in their inputs so by quantizing down to eight bits and that's not just by rounding but actually saying these numbers are close enough we're gonna make them all kind of snap to this number and then this number so we take the range and we split that out into 256 pieces so if the range of values is say negative ten to thirty we divide that up into 256 little steps and so that gives you a little more accuracy than just purely", "start_timestamp": "00:13:18", "end_timestamp": "00:13:51", "start_second": 798, "end_second": 831, "url": "https://www.youtube.com/watch?v=vzBpSlexTVY&t=798s", "title": "On-device Machine Learning With TensorFlow", "thumbnail": "https://i.ytimg.com/vi/vzBpSlexTVY/maxresdefault.jpg"} {"video_id": "vzBpSlexTVY", "text": "changing it directly to 8-bit and additionally that means when you do compression 
those values are the same and so that's what lets you go down literally 4x so you go from 84 megabytes down to around 21 megabytes and one small additional thing you can do is take away the parts of the graph that you don't need anymore for prediction there are some graph nodes which are only useful for training and there's also a tool that will prune that down for you as well so that's also part of this graph", "start_timestamp": "00:13:51", "end_timestamp": "00:14:25", "start_second": 831, "end_second": 865, "url": "https://www.youtube.com/watch?v=vzBpSlexTVY&t=831s", "title": "On-device Machine Learning With TensorFlow", "thumbnail": "https://i.ytimg.com/vi/vzBpSlexTVY/maxresdefault.jpg"} {"video_id": "vzBpSlexTVY", "text": "transform tool is a whole suite of tools so that's really useful and I also want to call out that so far everything we've done is basically running existing code and existing tools you didn't have to custom write anything the only custom thing you had to do was shoot that video and run ffmpeg so this really puts it in an immediate possibility type of stage there is one more thing one more consideration to think about when looking at deploying a machine learning model to a mobile device and that is whether you package", "start_timestamp": "00:14:25", "end_timestamp": "00:15:02", "start_second": 865, "end_second": 902, "url": "https://www.youtube.com/watch?v=vzBpSlexTVY&t=865s", "title": "On-device Machine Learning With TensorFlow", "thumbnail": "https://i.ytimg.com/vi/vzBpSlexTVY/maxresdefault.jpg"} {"video_id": "vzBpSlexTVY", "text": "it inside the app or alongside the app you can make it a data file or you can integrate it into the app and some of the thoughts there are whether you want to be able to secure the model whether you want to be able to download updates without pushing a new updated version of the app itself and whether or not you care about sizing and whether or
not you want to secure the model from outside access so that's our overall design right we gather it up we shoot the videos slice it up and train and optimize", "start_timestamp": "00:15:02", "end_timestamp": "00:15:43", "start_second": 902, "end_second": 943, "url": "https://www.youtube.com/watch?v=vzBpSlexTVY&t=902s", "title": "On-device Machine Learning With TensorFlow", "thumbnail": "https://i.ytimg.com/vi/vzBpSlexTVY/maxresdefault.jpg"} {"video_id": "vzBpSlexTVY", "text": "and then we can deploy so that's our finished model and this is a video of the same thing so I won't show that and the final kind of point here is how have we been able to do this right what makes it possible and that's TensorFlow TensorFlow is Google's machine learning library hopefully some of you have heard of it it's too dark so I can't do a show of hands but it's been incredible to see the community adoption and the reception to the launch it was open sourced in November of 2015 and hit 1.0 this past", "start_timestamp": "00:15:43", "end_timestamp": "00:16:18", "start_second": 943, "end_second": 978, "url": "https://www.youtube.com/watch?v=vzBpSlexTVY&t=943s", "title": "On-device Machine Learning With TensorFlow", "thumbnail": "https://i.ytimg.com/vi/vzBpSlexTVY/maxresdefault.jpg"} {"video_id": "vzBpSlexTVY", "text": "February and with that we have support for not just these platforms right which we expect CPUs GPUs and of course Android but also iOS and Raspberry Pi so for those of you who like to tinker with IoT devices you can load a model onto a Raspberry Pi so that means you can recognize things without any network traffic it can be handy and the community response to TensorFlow it's been a while now more than fourteen months but in the first fourteen months there were over fourteen thousand commits hundreds of non-Google", "start_timestamp": "00:16:18", "end_timestamp": "00:16:52", "start_second": 978, "end_second": 1012, "url":
"https://www.youtube.com/watch?v=vzBpSlexTVY&t=978s", "title": "On-device Machine Learning With TensorFlow", "thumbnail": "https://i.ytimg.com/vi/vzBpSlexTVY/maxresdefault.jpg"} {"video_id": "vzBpSlexTVY", "text": "contributors and now that it's at 1.0 it is production ready the APIs are stable and backwards compatible so things won't change out from under you so that's really quite nice and so in conclusion putting machine learning on mobile will just make that experience of mobile the internet and computing that much more powerful and usher in a new wave of innovation and open a whole new world of possibilities and moreover you can build one easily by gathering your own labelled data simply by shooting a video", "start_timestamp": "00:16:52", "end_timestamp": "00:17:29", "start_second": 1012, "end_second": 1049, "url": "https://www.youtube.com/watch?v=vzBpSlexTVY&t=1012s", "title": "On-device Machine Learning With TensorFlow", "thumbnail": "https://i.ytimg.com/vi/vzBpSlexTVY/maxresdefault.jpg"} {"video_id": "r9LCwI5iErE", "text": "Probably a lot of you know the story of the two salesmen who went down to Africa in the 1900s. They were sent down to find if there was any opportunity for selling shoes, and they wrote telegrams back to Manchester. And one of them wrote, \"Situation hopeless. Stop. They don't wear shoes.\" And the other one wrote, \"Glorious opportunity. They don't have any shoes yet.\" (Laughter) Now, there's a similar situation in the classical music world, because there are some people who think that classical music is dying.", "start_timestamp": "00:00:00", "end_timestamp": "00:00:44", "start_second": 0, "end_second": 44, "url": "https://www.youtube.com/watch?v=r9LCwI5iErE&t=0s", "title": "The transformative power of classical music | Benjamin Zander", "thumbnail": "https://i.ytimg.com/vi/r9LCwI5iErE/maxresdefault.jpg"} {"video_id": "r9LCwI5iErE", "text": "And there are some of us who think you ain't seen nothing yet.
And rather than go into statistics and trends, and tell you about all the orchestras that are closing, and the record companies that are folding, I thought we should do an experiment tonight. Actually, it's not really an experiment, because I know the outcome. (Laughter) But it's like an experiment. Now, before we start -- (Laughter) Before we start, I need to do two things. One is I want to remind you of what a seven-year-old child sounds like when he plays the piano.", "start_timestamp": "00:00:44", "end_timestamp": "00:01:20", "start_second": 44, "end_second": 80, "url": "https://www.youtube.com/watch?v=r9LCwI5iErE&t=44s", "title": "The transformative power of classical music | Benjamin Zander", "thumbnail": "https://i.ytimg.com/vi/r9LCwI5iErE/maxresdefault.jpg"} {"video_id": "r9LCwI5iErE", "text": "Maybe you have this child at home. He sounds something like this. (Music) (Music ends) I see some of you recognize this child. Now, if he practices for a year and takes lessons, he's now eight and he sounds like this. (Music) (Music ends) He practices for another year and takes lessons -- he's nine. (Music) (Music ends) Then he practices for another year and takes lessons -- now he's 10. (Music) (Music ends) At that point, they usually give up. (Laughter) (Applause) Now, if you'd waited for one more year, you would have heard this.", "start_timestamp": "00:01:20", "end_timestamp": "00:02:27", "start_second": 80, "end_second": 147, "url": "https://www.youtube.com/watch?v=r9LCwI5iErE&t=80s", "title": "The transformative power of classical music | Benjamin Zander", "thumbnail": "https://i.ytimg.com/vi/r9LCwI5iErE/maxresdefault.jpg"} {"video_id": "r9LCwI5iErE", "text": "(Music) (Music ends) Now, what happened was not maybe what you thought, which is, he suddenly became passionate, engaged, involved, got a new teacher, he hit puberty, or whatever it is. What actually happened was the impulses were reduced. 
You see, the first time, he was playing with an impulse on every note. (Music) And the second, with an impulse every other note. (Music) You can see it by looking at my head. (Laughter) The nine-year-old put an impulse on every four notes. (Music) The 10-year-old, on every eight notes.", "start_timestamp": "00:02:27", "end_timestamp": "00:03:12", "start_second": 147, "end_second": 192, "url": "https://www.youtube.com/watch?v=r9LCwI5iErE&t=147s", "title": "The transformative power of classical music | Benjamin Zander", "thumbnail": "https://i.ytimg.com/vi/r9LCwI5iErE/maxresdefault.jpg"} {"video_id": "r9LCwI5iErE", "text": "(Music) And the 11-year-old, one impulse on the whole phrase. (Music) I don't know how we got into this position. (Laughter) I didn't say, \"I'm going to move my shoulder over, move my body.\" No, the music pushed me over, which is why I call it one-buttock playing. (Music) It can be the other buttock. (Music) You know, a gentleman was once watching a presentation I was doing, when I was working with a young pianist. He was the president of a corporation in Ohio. I was working with this young pianist, and said,", "start_timestamp": "00:03:12", "end_timestamp": "00:03:48", "start_second": 192, "end_second": 228, "url": "https://www.youtube.com/watch?v=r9LCwI5iErE&t=192s", "title": "The transformative power of classical music | Benjamin Zander", "thumbnail": "https://i.ytimg.com/vi/r9LCwI5iErE/maxresdefault.jpg"} {"video_id": "r9LCwI5iErE", "text": "\"The trouble with you is you're a two-buttock player. You should be a one-buttock player.\" I moved his body while he was playing. And suddenly, the music took off. It took flight. The audience gasped when they heard the difference. Then I got a letter from this gentleman. He said, \"I was so moved. I went back and I transformed my entire company into a one-buttock company.\" (Laughter) Now, the other thing I wanted to do is to tell you about you. There are 1,600 people, I believe. 
My estimation is that probably 45 of you", "start_timestamp": "00:03:48", "end_timestamp": "00:04:18", "start_second": 228, "end_second": 258, "url": "https://www.youtube.com/watch?v=r9LCwI5iErE&t=228s", "title": "The transformative power of classical music | Benjamin Zander", "thumbnail": "https://i.ytimg.com/vi/r9LCwI5iErE/maxresdefault.jpg"} {"video_id": "r9LCwI5iErE", "text": "are absolutely passionate about classical music. You adore classical music. Your FM is always on that classical dial. You have CDs in your car, and you go to the symphony, your children are playing instruments. You can't imagine your life without classical music. That's the first group, quite small. Then there's another bigger group. The people who don't mind classical music. (Laughter) You know, you've come home from a long day, and you take a glass of wine, and you put your feet up. A little Vivaldi in the background doesn't do any harm.", "start_timestamp": "00:04:18", "end_timestamp": "00:04:48", "start_second": 258, "end_second": 288, "url": "https://www.youtube.com/watch?v=r9LCwI5iErE&t=258s", "title": "The transformative power of classical music | Benjamin Zander", "thumbnail": "https://i.ytimg.com/vi/r9LCwI5iErE/maxresdefault.jpg"} {"video_id": "r9LCwI5iErE", "text": "That's the second group. Now comes the third group: people who never listen to classical music. It's just simply not part of your life. You might hear it like second-hand smoke at the airport ... (Laughter) -- and maybe a little bit of a march from \"Aida\" when you come into the hall. But otherwise, you never hear it. That's probably the largest group. And then there's a very small group. These are the people who think they're tone-deaf. Amazing number of people think they're tone-deaf. 
Actually, I hear a lot, \"My husband is tone-deaf.\"", "start_timestamp": "00:04:48", "end_timestamp": "00:05:15", "start_second": 288, "end_second": 315, "url": "https://www.youtube.com/watch?v=r9LCwI5iErE&t=288s", "title": "The transformative power of classical music | Benjamin Zander", "thumbnail": "https://i.ytimg.com/vi/r9LCwI5iErE/maxresdefault.jpg"} {"video_id": "r9LCwI5iErE", "text": "(Laughter) Actually, you cannot be tone-deaf. Nobody is tone-deaf. If you were tone-deaf, you couldn't change the gears on your car, in a stick shift car. You couldn't tell the difference between somebody from Texas and somebody from Rome. And the telephone. The telephone. If your mother calls on the miserable telephone, she calls and says, \"Hello,\" you not only know who it is, you know what mood she's in. You have a fantastic ear. Everybody has a fantastic ear. So nobody is tone-deaf. But I tell you what. It doesn't work for me to go on with this thing,", "start_timestamp": "00:05:15", "end_timestamp": "00:05:48", "start_second": 315, "end_second": 348, "url": "https://www.youtube.com/watch?v=r9LCwI5iErE&t=315s", "title": "The transformative power of classical music | Benjamin Zander", "thumbnail": "https://i.ytimg.com/vi/r9LCwI5iErE/maxresdefault.jpg"} {"video_id": "r9LCwI5iErE", "text": "with such a wide gulf between those who understand, love and are passionate about classical music, and those who have no relationship to it at all. The tone-deaf people, they're no longer here. But even between those three categories, it's too wide a gulf. So I'm not going to go on until every single person in this room, downstairs and in Aspen, and everybody else looking, will come to love and understand classical music. So that's what we're going to do. 
Now, you notice that there is not the slightest doubt in my mind", "start_timestamp": "00:05:48", "end_timestamp": "00:06:24", "start_second": 348, "end_second": 384, "url": "https://www.youtube.com/watch?v=r9LCwI5iErE&t=348s", "title": "The transformative power of classical music | Benjamin Zander", "thumbnail": "https://i.ytimg.com/vi/r9LCwI5iErE/maxresdefault.jpg"} {"video_id": "r9LCwI5iErE", "text": "that this is going to work, if you look at my face, right? It's one of the characteristics of a leader that he not doubt for one moment the capacity of the people he's leading to realize whatever he's dreaming. Imagine if Martin Luther King had said, \"I have a dream. Of course, I'm not sure they'll be up to it.\" (Laughter) All right. So I'm going to take a piece of Chopin. This is a beautiful prelude by Chopin. Some of you will know it. (Music) Do you know what I think probably happened here? When I started, you thought, \"How beautiful that sounds.\"", "start_timestamp": "00:06:24", "end_timestamp": "00:07:28", "start_second": 384, "end_second": 448, "url": "https://www.youtube.com/watch?v=r9LCwI5iErE&t=384s", "title": "The transformative power of classical music | Benjamin Zander", "thumbnail": "https://i.ytimg.com/vi/r9LCwI5iErE/maxresdefault.jpg"} {"video_id": "r9LCwI5iErE", "text": "(Music) \"I don't think we should go to the same place for our summer holidays next year.\" (Laughter) It's funny, isn't it? It's funny how those thoughts kind of waft into your head. And of course -- (Applause) Of course, if the piece is long and you've had a long day, you might actually drift off. Then your companion will dig you in the ribs and say, \"Wake up! It's culture!\" And then you feel even worse. 
(Laughter) But has it ever occurred to you that the reason you feel sleepy in classical music is not because of you, but because of us?", "start_timestamp": "00:07:28", "end_timestamp": "00:08:13", "start_second": 448, "end_second": 493, "url": "https://www.youtube.com/watch?v=r9LCwI5iErE&t=448s", "title": "The transformative power of classical music | Benjamin Zander", "thumbnail": "https://i.ytimg.com/vi/r9LCwI5iErE/maxresdefault.jpg"} {"video_id": "r9LCwI5iErE", "text": "Did anybody think while I was playing, \"Why is he using so many impulses?\" If I'd done this with my head you certainly would have thought it. (Music) (Music ends) And for the rest of your life, every time you hear classical music, you'll always be able to know if you hear those impulses. So let's see what's really going on here. We have a B. This is a B. The next note is a C. And the job of the C is to make the B sad. And it does, doesn't it? (Laughter) Composers know that. If they want sad music, they just play those two notes.", "start_timestamp": "00:08:13", "end_timestamp": "00:08:51", "start_second": 493, "end_second": 531, "url": "https://www.youtube.com/watch?v=r9LCwI5iErE&t=493s", "title": "The transformative power of classical music | Benjamin Zander", "thumbnail": "https://i.ytimg.com/vi/r9LCwI5iErE/maxresdefault.jpg"} {"video_id": "r9LCwI5iErE", "text": "(Music) But basically, it's just a B, with four sads. (Laughter) Now, it goes down to A. Now to G. And then to F. So we have B, A, G, F. And if we have B, A, G, F, what do we expect next? (Music) That might have been a fluke. Let's try it again. (Music) Oh, the TED choir. (Laughter) And you notice nobody is tone-deaf, right? Nobody is. You know, every village in Bangladesh and every hamlet in China -- everybody knows: da, da, da, da -- da. Everybody knows, who's expecting that E. 
Chopin didn't want to reach the E there,", "start_timestamp": "00:08:51", "end_timestamp": "00:09:43", "start_second": 531, "end_second": 583, "url": "https://www.youtube.com/watch?v=r9LCwI5iErE&t=531s", "title": "The transformative power of classical music | Benjamin Zander", "thumbnail": "https://i.ytimg.com/vi/r9LCwI5iErE/maxresdefault.jpg"} {"video_id": "r9LCwI5iErE", "text": "because what will have happened? It will be over, like Hamlet. Do you remember? Act One, scene three, he finds out his uncle killed his father. He keeps on going up to his uncle and almost killing him. And then he backs away, he goes up to him again, almost kills him. The critics sitting in the back row there, they have to have an opinion, so they say, \"Hamlet is a procrastinator.\" Or they say, \"Hamlet has an Oedipus complex.\" No, otherwise the play would be over, stupid. (Laughter) That's why Shakespeare puts all that stuff in Hamlet --", "start_timestamp": "00:09:43", "end_timestamp": "00:10:11", "start_second": 583, "end_second": 611, "url": "https://www.youtube.com/watch?v=r9LCwI5iErE&t=583s", "title": "The transformative power of classical music | Benjamin Zander", "thumbnail": "https://i.ytimg.com/vi/r9LCwI5iErE/maxresdefault.jpg"} {"video_id": "r9LCwI5iErE", "text": "Ophelia going mad, the play within the play, and Yorick's skull, and the gravediggers. That's in order to delay -- until Act Five, he can kill him. It's the same with the Chopin. He's just about to reach the E, and he says, \"Oops, better go back up and do it again.\" So he does it again. Now, he gets excited. (Music) That's excitement, don't worry about it. 
Now, he gets to F-sharp, and finally he goes down to E, but it's the wrong chord -- because the chord he's looking for is this one, and instead he does ...", "start_timestamp": "00:10:11", "end_timestamp": "00:10:44", "start_second": 611, "end_second": 644, "url": "https://www.youtube.com/watch?v=r9LCwI5iErE&t=611s", "title": "The transformative power of classical music | Benjamin Zander", "thumbnail": "https://i.ytimg.com/vi/r9LCwI5iErE/maxresdefault.jpg"} {"video_id": "r9LCwI5iErE", "text": "Now, we call that a deceptive cadence, because it deceives us. I tell my students, \"If you have a deceptive cadence, raise your eyebrows, and everybody will know.\" (Laughter) (Applause) Right. He gets to E, but it's the wrong chord. Now, he tries E again. That chord doesn't work. Now, he tries the E again. That chord doesn't work. Now, he tries E again, and that doesn't work. And then finally ... There was a gentleman in the front row who went, \"Mmm.\" (Laughter) It's the same gesture he makes when he comes home", "start_timestamp": "00:10:44", "end_timestamp": "00:11:21", "start_second": 644, "end_second": 681, "url": "https://www.youtube.com/watch?v=r9LCwI5iErE&t=644s", "title": "The transformative power of classical music | Benjamin Zander", "thumbnail": "https://i.ytimg.com/vi/r9LCwI5iErE/maxresdefault.jpg"} {"video_id": "r9LCwI5iErE", "text": "after a long day, turns off the key in his car and says, \"Aah, I'm home.\" Because we all know where home is. So this is a piece which goes from away to home. I'm going to play it all the way through and you're going to follow. B, C, B, C, B, C, B -- down to A, down to G, down to F. Almost goes to E, but otherwise the play would be over. He goes back up to B, he gets very excited. Goes to F-sharp. Goes to E. It's the wrong chord. It's the wrong chord. And finally goes to E, and it's home. 
And what you're going to see is one-buttock playing.", "start_timestamp": "00:11:21", "end_timestamp": "00:11:51", "start_second": 681, "end_second": 711, "url": "https://www.youtube.com/watch?v=r9LCwI5iErE&t=681s", "title": "The transformative power of classical music | Benjamin Zander", "thumbnail": "https://i.ytimg.com/vi/r9LCwI5iErE/maxresdefault.jpg"} {"video_id": "r9LCwI5iErE", "text": "(Laughter) Because for me, to join the B to the E, I have to stop thinking about every single note along the way, and start thinking about the long, long line from B to E. You know, we were just in South Africa, and you can't go to South Africa without thinking of Mandela in jail for 27 years. What was he thinking about? Lunch? No, he was thinking about the vision for South Africa and for human beings. This is about vision. This is about the long line. Like the bird who flies over the field and doesn't care about the fences underneath, all right?", "start_timestamp": "00:11:51", "end_timestamp": "00:12:31", "start_second": 711, "end_second": 751, "url": "https://www.youtube.com/watch?v=r9LCwI5iErE&t=711s", "title": "The transformative power of classical music | Benjamin Zander", "thumbnail": "https://i.ytimg.com/vi/r9LCwI5iErE/maxresdefault.jpg"} {"video_id": "r9LCwI5iErE", "text": "So now, you're going to follow the line all the way from B to E. And I've one last request before I play this piece all the way through. Would you think of somebody who you adore, who's no longer there? A beloved grandmother, a lover -- somebody in your life who you love with all your heart, but that person is no longer with you. Bring that person into your mind, and at the same time, follow the line all the way from B to E, and you'll hear everything that Chopin had to say. 
(Music) (Music ends) (Applause) Now, you may be wondering --", "start_timestamp": "00:12:31", "end_timestamp": "00:15:10", "start_second": 751, "end_second": 910, "url": "https://www.youtube.com/watch?v=r9LCwI5iErE&t=751s", "title": "The transformative power of classical music | Benjamin Zander", "thumbnail": "https://i.ytimg.com/vi/r9LCwI5iErE/maxresdefault.jpg"} {"video_id": "r9LCwI5iErE", "text": "(Applause) (Applause ends) You may be wondering why I'm clapping. Well, I did this at a school in Boston with about 70 seventh graders, 12-year-olds. I did exactly what I did with you, and I explained the whole thing. At the end, they went crazy, clapping. I was clapping. They were clapping. Finally, I said, \"Why am I clapping?\" And one of them said, \"Because we were listening.\" (Laughter) Think of it. 1,600 people, busy people, involved in all sorts of different things, listening, understanding and being moved", "start_timestamp": "00:15:10", "end_timestamp": "00:15:49", "start_second": 910, "end_second": 949, "url": "https://www.youtube.com/watch?v=r9LCwI5iErE&t=910s", "title": "The transformative power of classical music | Benjamin Zander", "thumbnail": "https://i.ytimg.com/vi/r9LCwI5iErE/maxresdefault.jpg"} {"video_id": "r9LCwI5iErE", "text": "by a piece by Chopin. Now, that is something. Am I sure that every single person followed that, understood it, was moved by it? Of course, I can't be sure. But I'll tell you what happened to me in Ireland during the Troubles, 10 years ago, and I was working with some Catholic and Protestant kids on conflict resolution. And I did this with them -- a risky thing to do, because they were street kids. 
And one of them came to me the next morning and he said, \"You know, I've never listened to classical music in my life,", "start_timestamp": "00:15:49", "end_timestamp": "00:16:19", "start_second": 949, "end_second": 979, "url": "https://www.youtube.com/watch?v=r9LCwI5iErE&t=949s", "title": "The transformative power of classical music | Benjamin Zander", "thumbnail": "https://i.ytimg.com/vi/r9LCwI5iErE/maxresdefault.jpg"} {"video_id": "r9LCwI5iErE", "text": "but when you played that shopping piece ...\" (Laughter) He said, \"My brother was shot last year and I didn't cry for him. But last night, when you played that piece, he was the one I was thinking about. And I felt the tears streaming down my face. And it felt really good to cry for my brother.\" So I made up my mind at that moment that classical music is for everybody. Everybody. Now, how would you walk -- my profession, the music profession doesn't see it that way. They say three percent of the population likes classical music.", "start_timestamp": "00:16:19", "end_timestamp": "00:16:57", "start_second": 979, "end_second": 1017, "url": "https://www.youtube.com/watch?v=r9LCwI5iErE&t=979s", "title": "The transformative power of classical music | Benjamin Zander", "thumbnail": "https://i.ytimg.com/vi/r9LCwI5iErE/maxresdefault.jpg"} {"video_id": "r9LCwI5iErE", "text": "If only we could move it to four percent, our problems would be over. (Laughter) How would you walk? How would you talk? How would you be? If you thought, \"Three percent of the population likes classical music, if only we could move it to four percent.\" How would you walk or talk? How would you be? If you thought, \"Everybody loves classical music -- they just haven't found out about it yet.\" See, these are totally different worlds. Now, I had an amazing experience. 
I was 45 years old, I'd been conducting for 20 years,", "start_timestamp": "00:16:57", "end_timestamp": "00:17:25", "start_second": 1017, "end_second": 1045, "url": "https://www.youtube.com/watch?v=r9LCwI5iErE&t=1017s", "title": "The transformative power of classical music | Benjamin Zander", "thumbnail": "https://i.ytimg.com/vi/r9LCwI5iErE/maxresdefault.jpg"} {"video_id": "r9LCwI5iErE", "text": "and I suddenly had a realization. The conductor of an orchestra doesn't make a sound. My picture appears on the front of the CD -- (Laughter) But the conductor doesn't make a sound. He depends, for his power, on his ability to make other people powerful. And that changed everything for me. It was totally life-changing. People in my orchestra said, \"Ben, what happened?\" That's what happened. I realized my job was to awaken possibility in other people. And of course, I wanted to know whether I was doing that. How do you find out?", "start_timestamp": "00:17:25", "end_timestamp": "00:18:02", "start_second": 1045, "end_second": 1082, "url": "https://www.youtube.com/watch?v=r9LCwI5iErE&t=1045s", "title": "The transformative power of classical music | Benjamin Zander", "thumbnail": "https://i.ytimg.com/vi/r9LCwI5iErE/maxresdefault.jpg"} {"video_id": "r9LCwI5iErE", "text": "You look at their eyes. If their eyes are shining, you know you're doing it. You could light up a village with this guy's eyes. (Laughter) Right. So if the eyes are shining, you know you're doing it. If the eyes are not shining, you get to ask a question. And this is the question: who am I being that my players' eyes are not shining? We can do that with our children, too. Who am I being, that my children's eyes are not shining? That's a totally different world. 
Now, we're all about to end this magical, on-the-mountain week,", "start_timestamp": "00:18:02", "end_timestamp": "00:18:38", "start_second": 1082, "end_second": 1118, "url": "https://www.youtube.com/watch?v=r9LCwI5iErE&t=1082s", "title": "The transformative power of classical music | Benjamin Zander", "thumbnail": "https://i.ytimg.com/vi/r9LCwI5iErE/maxresdefault.jpg"} {"video_id": "r9LCwI5iErE", "text": "we're going back into the world. And I say, it's appropriate for us to ask the question, who are we being as we go back out into the world? And you know, I have a definition of success. For me, it's very simple. It's not about wealth and fame and power. It's about how many shining eyes I have around me. So now, I have one last thought, which is that it really makes a difference what we say -- the words that come out of our mouth. I learned this from a woman who survived Auschwitz, one of the rare survivors. She went to Auschwitz when she was 15 years old.", "start_timestamp": "00:18:38", "end_timestamp": "00:19:16", "start_second": 1118, "end_second": 1156, "url": "https://www.youtube.com/watch?v=r9LCwI5iErE&t=1118s", "title": "The transformative power of classical music | Benjamin Zander", "thumbnail": "https://i.ytimg.com/vi/r9LCwI5iErE/maxresdefault.jpg"} {"video_id": "r9LCwI5iErE", "text": "And ... And her brother was eight, and the parents were lost. And she told me this, she said, \"We were in the train going to Auschwitz, and I looked down and saw my brother's shoes were missing. I said, 'Why are you so stupid, can't you keep your things together for goodness' sake?'\" The way an elder sister might speak to a younger brother. Unfortunately, it was the last thing she ever said to him, because she never saw him again. He did not survive. And so when she came out of Auschwitz, she made a vow. 
She told me this.", "start_timestamp": "00:19:16", "end_timestamp": "00:19:53", "start_second": 1156, "end_second": 1193, "url": "https://www.youtube.com/watch?v=r9LCwI5iErE&t=1156s", "title": "The transformative power of classical music | Benjamin Zander", "thumbnail": "https://i.ytimg.com/vi/r9LCwI5iErE/maxresdefault.jpg"} {"video_id": "IeaD0ZaUJ3Y", "text": "hi I'm Professor Pete Carr I've been at the University of Minnesota for about thirty-eight years I've worked with many many graduate students in class and in my research lab and I find it useful to work with students to teach them how to read a paper this first slide that I want to show you is an outline of the way a typical scientific paper is organized and I think most beginning students instinctively start reading a paper in order as the paper is printed for instance they read the title then they go on to the abstract then", "start_timestamp": "00:00:00", "end_timestamp": "00:01:06", "start_second": 0, "end_second": 66, "url": "https://www.youtube.com/watch?v=IeaD0ZaUJ3Y&t=0s", "title": "How to Read a Paper Efficiently (By Prof. Pete Carr)", "thumbnail": "https://i.ytimg.com/vi/IeaD0ZaUJ3Y/maxresdefault.jpg"} {"video_id": "IeaD0ZaUJ3Y", "text": "they read the introduction and so on and so forth working their way from the beginning to the end of the article don't do this this is not a good use of your time there's a better way to do things which is what I'm going to tell you about today let me jump in here at this point and tell you about a fairly simple algorithm if you will about how to get the most out of a paper with the least effort and I think to do this you have to think of reading a paper as a two-phase process in the first phase you're surveying the paper the article", "start_timestamp": "00:01:06", "end_timestamp": "00:01:47", "start_second": 66, "end_second": 107, "url": "https://www.youtube.com/watch?v=IeaD0ZaUJ3Y&t=66s", "title": "How to Read a Paper Efficiently (By Prof.
Pete Carr)", "thumbnail": "https://i.ytimg.com/vi/IeaD0ZaUJ3Y/maxresdefault.jpg"} {"video_id": "IeaD0ZaUJ3Y", "text": "to see if it's really worth investing a lot of time in and this is the way you keep up with what's going on in the literature you'll probably have some sort of service that provides you with papers based upon keywords which are by and large taken from the abstract of the paper so the first thing to keep in mind is that you're allowed to stop this process at any point when you become uninterested in going further next you will undoubtedly look at the keywords and the title of the paper if these", "start_timestamp": "00:01:47", "end_timestamp": "00:02:34", "start_second": 107, "end_second": 154, "url": "https://www.youtube.com/watch?v=IeaD0ZaUJ3Y&t=107s", "title": "How to Read a Paper Efficiently (By Prof. Pete Carr)", "thumbnail": "https://i.ytimg.com/vi/IeaD0ZaUJ3Y/maxresdefault.jpg"} {"video_id": "IeaD0ZaUJ3Y", "text": "don't interest you at all you stop the next thing to look at really is the abstract it's the most important part of the paper for getting acquainted with the paper but next I think you want to jump to the conclusions you don't read the intermediate steps you don't look at the experimental and the introduction and the results and discussion look at the conclusions because if the conclusions are not relevant to you probably you don't want to go any further so basically at this point you've surveyed the paper and you", "start_timestamp": "00:02:34", "end_timestamp": "00:03:10", "start_second": 154, "end_second": 190, "url": "https://www.youtube.com/watch?v=IeaD0ZaUJ3Y&t=154s", "title": "How to Read a Paper Efficiently (By Prof.
Pete Carr)", "thumbnail": "https://i.ytimg.com/vi/IeaD0ZaUJ3Y/maxresdefault.jpg"} {"video_id": "IeaD0ZaUJ3Y", "text": "know whether or not it's really worth your while to invest any time on it the next thing that I think it's best to do again because it's fairly fast is to take a good look at the tables and the figures and the captions because you can do this quickly and it will tell you the main things that went on when the scientists did their work and again it will help you decide do I really want to dig into this paper or not if that's the case and you want to dig in then the place to start is the introduction and", "start_timestamp": "00:03:10", "end_timestamp": "00:03:50", "start_second": 190, "end_second": 230, "url": "https://www.youtube.com/watch?v=IeaD0ZaUJ3Y&t=190s", "title": "How to Read a Paper Efficiently (By Prof. Pete Carr)", "thumbnail": "https://i.ytimg.com/vi/IeaD0ZaUJ3Y/maxresdefault.jpg"} {"video_id": "IeaD0ZaUJ3Y", "text": "this is where you have to start reading seriously and the introduction will provide you with essential background information that's one of the roles of the introduction another role of the introduction is to let you know why the authors of the paper did the particular study and I think these are important things for you to know before digging in the real heart of a paper is the results and discussion section of the paper here's where you're going to spend most of your time in going through the paper finally at this point you may decide to stop", "start_timestamp": "00:03:50", "end_timestamp": "00:04:31", "start_second": 230, "end_second": 271, "url": "https://www.youtube.com/watch?v=IeaD0ZaUJ3Y&t=230s", "title": "How to Read a Paper Efficiently (By Prof. 
Pete Carr)", "thumbnail": "https://i.ytimg.com/vi/IeaD0ZaUJ3Y/maxresdefault.jpg"} {"video_id": "IeaD0ZaUJ3Y", "text": "you've had enough but if the paper is really extremely relevant to what you are currently working on then it's time to dig very deeply and get into the details of the experimental section of the paper and this is where you really learn what the authors actually did but more importantly it's where you learn how they did things and you may need that level of detail in your own work once you've done reading the paper you can stop however I think it's a really good idea to develop some kind of system", "start_timestamp": "00:04:31", "end_timestamp": "00:05:16", "start_second": 271, "end_second": 316, "url": "https://www.youtube.com/watch?v=IeaD0ZaUJ3Y&t=271s", "title": "How to Read a Paper Efficiently (By Prof. Pete Carr)", "thumbnail": "https://i.ytimg.com/vi/IeaD0ZaUJ3Y/maxresdefault.jpg"} {"video_id": "IeaD0ZaUJ3Y", "text": "where you take some notes on the paper these notes aren't going to serve you any good next week or maybe even next month but down the road maybe when you start to write your own first paper having some notes on these papers that you've read will be very beneficial and will really save you a lot of time it'll tell you which papers you should reread before you start writing which papers you don't need to include as references in your own manuscript because they're not relevant so taking some notes when you can as you finish", "start_timestamp": "00:05:16", "end_timestamp": "00:05:57", "start_second": 316, "end_second": 357, "url": "https://www.youtube.com/watch?v=IeaD0ZaUJ3Y&t=316s", "title": "How to Read a Paper Efficiently (By Prof. 
Pete Carr)", "thumbnail": "https://i.ytimg.com/vi/IeaD0ZaUJ3Y/maxresdefault.jpg"} {"video_id": "IeaD0ZaUJ3Y", "text": "reading a paper is a really good idea and these notes should be in a notebook or in some system and not simply written on the PDF of the paper because you can't collect those notes very easily you want these notes readily accessible for instance on an index card or a bunch of index cards so that you can flip through them quickly and take a look at all the relevant papers there's an old saying I think it's Chinese in origin that the faintest writing is better than the best memory and in the course of my time", "start_timestamp": "00:05:57", "end_timestamp": "00:06:39", "start_second": 357, "end_second": 399, "url": "https://www.youtube.com/watch?v=IeaD0ZaUJ3Y&t=357s", "title": "How to Read a Paper Efficiently (By Prof. Pete Carr)", "thumbnail": "https://i.ytimg.com/vi/IeaD0ZaUJ3Y/maxresdefault.jpg"} {"video_id": "mPFq5KMxKVw", "text": "you've used a pre-trained model to make predictions that gives you great results if you want to classify images into the categories used by the original models but what if you have a new use case and you don't categorize images in exactly the same way as the categories for the pre-trained model for example I might want a model that can tell if a photo was taken in an urban area or a rural area my pre-trained model doesn't classify images into those two specific categories we can build a new model from scratch for this specific purpose but to", "start_timestamp": "00:00:00", "end_timestamp": "00:00:33", "start_second": 0, "end_second": 33, "url": "https://www.youtube.com/watch?v=mPFq5KMxKVw&t=0s", "title": "Transfer Learning | Kaggle", "thumbnail": "https://i.ytimg.com/vi/mPFq5KMxKVw/maxresdefault.jpg"} {"video_id": "mPFq5KMxKVw", "text": "get good results we need thousands of photos with labels for which are urban and which are rural something called transfer learning will give us good results 
with far less data transfer learning takes what a model learns while solving one problem and applies it to a new application remember that early layers of a deep learning model identify simple shapes later layers identify more complex visual patterns and the very last layer makes predictions so most layers from a pre-trained model are useful in new applications because most", "start_timestamp": "00:00:33", "end_timestamp": "00:01:07", "start_second": 33, "end_second": 67, "url": "https://www.youtube.com/watch?v=mPFq5KMxKVw&t=33s", "title": "Transfer Learning | Kaggle", "thumbnail": "https://i.ytimg.com/vi/mPFq5KMxKVw/maxresdefault.jpg"} {"video_id": "mPFq5KMxKVw", "text": "computer vision problems involve similar low-level visual patterns so we'll reuse most of the pre-trained ResNet model and just replace that final layer that was used to make predictions some layers before that in the pre-trained model may identify features like roads buildings windows open fields etc we'll drop in a replacement for the last layer of the ResNet model this new last layer will predict whether an image is rural or urban based on the results of that previous layer let's look at this a little closer here we see that the", "start_timestamp": "00:01:07", "end_timestamp": "00:01:41", "start_second": 67, "end_second": 101, "url": "https://www.youtube.com/watch?v=mPFq5KMxKVw&t=67s", "title": "Transfer Learning | Kaggle", "thumbnail": "https://i.ytimg.com/vi/mPFq5KMxKVw/maxresdefault.jpg"} {"video_id": "mPFq5KMxKVw", "text": "ResNet model has many layers we cut off the last layer the last layer of what's left has information about our photo content stored as a series of numbers in a tensor it should be a one-dimensional tensor which is also called a vector the vector can be shown as a series of dots each dot is called a node the first node represents the first number in the vector the second node represents the second number and so on practical models have far more nodes than we've shown 
here we want to classify the image into two categories urban and rural so", "start_timestamp": "00:01:41", "end_timestamp": "00:02:19", "start_second": 101, "end_second": 139, "url": "https://www.youtube.com/watch?v=mPFq5KMxKVw&t=101s", "title": "Transfer Learning | Kaggle", "thumbnail": "https://i.ytimg.com/vi/mPFq5KMxKVw/maxresdefault.jpg"} {"video_id": "mPFq5KMxKVw", "text": "after the last layer we keep the pre-trained model we add a new layer with two nodes one node to capture how urban the photo is and another to capture how rural it is in theory any node in the last layer before prediction might inform how urban it is so the urban measure can depend on all the nodes in this layer we draw connections to show that possible relationship for the same reason the information at each node might affect our measure of how rural the photo is so our structure looks like this we have a lot of", "start_timestamp": "00:02:19", "end_timestamp": "00:02:53", "start_second": 139, "end_second": 173, "url": "https://www.youtube.com/watch?v=mPFq5KMxKVw&t=139s", "title": "Transfer Learning | Kaggle", "thumbnail": "https://i.ytimg.com/vi/mPFq5KMxKVw/maxresdefault.jpg"} {"video_id": "mPFq5KMxKVw", "text": "connections here and we'll use training data to determine which nodes suggest an image is urban which suggest it is rural and which don't matter that is we'll use data to train the last layer of the model in practice that training data will be photos that are labeled as either being urban or rural we'll cover more mathematical detail on this training step in a later video notice that we allow all features from one layer to influence or be connected with a prediction layer when this happens we describe the last layer as", "start_timestamp": "00:02:53", "end_timestamp": "00:03:26", "start_second": 173, "end_second": 206, "url": "https://www.youtube.com/watch?v=mPFq5KMxKVw&t=173s", "title": "Transfer Learning | Kaggle", "thumbnail": 
"https://i.ytimg.com/vi/mPFq5KMxKVw/maxresdefault.jpg"} {"video_id": "mPFq5KMxKVw", "text": "being a dense layer one other note when classifying something into only two categories we could get by with only one node at the output in this case a prediction of how urban a photo is would also be a measure of how rural it is if a photo is 80 percent likely to be urban it's twenty percent likely to be rural but we've kept two separate nodes at the output layer using a separate node for each possible category in the output layer will help us transition into cases when we want to predict with more than two categories in both the current case", "start_timestamp": "00:03:26", "end_timestamp": "00:04:00", "start_second": 206, "end_second": 240, "url": "https://www.youtube.com/watch?v=mPFq5KMxKVw&t=206s", "title": "Transfer Learning | Kaggle", "thumbnail": "https://i.ytimg.com/vi/mPFq5KMxKVw/maxresdefault.jpg"} {"video_id": "mPFq5KMxKVw", "text": "and the case with more categories we'll get a score for each category and then apply a function called softmax the softmax function will transform the scores to probabilities so they'll all be positive and will sum to one we could then work with those probabilities however we want let's see it in code we'll introduce two new classes from Keras first is sequential this is just saying we're going to have a model that's a sequence of layers one after the other there are some exotic models that don't fit into this structure and", "start_timestamp": "00:04:00", "end_timestamp": "00:04:36", "start_second": 240, "end_second": 276, "url": "https://www.youtube.com/watch?v=mPFq5KMxKVw&t=240s", "title": "Transfer Learning | Kaggle", "thumbnail": "https://i.ytimg.com/vi/mPFq5KMxKVw/maxresdefault.jpg"} {"video_id": "mPFq5KMxKVw", "text": "we'll get to other types of models later for now all models you would want to build are sequential we'll also want to add a dense layer so we import that in this application we classify photos into two 
categories or classes urban and rural we'll save that as num classes now we build the model we set up a sequential model that we can add layers to first we add all of a pre-trained ResNet 50 model we've written include top equals false this is how we specify that we want to exclude the layer that makes predictions", "start_timestamp": "00:04:36", "end_timestamp": "00:05:15", "start_second": 276, "end_second": 315, "url": "https://www.youtube.com/watch?v=mPFq5KMxKVw&t=276s", "title": "Transfer Learning | Kaggle", "thumbnail": "https://i.ytimg.com/vi/mPFq5KMxKVw/maxresdefault.jpg"} {"video_id": "mPFq5KMxKVw", "text": "into the thousands of categories used in the imagenet competition we'll also use a file that doesn't include the weights for that last layer we hand this argument pooling equals average that says that if we had extra channels in our tensor at the end of this step we want to collapse them to a 1d tensor by taking an average across channels we'll come back to intricacies of pooling in a later lesson but now we have a pre-trained model that creates the layer you saw in the graphic we'll add a dense layer to make predictions", "start_timestamp": "00:05:15", "end_timestamp": "00:05:49", "start_second": 315, "end_second": 349, "url": "https://www.youtube.com/watch?v=mPFq5KMxKVw&t=315s", "title": "Transfer Learning | Kaggle", "thumbnail": "https://i.ytimg.com/vi/mPFq5KMxKVw/maxresdefault.jpg"} {"video_id": "mPFq5KMxKVw", "text": "we specify the number of nodes in this layer which in this case is the number of classes like we talked about earlier then we say we want to apply the softmax function to turn it into probabilities we'll tell TensorFlow not to train the first layer which is the ResNet 50 model because that's the model that was already pre-trained with the imagenet data now we'll get to a more complex line of code in a compile command I'll describe the broad concept here and we'll give a more complete explanation of the underlying theory in a couple", 
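The dense output layer and softmax step described in the transcript above can be sketched in plain Python; the feature values and weights below are made-up toy numbers for illustration, not values from the lesson:

```python
import math

def dense_softmax(features, weights, biases):
    """Compute class scores from a feature vector via a dense layer,
    then turn the scores into probabilities with softmax."""
    # Dense layer: every feature node connects to every output node.
    scores = [
        sum(w * f for w, f in zip(row, features)) + b
        for row, b in zip(weights, biases)
    ]
    # Softmax: exponentiate and normalize so all outputs are positive
    # and sum to one (subtracting the max for numerical stability).
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Toy example: 4 features feeding 2 output nodes ("urban", "rural").
features = [0.2, 1.5, -0.3, 0.8]
weights = [[0.5, 1.0, -0.2, 0.1],   # hypothetical "urban" weights
           [-0.4, 0.3, 0.9, 0.2]]   # hypothetical "rural" weights
probs = dense_softmax(features, weights, [0.0, 0.0])
print(probs)  # two positive numbers summing to 1
```

With only two categories one output node would also suffice, as the transcript notes, but keeping one node per class is what generalizes to more than two categories.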
"start_timestamp": "00:05:49", "end_timestamp": "00:06:24", "start_second": 349, "end_second": 384, "url": "https://www.youtube.com/watch?v=mPFq5KMxKVw&t=349s", "title": "Transfer Learning | Kaggle", "thumbnail": "https://i.ytimg.com/vi/mPFq5KMxKVw/maxresdefault.jpg"} {"video_id": "mPFq5KMxKVw", "text": "videos the compile command tells TensorFlow how to update the relationships in the dense connections when we're doing the training with our data we have a measure of loss or inaccuracy we want to minimize we specify it as categorical cross entropy in case you are familiar with log loss this is another term for the same thing we use an algorithm called stochastic gradient descent to minimize the categorical cross entropy loss function again we'll cover this in our theory video we ask it to report the accuracy metric that is what fraction of", "start_timestamp": "00:06:24", "end_timestamp": "00:07:00", "start_second": 384, "end_second": 420, "url": "https://www.youtube.com/watch?v=mPFq5KMxKVw&t=384s", "title": "Transfer Learning | Kaggle", "thumbnail": "https://i.ytimg.com/vi/mPFq5KMxKVw/maxresdefault.jpg"} {"video_id": "mPFq5KMxKVw", "text": "predictions were correct this is easier to interpret than categorical cross entropy scores so it's nice to print it out and see how the model is doing our raw data is broken into a directory of training data and a directory of validation data within each of those we have one subdirectory for the urban pictures and another for the rural pictures Keras provides a great tool for working with images grouped into directories by their label this is the image data generator there are two steps to using image data generator first", "start_timestamp": "00:07:00", "end_timestamp": "00:07:35", "start_second": 420, "end_second": 455, "url": "https://www.youtube.com/watch?v=mPFq5KMxKVw&t=420s", "title": "Transfer Learning | Kaggle", "thumbnail": "https://i.ytimg.com/vi/mPFq5KMxKVw/maxresdefault.jpg"} {"video_id": "mPFq5KMxKVw", "text": 
"we'll create the generator object in the abstract we'll tell it that we want to apply the ResNet pre-processing function every time it reads an image you used this function before to be consistent with how the ResNet model is created then we use the flow from directory command we tell it what directory that data is in what size image we want how many images to read in at a time and we tell it we're classifying data into different categories we'll add batch size to our list of information covered in the upcoming theory video for now assume", "start_timestamp": "00:07:35", "end_timestamp": "00:08:11", "start_second": 455, "end_second": 491, "url": "https://www.youtube.com/watch?v=mPFq5KMxKVw&t=455s", "title": "Transfer Learning | Kaggle", "thumbnail": "https://i.ytimg.com/vi/mPFq5KMxKVw/maxresdefault.jpg"} {"video_id": "mPFq5KMxKVw", "text": "you want categorical class mode almost every time we do the same thing to set up a way to read the validation data that creates a validation generator the image data generator is especially valuable when working with large data sets because we don't need to hold the whole data set in memory at once but it's nice here even with a small data set now we fit the model we tell it the training data comes from train generator we said to read 12 images at a time and we have 72 images so we'll go through 6 steps of 12 images then we say that", "start_timestamp": "00:08:11", "end_timestamp": "00:08:50", "start_second": 491, "end_second": 530, "url": "https://www.youtube.com/watch?v=mPFq5KMxKVw&t=491s", "title": "Transfer Learning | Kaggle", "thumbnail": "https://i.ytimg.com/vi/mPFq5KMxKVw/maxresdefault.jpg"} {"video_id": "mPFq5KMxKVw", "text": "validation data comes from validation generator validation generator reads 20 images at a time and we have 20 images of validation data so we can use just one step as the model training is running we'll see progress updates showing our loss function and the accuracy it updates the connections 
in the dense layer that is the model's impression of what makes an urban photo and what makes a rural photo and it makes those updates in six steps when it's done it got 79 percent of the training data right then it examines the validation data it gets", "start_timestamp": "00:08:50", "end_timestamp": "00:09:25", "start_second": 530, "end_second": 565, "url": "https://www.youtube.com/watch?v=mPFq5KMxKVw&t=530s", "title": "Transfer Learning | Kaggle", "thumbnail": "https://i.ytimg.com/vi/mPFq5KMxKVw/maxresdefault.jpg"} {"video_id": "mPFq5KMxKVw", "text": "95 percent of those right 19 out of 20 we trained on 72 photos you could easily take that many photos on your phone upload them to kaggle and build a very accurate model to distinguish almost anything you care about I think that's incredibly cool this may feel like a lot of new ideas for you to take in here's our plan we have one exercise for you to build a model yourself using transfer learning after you've done transfer learning hands-on I'll show you a simple but powerful trick called data augmentation data augmentation really", "start_timestamp": "00:09:25", "end_timestamp": "00:10:01", "start_second": 565, "end_second": 601, "url": "https://www.youtube.com/watch?v=mPFq5KMxKVw&t=565s", "title": "Transfer Learning | Kaggle", "thumbnail": "https://i.ytimg.com/vi/mPFq5KMxKVw/maxresdefault.jpg"} {"video_id": "tOLhT3LNjho", "text": "[Music] Tocantins the way human whip [Music] [Music] with the power of code we are able to create technology that solves some of the most fundamental issues with humans in the first place we can cure diseases we can solve world problems we can do so much that we never could without the power of technology feels like home in terms of conferencing I like the format a lot it's very the audience is more technical than usual so I definitely appreciate this conference as a sort of contrast to what usually takes place in many venues", "start_timestamp": "00:00:00", "end_timestamp": "00:01:25", 
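The last-layer training the transfer learning segment describes, a new dense layer on top of frozen pre-trained features, trained with categorical cross entropy and stochastic gradient descent, can be sketched as a toy loop; the 2-D "features", labels, and learning rate here are all hypothetical stand-ins for the output of the pre-trained ResNet layers, not the lesson's actual data:

```python
import math

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    t = sum(exps)
    return [e / t for e in exps]

# Frozen features (stand-ins for the pre-trained model's output) and labels:
# label 0 = urban, 1 = rural. All values are made up for illustration.
data = [([1.0, 0.2], 0), ([0.9, 0.1], 0), ([0.1, 1.1], 1), ([0.2, 0.9], 1)]
weights = [[0.0, 0.0], [0.0, 0.0]]  # one row of dense weights per output node
lr = 0.5

def loss():
    # average categorical cross entropy over the toy data set
    total = 0.0
    for x, y in data:
        p = softmax([sum(w * f for w, f in zip(row, x)) for row in weights])
        total -= math.log(p[y])
    return total / len(data)

before = loss()
for _ in range(100):  # stochastic gradient descent on the dense layer only
    for x, y in data:
        p = softmax([sum(w * f for w, f in zip(row, x)) for row in weights])
        for k in range(2):
            # gradient of softmax + cross entropy wrt the score of node k
            err = p[k] - (1.0 if k == y else 0.0)
            for j in range(2):
                weights[k][j] -= lr * err * x[j]
after = loss()
print(before, after)  # the loss drops as the last layer learns
```

Only `weights` changes here, mirroring how the frozen ResNet layers are excluded from training while the new dense layer adapts to the urban/rural data.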
"start_second": 0, "end_second": 85, "url": "https://www.youtube.com/watch?v=tOLhT3LNjho&t=0s", "title": "WeAreDevelopers Congress Vienna 2019 Aftermovie", "thumbnail": "https://i.ytimg.com/vi/tOLhT3LNjho/maxresdefault.jpg"} {"video_id": "tOLhT3LNjho", "text": "[Music] I'm a programmer myself you know so basically talking to other developers is the next experience just to see you know how other people are dealing with the same maybe constraints that you are dealing to [Music] but it was very amazing I really like the location so it's really really cool to be at Hope work [Music] [Applause] both changes the way people think and people find in Kord impacts the way that different social challenges can be solved [Music] [Applause] and the cause changes the way we consume", "start_timestamp": "00:01:25", "end_timestamp": "00:02:50", "start_second": 85, "end_second": 170, "url": "https://www.youtube.com/watch?v=tOLhT3LNjho&t=85s", "title": "WeAreDevelopers Congress Vienna 2019 Aftermovie", "thumbnail": "https://i.ytimg.com/vi/tOLhT3LNjho/maxresdefault.jpg"} {"video_id": "yhItocvAaq0", "text": "supervised learning updates the parameters of a neural network to match predicted class labels with the ground truth labels the construction of these ground truth class vectors is typically done with one hot encoding but other techniques such as label smoothing and knowledge distillation have been developed to overcome the limitations of one hot encoded ground truth class labels meta pseudo labels uses the meta learning framework to dynamically adapt the target distribution or ground truth class labels throughout training", "start_timestamp": "00:00:00", "end_timestamp": "00:00:27", "start_second": 0, "end_second": 27, "url": "https://www.youtube.com/watch?v=yhItocvAaq0&t=0s", "title": "Meta Pseudo Labels", "thumbnail": "https://i.ytimg.com/vi/yhItocvAaq0/maxresdefault.jpg"} {"video_id": "yhItocvAaq0", "text": "of a student classification network to maximize its resulting 
validation set accuracy this is done by training the classification model or student network on pseudo labels labeled by a teacher network the teacher network is then updated to maximize the classification model's accuracy on the validation set after it trains and updates itself through back propagation supervised learning on the pseudo labels from the teacher network this involves an interesting gradient through a gradient operation to train the teacher network", "start_timestamp": "00:00:27", "end_timestamp": "00:00:53", "start_second": 27, "end_second": 53, "url": "https://www.youtube.com/watch?v=yhItocvAaq0&t=27s", "title": "Meta Pseudo Labels", "thumbnail": "https://i.ytimg.com/vi/yhItocvAaq0/maxresdefault.jpg"} {"video_id": "yhItocvAaq0", "text": "meta pseudo labels achieves 86.9% top 1 imagenet accuracy through semi-supervised learning with additional data and also impressive performances in the limited data settings the authors also introduced a reduced MPL framework to avoid the memory bottleneck of having two high capacity models in memory for the meta learning framework this video will explain meta pseudo labels from researchers at Google ai meta pseudo labels is a new way to use meta learning", "start_timestamp": "00:00:53", "end_timestamp": "00:01:26", "start_second": 53, "end_second": 86, "url": "https://www.youtube.com/watch?v=yhItocvAaq0&t=53s", "title": "Meta Pseudo Labels", "thumbnail": "https://i.ytimg.com/vi/yhItocvAaq0/maxresdefault.jpg"} {"video_id": "yhItocvAaq0", "text": "to adapt the ground truth class labels during training by using a teacher network to label data and then a student network that learns from those labels a quick overview of the meta pseudo labels algorithm is that a teacher model is trained along with a student model to set the student's target distributions and adapt to the student's learning state so typically these target distributions are these one 
hot encoded vectors where you might have like zero cat one dog and then zero for all of the other classes in the case of say CIFAR-10 so the idea", "start_timestamp": "00:01:26", "end_timestamp": "00:01:54", "start_second": 86, "end_second": 114, "url": "https://www.youtube.com/watch?v=yhItocvAaq0&t=86s", "title": "Meta Pseudo Labels", "thumbnail": "https://i.ytimg.com/vi/yhItocvAaq0/maxresdefault.jpg"} {"video_id": "yhItocvAaq0", "text": "here is to have the teacher network produce the way of labeling the data points to say zero point zero three cat zero point seven dog zero point zero four horse these kind of distributions are going to be assigned by the teacher network rather than heuristically encoded with something like one hot encoding label smoothing or even the knowledge distillation pipeline with temperature tuning so then the idea is to adapt these target distributions to the student's learning state so the way this pipeline works is that the", "start_timestamp": "00:01:54", "end_timestamp": "00:02:19", "start_second": 114, "end_second": 139, "url": "https://www.youtube.com/watch?v=yhItocvAaq0&t=114s", "title": "Meta Pseudo Labels", "thumbnail": "https://i.ytimg.com/vi/yhItocvAaq0/maxresdefault.jpg"} {"video_id": "yhItocvAaq0", "text": "teacher network parameterised by phi is again taking the same training data set and then producing a pseudo label distribution and then the student network is going to try to fit this label distribution that was produced by the teacher network so it's going to do back propagation using the cross entropy loss function between the predictions from the student network y-prime and then these pseudo labels Q Phi of X so it's going to back prop this and then update the parameters to theta T plus 1 so now these new parameters that have", "start_timestamp": "00:02:19", "end_timestamp": "00:02:44", "start_second": 139, "end_second": 164, "url": "https://www.youtube.com/watch?v=yhItocvAaq0&t=139s", "title": "Meta Pseudo Labels", 
"thumbnail": "https://i.ytimg.com/vi/yhItocvAaq0/maxresdefault.jpg"} {"video_id": "yhItocvAaq0", "text": "been updated by training on the pseudo labels from the teacher network are then going to be evaluated to provide a reward signal for the teacher network by taking those parameters and then having them classify a held-out validation set so the performance on the validation set is the reward signal that goes back through the teacher by taking a gradient through a gradient something we'll get into more later on in the video the idea is that the teacher model is going to be changing this distribution of class labels to maximize", "start_timestamp": "00:02:44", "end_timestamp": "00:03:10", "start_second": 164, "end_second": 190, "url": "https://www.youtube.com/watch?v=yhItocvAaq0&t=164s", "title": "Meta Pseudo Labels", "thumbnail": "https://i.ytimg.com/vi/yhItocvAaq0/maxresdefault.jpg"} {"video_id": "yhItocvAaq0", "text": "the performance of the student network on the held-out validation set these are some examples of the most common target distributions or ground truth class label vectors Y that are used in machine learning the most common of which is one hot encoded vectors this is how datasets like CIFAR-10 are labeled if this is the case of a dog image in the class label corresponding to that image you'd have a one in the position of the dog index and then zero everywhere else for the other class labels so say one dog zero cat zero", "start_timestamp": "00:03:10", "end_timestamp": "00:03:35", "start_second": 190, "end_second": 215, "url": "https://www.youtube.com/watch?v=yhItocvAaq0&t=190s", "title": "Meta Pseudo Labels", "thumbnail": "https://i.ytimg.com/vi/yhItocvAaq0/maxresdefault.jpg"} {"video_id": "yhItocvAaq0", "text": "truck ship this is how the CIFAR-10 data set is labeled so one problem with labeling data sets with one hot encoded vectors is that the model is going to have these overconfident or over fitted predictions to this 
kind of a class label distribution if it applies any probability density to another class it's gonna have a penalty from the cross entropy loss for doing so so if it sees this dog image and tries to label it as 0.75 dog and 0.2 cat because it's unsure whether it's a cat or a dog it's gonna be penalized for that as", "start_timestamp": "00:03:35", "end_timestamp": "00:04:03", "start_second": 215, "end_second": 243, "url": "https://www.youtube.com/watch?v=yhItocvAaq0&t=215s", "title": "Meta Pseudo Labels", "thumbnail": "https://i.ytimg.com/vi/yhItocvAaq0/maxresdefault.jpg"} {"video_id": "yhItocvAaq0", "text": "if the cat is just as different from a dog as a truck or a ship or a frog or these other CIFAR-10 classes so one solution to the overconfident predictions or overfitting on the one hot encoded vectors is label smoothing so label smoothing is where you apply this uniform weight to all the other class labels in the class label vector and then another solution to assigning these target distributions is knowledge distillation knowledge distillation in the form of self-training with noisy student currently has the state of the art for imagenet", "start_timestamp": "00:04:03", "end_timestamp": "00:04:29", "start_second": 243, "end_second": 269, "url": "https://www.youtube.com/watch?v=yhItocvAaq0&t=243s", "title": "Meta Pseudo Labels", "thumbnail": "https://i.ytimg.com/vi/yhItocvAaq0/maxresdefault.jpg"} {"video_id": "yhItocvAaq0", "text": "classification and it's also a really powerful technique for model compression such as say DistilBERT where you have these high capacity models then you use the high capacity model to produce a new class label distribution that is better than the one hot one for training the student network and then the student network learns a combination of this distillation class distribution as well as the ground truth one hot encoded vectors so these are some examples of different target distributions that have been explored in", 
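The one-hot and label-smoothing targets discussed in the transcript, and the cross-entropy penalty that treats every wrong class as equally wrong, can be sketched as follows (the class indices and the 0.1 smoothing weight are illustrative choices, not the real CIFAR-10 ordering):

```python
import math

def one_hot(index, num_classes):
    return [1.0 if i == index else 0.0 for i in range(num_classes)]

def label_smooth(index, num_classes, eps=0.1):
    # keep 1 - eps on the true class, spread eps uniformly over the others
    off = eps / (num_classes - 1)
    return [1.0 - eps if i == index else off for i in range(num_classes)]

def cross_entropy(target, predicted):
    return -sum(t * math.log(p) for t, p in zip(target, predicted) if t > 0)

# 10 classes as in CIFAR-10; say index 5 is "dog", 3 is "cat", 9 is "truck".
hard = one_hot(5, 10)
smooth = label_smooth(5, 10)
print(sum(smooth))  # the smoothed target still sums to 1

# Two predictions that both give the dog 0.75 but put the rest on a cat
# versus a truck (tiny epsilon elsewhere to keep log well-defined):
pred_cat = [0.25 if i == 3 else 0.75 if i == 5 else 1e-9 for i in range(10)]
pred_truck = [0.25 if i == 9 else 0.75 if i == 5 else 1e-9 for i in range(10)]

# Under the one-hot target both get exactly the same penalty: the loss
# treats a cat as just as different from a dog as a truck is.
print(cross_entropy(hard, pred_cat), cross_entropy(hard, pred_truck))
```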
"start_timestamp": "00:04:29", "end_timestamp": "00:04:55", "start_second": 269, "end_second": 295, "url": "https://www.youtube.com/watch?v=yhItocvAaq0&t=269s", "title": "Meta Pseudo Labels", "thumbnail": "https://i.ytimg.com/vi/yhItocvAaq0/maxresdefault.jpg"} {"video_id": "yhItocvAaq0", "text": "machine learning and are commonly used to prevent overfitting and then to you know train these models with supervised learning so a quote from the paper is that from the success of these heuristic tricks it is clear that how to construct the target distribution plays an important role in the algorithm design and a proper method could lead to a sizeable gain motivating this exploration for meta learning the target distributions during training so again we have this problem of what should be this target distribution should we have", "start_timestamp": "00:04:55", "end_timestamp": "00:05:19", "start_second": 295, "end_second": 319, "url": "https://www.youtube.com/watch?v=yhItocvAaq0&t=295s", "title": "Meta Pseudo Labels", "thumbnail": "https://i.ytimg.com/vi/yhItocvAaq0/maxresdefault.jpg"} {"video_id": "yhItocvAaq0", "text": "one hot encoded class label vectors should we smooth out the labels by putting uniform weight on the other class labels or should we use this teacher-to-student pipeline and knowledge distillation but the solution explored in this paper is to meta learn the pseudo label distribution or the targets that the student network is trying to fit during training there are two phases of learning in the meta pseudo labels framework in phase one the student learns from the teacher the parameters theta of the student", "start_timestamp": "00:05:19", "end_timestamp": "00:05:40", "start_second": 319, "end_second": 340, "url": "https://www.youtube.com/watch?v=yhItocvAaq0&t=319s", "title": "Meta Pseudo Labels", "thumbnail": "https://i.ytimg.com/vi/yhItocvAaq0/maxresdefault.jpg"} {"video_id": "yhItocvAaq0", "text": "classification network are updated by taking 
the cross entropy loss between the predictions p sub theta of X and then the pseudo label distribution that is produced when you pass these X data points through the teacher Network parameterised by Phi denoted Q sub Phi of X to denote this new pseudo label distribution that comes out of the teacher phase 2 is the teacher learns from the student's validation loss this is a much more complex way of structuring this loss this gradient through a gradient meta learning idea of", "start_timestamp": "00:05:40", "end_timestamp": "00:06:07", "start_second": 340, "end_second": 367, "url": "https://www.youtube.com/watch?v=yhItocvAaq0&t=340s", "title": "Meta Pseudo Labels", "thumbnail": "https://i.ytimg.com/vi/yhItocvAaq0/maxresdefault.jpg"} {"video_id": "yhItocvAaq0", "text": "training the teacher network so the teacher network is evaluated on the validation set performance of the student network after it updated its parameters to theta t plus 1 so the way that this reward is propagated back into the teacher's parameters Phi is by taking the derivative of how much each of these Phi parameters and their labels and data points impacts the gradient of the student network to change it in the direction of this validation loss it's difficult to completely derive this idea of gradient", "start_timestamp": "00:06:07", "end_timestamp": "00:06:34", "start_second": 367, "end_second": 394, "url": "https://www.youtube.com/watch?v=yhItocvAaq0&t=367s", "title": "Meta Pseudo Labels", "thumbnail": "https://i.ytimg.com/vi/yhItocvAaq0/maxresdefault.jpg"} {"video_id": "yhItocvAaq0", "text": "through a gradient if you're using frameworks like TensorFlow or PyTorch you can utilize automatic differentiation to automatically implement this for you and you don't have to exactly know the math of how say this parameter from x1 input to the hidden state a in the teacher Network gets this loss signal from the student network that is then updated with the gradient of this new label data set 
and then moved in the direction of the validation loss but the idea the high-level idea and I think maybe visualizing these two networks even", "start_timestamp": "00:06:34", "end_timestamp": "00:06:59", "start_second": 394, "end_second": 419, "url": "https://www.youtube.com/watch?v=yhItocvAaq0&t=394s", "title": "Meta Pseudo Labels", "thumbnail": "https://i.ytimg.com/vi/yhItocvAaq0/maxresdefault.jpg"} {"video_id": "yhItocvAaq0", "text": "though in practice the teacher network is gonna be a multi-layer perceptron like this but the student network is one of these high capacity classification models like ResNet Wide ResNet or EfficientNet so the idea is you want to say update this weight from the input to a hidden state in the teacher network and you're gonna try to take the partial derivative with respect to this weight in the teacher network with respect to the validation loss on the student network at theta t plus 1 on that validation set so this is trained with", "start_timestamp": "00:06:59", "end_timestamp": "00:07:26", "start_second": 419, "end_second": 446, "url": "https://www.youtube.com/watch?v=yhItocvAaq0&t=419s", "title": "Meta Pseudo Labels", "thumbnail": "https://i.ytimg.com/vi/yhItocvAaq0/maxresdefault.jpg"} {"video_id": "yhItocvAaq0", "text": "policy gradients on this reward signal because this isn't like Y prime minus y there's no ground truth with respect to the validation loss that the student achieves so you're just taking that validation loss and treating that as like a reward in say Pac-Man or Atari and using policy gradients to update the parameters but basically say like if this parameter contributed a lot to the output and then you get a high reward do more of this like increase the weight from this connection to do more of it to get more of this reward that's", "start_timestamp": "00:07:26", "end_timestamp": "00:07:57", "start_second": 446, "end_second": 477, "url": "https://www.youtube.com/watch?v=yhItocvAaq0&t=446s", 
"title": "Meta Pseudo Labels", "thumbnail": "https://i.ytimg.com/vi/yhItocvAaq0/maxresdefault.jpg"} {"video_id": "yhItocvAaq0", "text": "kind of a high-level idea of policy gradients but the idea is that in order to get this derivative to find out like how much of this weight contributed to the validation loss you have to take a gradient through a gradient which is a pretty complex idea that's maybe better explained in the next equation from the paper hopefully this equation from the paper will further explain the idea of taking a gradient through a gradient to update the parameters phi from the teacher network with respect to the parameters theta T plus 1 that are evaluated on the", "start_timestamp": "00:07:57", "end_timestamp": "00:08:21", "start_second": 477, "end_second": 501, "url": "https://www.youtube.com/watch?v=yhItocvAaq0&t=477s", "title": "Meta Pseudo Labels", "thumbnail": "https://i.ytimg.com/vi/yhItocvAaq0/maxresdefault.jpg"} {"video_id": "yhItocvAaq0", "text": "validation set so the idea is we're taking a gradient through a gradient so the parameters theta T plus 1 that are responsible for this validation loss reward signal that we're trying to update the teacher network with were updated by taking the parameters theta T and then updating them with a gradient so we want to know how much each parameter in phi is responsible for the gradient that updates this so you're taking the derivative with respect to phi of this validation loss where phi you know contributes to this validation", "start_timestamp": "00:08:21", "end_timestamp": "00:08:48", "start_second": 501, "end_second": 528, "url": "https://www.youtube.com/watch?v=yhItocvAaq0&t=501s", "title": "Meta Pseudo Labels", "thumbnail": "https://i.ytimg.com/vi/yhItocvAaq0/maxresdefault.jpg"} {"video_id": "yhItocvAaq0", "text": "loss through the gradient so you have to find out you know how much each of these parameters in the phi network you know as in something like this how much does this parameter in the phi
network or this parameter contribute to the gradient and then the gradient is what updates the parameters and gives you this new validation loss so you're taking the partial derivative of phi with respect to this gradient through a gradient idea which is a little complicated but you know really an interesting idea with the meta learning and this meta pseudo labels", "start_timestamp": "00:08:48", "end_timestamp": "00:09:13", "start_second": 528, "end_second": 553, "url": "https://www.youtube.com/watch?v=yhItocvAaq0&t=528s", "title": "Meta Pseudo Labels", "thumbnail": "https://i.ytimg.com/vi/yhItocvAaq0/maxresdefault.jpg"} {"video_id": "yhItocvAaq0", "text": "algorithm the next idea introduced in the meta pseudo labels is to avoid this memory requirement of having two high capacity classification models in memory because say you have an EfficientNet as the teacher network as well as the student network now you have to keep both these models in memory especially when you're doing the gradient update for the teacher Network the idea to avoid that is to first train a large teacher network on the labeled dataset and then use it to produce this new distribution on the unlabeled data and then you use a", "start_timestamp": "00:09:13", "end_timestamp": "00:09:38", "start_second": 553, "end_second": 578, "url": "https://www.youtube.com/watch?v=yhItocvAaq0&t=553s", "title": "Meta Pseudo Labels", "thumbnail": "https://i.ytimg.com/vi/yhItocvAaq0/maxresdefault.jpg"} {"video_id": "yhItocvAaq0", "text": "smaller teacher network so say you originally trained the teacher network with like an EfficientNet and then you move it to a multi-layer perceptron because all it's doing now is adjusting the original distribution that was produced by this high capacity model so this high capacity model is already producing a pretty useful target distribution as in knowledge distillation and then the smaller teacher has enough capacity to be adapting it during training as in the meta learning
framework the authors are going to test the performance of meta", "start_timestamp": "00:09:38", "end_timestamp": "00:10:03", "start_second": 578, "end_second": 603, "url": "https://www.youtube.com/watch?v=yhItocvAaq0&t=578s", "title": "Meta Pseudo Labels", "thumbnail": "https://i.ytimg.com/vi/yhItocvAaq0/maxresdefault.jpg"} {"video_id": "yhItocvAaq0", "text": "pseudo labels in the limited data setting as well as the semi supervised learning setting semi-supervised learning is responsible for most of the ImageNet state of the arts like self training with noisy student where they have the labeled ImageNet and they also leverage this unlabeled JFT-300M data set to get more performance out of the model also the billion-scale weakly semi-supervised learning framework from Facebook uses the labeled ImageNet data set and the unlabeled Instagram images that are weakly labeled with their", "start_timestamp": "00:10:03", "end_timestamp": "00:10:27", "start_second": 603, "end_second": 627, "url": "https://www.youtube.com/watch?v=yhItocvAaq0&t=603s", "title": "Meta Pseudo Labels", "thumbnail": "https://i.ytimg.com/vi/yhItocvAaq0/maxresdefault.jpg"} {"video_id": "yhItocvAaq0", "text": "hashtags to take advantage of the semi-supervised learning framework which is probably going to be the paradigm that leads forward since it's so easy to get this unlabeled data compared to labeled data so they're experimenting with meta pseudo labels on the EfficientNet architecture for the student network on the full CIFAR-10 ImageNet and Street View House Numbers data sets plus extra unlabeled data so in the case of CIFAR-10 this is Tiny Images in ImageNet it's YFCC100M in Street View House Numbers there's an additional like", "start_timestamp": "00:10:27", "end_timestamp": "00:10:52", "start_second": 627, "end_second": 652, "url": "https://www.youtube.com/watch?v=yhItocvAaq0&t=627s", "title": "Meta Pseudo Labels", "thumbnail": 
"https://i.ytimg.com/vi/yhItocvAaq0/maxresdefault.jpg"} {"video_id": "yhItocvAaq0", "text": "500,000 data points that come with the data set to use optionally if you want to test out these kinds of algorithms so they achieve 98.6% CIFAR-10 accuracy and then 6.9% top-1 on ImageNet then here are some other papers to check out if you're interested in semi-supervised learning that have also come out recently and are really successful in this kind of space these are the results of meta pseudo labels in the semi-supervised learning framework compared with supervised learning and then the self training with", "start_timestamp": "00:10:52", "end_timestamp": "00:11:15", "start_second": 652, "end_second": 675, "url": "https://www.youtube.com/watch?v=yhItocvAaq0&t=652s", "title": "Meta Pseudo Labels", "thumbnail": "https://i.ytimg.com/vi/yhItocvAaq0/maxresdefault.jpg"} {"video_id": "yhItocvAaq0", "text": "noisy student pipeline you see gains in the CIFAR-10 dataset small gains in the Street View House Numbers and then big gains on the ImageNet dataset one reason that the authors point towards these small gains for the Street View House Numbers is that the extra unlabeled data in Street View House Numbers is in distribution so there is this distinction between out of distribution data and in distribution data so the case of ImageNet where you're trying to take in this new data from this YFCC100M data you", "start_timestamp": "00:11:15", "end_timestamp": "00:11:39", "start_second": 675, "end_second": 699, "url": "https://www.youtube.com/watch?v=yhItocvAaq0&t=675s", "title": "Meta Pseudo Labels", "thumbnail": "https://i.ytimg.com/vi/yhItocvAaq0/maxresdefault.jpg"} {"video_id": "yhItocvAaq0", "text": "would call that out of distribution data because it's not like the ImageNet data whereas the Street View House Numbers that extra data is like really the same exact data that the training set has in terms of this like kind of underlying distribution
idea so the idea here is that the meta pseudo labels this adaptive adjustment of changing the labels during training is more crucial when the extra unlabeled data is more out of distribution so if you're dealing with a computer vision problem and you're curious whether this algorithm is", "start_timestamp": "00:11:39", "end_timestamp": "00:12:06", "start_second": 699, "end_second": 726, "url": "https://www.youtube.com/watch?v=yhItocvAaq0&t=699s", "title": "Meta Pseudo Labels", "thumbnail": "https://i.ytimg.com/vi/yhItocvAaq0/maxresdefault.jpg"} {"video_id": "yhItocvAaq0", "text": "gonna work for your problem it's interesting to ask you know the unlabeled data you have how out of distribution is it how noisy is it and this MPL meta pseudo labels framework is likely to have a bigger gain if this is noisier data compared to in distribution data the authors also test the meta pseudo labels algorithm in the limited data setting where you have say only 4,000 labeled images in CIFAR-10 1,000 in Street View House Numbers or 10% of the labeled data in ImageNet in this case you see the performance of meta pseudo", "start_timestamp": "00:12:06", "end_timestamp": "00:12:32", "start_second": 726, "end_second": 752, "url": "https://www.youtube.com/watch?v=yhItocvAaq0&t=726s", "title": "Meta Pseudo Labels", "thumbnail": "https://i.ytimg.com/vi/yhItocvAaq0/maxresdefault.jpg"} {"video_id": "yhItocvAaq0", "text": "labels compared to supervised learning with all the labels SimCLR FixMatch or unsupervised data augmentation which are all other algorithms that are successful at doing this kind of learning with limited data this plot shows how the performance of these models with respect to the different limited data settings changes as you increase the percentage of labelled data points the interesting part about this plot is this top left area where you have the smaller percentage of labeled data and you see a huge gain of the unsupervised", "start_timestamp": "00:12:32", 
"end_timestamp": "00:12:57", "start_second": 752, "end_second": 777, "url": "https://www.youtube.com/watch?v=yhItocvAaq0&t=752s", "title": "Meta Pseudo Labels", "thumbnail": "https://i.ytimg.com/vi/yhItocvAaq0/maxresdefault.jpg"} {"video_id": "yhItocvAaq0", "text": "data augmentation plus the meta pseudo labels algorithm compared to supervised learning or the RandAugment data augmentation algorithm this table shows the gains of meta pseudo labels with respect to the CIFAR-10 limited data setting and in the Street View House Numbers limited data setting you see the performance of different algorithms like just training with supervised learning on limited data using that label smoothing way of putting uniform weights on the other class labels then using supervised learning plus meta pseudo labels and", "start_timestamp": "00:12:57", "end_timestamp": "00:13:22", "start_second": 777, "end_second": 802, "url": "https://www.youtube.com/watch?v=yhItocvAaq0&t=777s", "title": "Meta Pseudo Labels", "thumbnail": "https://i.ytimg.com/vi/yhItocvAaq0/maxresdefault.jpg"} {"video_id": "yhItocvAaq0", "text": "then stacking meta pseudo labels with the RandAugment and unsupervised data augmentation algorithms these are some of the algorithms that are explored to stack on top of meta pseudo labels unsupervised data augmentation enforces predictions Y given X to be consistent with the same X data point after it's gone through a data augmentation so in some cases data augmentation might mean like rotating an image translating it or horizontally flipping it in the case of natural language processing it might mean translating the sentence to German", "start_timestamp": "00:13:22", "end_timestamp": "00:13:48", "start_second": 802, "end_second": 828, "url": "https://www.youtube.com/watch?v=yhItocvAaq0&t=802s", "title": "Meta Pseudo Labels", "thumbnail": "https://i.ytimg.com/vi/yhItocvAaq0/maxresdefault.jpg"} {"video_id": "yhItocvAaq0", "text": "and then translating it back to English which is
known as a back translation these are the different ways of augmenting data points and then forcing this cycle consistency to have similar predictions on the data point before and after it's been augmented another of these algorithms stacked on top of meta pseudo labels is RandAugment RandAugment is this automated data augmentation algorithm similar to like AutoAugment or population-based augmentation but the idea here is to have this simpler parameterization of", "start_timestamp": "00:13:48", "end_timestamp": "00:14:12", "start_second": 828, "end_second": 852, "url": "https://www.youtube.com/watch?v=yhItocvAaq0&t=828s", "title": "Meta Pseudo Labels", "thumbnail": "https://i.ytimg.com/vi/yhItocvAaq0/maxresdefault.jpg"} {"video_id": "yhItocvAaq0", "text": "the space that actually is shown to work better with respect to constructing these automated data augmentation pipelines and the next sort of algorithm to be looking at as well and comparing this with is self-training with noisy student which is this really popular way of doing knowledge distillation which is where you take the pseudo labels and apply a lot of noise with respect to training the student model on that teacher target distribution which is the whole idea of meta pseudo labels is looking at the ways to structure this", "start_timestamp": "00:14:12", "end_timestamp": "00:14:39", "start_second": 852, "end_second": 879, "url": "https://www.youtube.com/watch?v=yhItocvAaq0&t=852s", "title": "Meta Pseudo Labels", "thumbnail": "https://i.ytimg.com/vi/yhItocvAaq0/maxresdefault.jpg"} {"video_id": "yhItocvAaq0", "text": "target distribution to train these neural networks on the authors explore the behavior of the teacher network in the meta pseudo label framework so they've learned the meta pseudo labels teacher fits the validation gradient it's not just label correction and is not only a regularization or preventing overfitting strategy the authors explore this idea that the teacher encourages the student's
training gradient to be similar to the student's validation gradient on this two moons data set because it's really difficult", "start_timestamp": "00:14:39", "end_timestamp": "00:15:02", "start_second": 879, "end_second": 902, "url": "https://www.youtube.com/watch?v=yhItocvAaq0&t=879s", "title": "Meta Pseudo Labels", "thumbnail": "https://i.ytimg.com/vi/yhItocvAaq0/maxresdefault.jpg"} {"video_id": "yhItocvAaq0", "text": "for them to do this kind of cosine similarity between validation and training data gradients with these larger data sets like CIFAR-10 or ImageNet so they show the cosine similarity between the training and validation data's gradient with respect to the training progress and showing that in the meta learning framework the teacher is trying to steer the gradient in the direction of this validation data set's gradient as well the next idea is to explore whether the teacher network is just performing label correction or trying to", "start_timestamp": "00:15:02", "end_timestamp": "00:15:29", "start_second": 902, "end_second": 929, "url": "https://www.youtube.com/watch?v=yhItocvAaq0&t=902s", "title": "Meta Pseudo Labels", "thumbnail": "https://i.ytimg.com/vi/yhItocvAaq0/maxresdefault.jpg"} {"video_id": "yhItocvAaq0", "text": "mimic the behavior of supervised learning with perfect labels so this plot is showing that if it was doing this then these accuracies of the student network should be high as well and have a similar kind of curve as supervised learning so this is showing that the teacher network isn't just trying to fit the training data it's trying to help with this regularization and preventing overfitting this visualization shows the teacher network isn't just preventing overfitting with respect to how it's labeling this data you can see", "start_timestamp": "00:15:29", "end_timestamp": "00:15:54", "start_second": 929, "end_second": 954, "url": "https://www.youtube.com/watch?v=yhItocvAaq0&t=929s", "title": "Meta Pseudo Labels", "thumbnail": 
"https://i.ytimg.com/vi/yhItocvAaq0/maxresdefault.jpg"} {"video_id": "yhItocvAaq0", "text": "interesting behaviors with respect to the developments of how it's labeling each of these data points in the CIFAR-10 data set throughout the training you see that the label with the highest confidence doesn't change it doesn't get steeper between 50 and 75 percent it doesn't do things like flipping labels or dampening distributions in obvious ways that are typically heuristically explored with respect to you know doing regularization in the class label space another interesting algorithm in the space of meta learning these different", "start_timestamp": "00:15:54", "end_timestamp": "00:16:18", "start_second": 954, "end_second": 978, "url": "https://www.youtube.com/watch?v=yhItocvAaq0&t=954s", "title": "Meta Pseudo Labels", "thumbnail": "https://i.ytimg.com/vi/yhItocvAaq0/maxresdefault.jpg"} {"video_id": "yhItocvAaq0", "text": "components that make up the supervised learning problem is generative teaching networks generative teaching networks have a generator that uses this gradient through a gradient in order to generate this data set that is used to train the student network so it could be interesting to see if you could stack this meta pseudo labeling or having this adaptive labeling with the generated data set as well also definitely a confusing gradient or you could maybe stack the generator and the labeler sort of similar to how like", "start_timestamp": "00:16:18", "end_timestamp": "00:16:43", "start_second": 978, "end_second": 1003, "url": "https://www.youtube.com/watch?v=yhItocvAaq0&t=978s", "title": "Meta Pseudo Labels", "thumbnail": "https://i.ytimg.com/vi/yhItocvAaq0/maxresdefault.jpg"} {"video_id": "yhItocvAaq0", "text": "AlphaGo Zero combines the policy and value network into one architecture but it's definitely an interesting kind of space of emerging algorithms these meta learning algorithms that are generating data generating adaptive labels
during the training a lot of different areas where meta learning is being developed and producing these interesting algorithms thanks for watching this explanation of meta pseudo labels a really interesting meta learning algorithm that adapts the target distribution for the student network as", "start_timestamp": "00:16:43", "end_timestamp": "00:17:08", "start_second": 1003, "end_second": 1028, "url": "https://www.youtube.com/watch?v=yhItocvAaq0&t=1003s", "title": "Meta Pseudo Labels", "thumbnail": "https://i.ytimg.com/vi/yhItocvAaq0/maxresdefault.jpg"} {"video_id": "yhItocvAaq0", "text": "it's learning throughout training to maximize the accuracy on a held out validation set this is a really interesting use of meta learning and this gradient through gradient training to have the teacher-student paradigm where the teacher is taking apart this different component of the supervised learning framework particularly in this case the target distribution and then the student is learning with the teacher network in this simultaneous like dual optimization or coevolution framework of training these two models in the meta", "start_timestamp": "00:17:08", "end_timestamp": "00:17:34", "start_second": 1028, "end_second": 1054, "url": "https://www.youtube.com/watch?v=yhItocvAaq0&t=1028s", "title": "Meta Pseudo Labels", "thumbnail": "https://i.ytimg.com/vi/yhItocvAaq0/maxresdefault.jpg"} {"video_id": "pWAc9B2zJS4", "text": "hi Ian thanks a lot for joining us today thank you for inviting me actually I'm glad to be here yeah as one of the world's most visible deep learning researchers I'd like to ask you to share a bit about your personal story so how did you end up doing this work that you now do yeah that sounds great I guess I first became interested in machine learning right before I met you actually I'd been working on neuroscience and my undergraduate adviser Jerry Cain at Stanford encouraged me to take your intro AI class I didn't know that", "start_timestamp": "00:00:00", 
"end_timestamp": "00:00:35", "start_second": 0, "end_second": 35, "url": "https://www.youtube.com/watch?v=pWAc9B2zJS4&t=0s", "title": "Heroes of Deep Learning: Andrew Ng interviews Ian Goodfellow", "thumbnail": "https://i.ytimg.com/vi/pWAc9B2zJS4/maxresdefault.jpg"} {"video_id": "pWAc9B2zJS4", "text": "okay so I had always thought that AI was a good idea but that in practice the main thing I knew that was happening was like game AI where people have a lot of hard-coded rules for non player characters in games to say different scripted lines at different points in time and then when I took your intro AI class and you covered topics like linear regression and the bias and variance decomposition of the error of linear regression I started to realize that this is a real science and I could actually have a scientific", "start_timestamp": "00:00:35", "end_timestamp": "00:01:07", "start_second": 35, "end_second": 67, "url": "https://www.youtube.com/watch?v=pWAc9B2zJS4&t=35s", "title": "Heroes of Deep Learning: Andrew Ng interviews Ian Goodfellow", "thumbnail": "https://i.ytimg.com/vi/pWAc9B2zJS4/maxresdefault.jpg"} {"video_id": "pWAc9B2zJS4", "text": "career in AI rather than neuroscience great and then what happened well I came back and TA'd your course later oh I see great so the really big turning point for me was while I was TAing that course one of the students my friend Ethan Dreyfuss got interested in Geoff Hinton's deep belief net paper and the two of us ended up building one of the first GPU CUDA based machines at Stanford in order to run Boltzmann machines in our spare time over winter break and at that point I started to have a very strong intuition that deep", "start_timestamp": "00:01:07", "end_timestamp": "00:01:48", "start_second": 67, "end_second": 108, "url": "https://www.youtube.com/watch?v=pWAc9B2zJS4&t=67s", "title": "Heroes of Deep Learning: Andrew Ng interviews Ian Goodfellow", "thumbnail": 
"https://i.ytimg.com/vi/pWAc9B2zJS4/maxresdefault.jpg"} {"video_id": "pWAc9B2zJS4", "text": "learning was the way to go in the future that a lot of the other algorithms I was working with like support vector machines didn't seem to have the right asymptotics that you add more training data and they get slower or for the same amount of training data it's hard to make them perform a lot better by changing other settings and at that point I started to focus on deep learning as much as possible indeed and I remember Rajat Raina's very early GPU paper acknowledges you for having done a lot of early work yeah yeah", "start_timestamp": "00:01:48", "end_timestamp": "00:02:22", "start_second": 108, "end_second": 142, "url": "https://www.youtube.com/watch?v=pWAc9B2zJS4&t=108s", "title": "Heroes of Deep Learning: Andrew Ng interviews Ian Goodfellow", "thumbnail": "https://i.ytimg.com/vi/pWAc9B2zJS4/maxresdefault.jpg"} {"video_id": "pWAc9B2zJS4", "text": "it was written using some of the machines that we built the first machine I built was just something that Ethan and I built at Ethan's mom's house that we bought with our own money and then later we had money to build the first two or three for the Stanford lab wow that's great I never knew that story oh and then today one of the you know things that's really taken the deep learning world by storm is your invention of GANs so how did you come up with that I'd been studying generative models for a long time so GANs", "start_timestamp": "00:02:22", "end_timestamp": "00:02:57", "start_second": 142, "end_second": 177, "url": "https://www.youtube.com/watch?v=pWAc9B2zJS4&t=142s", "title": "Heroes of Deep Learning: Andrew Ng interviews Ian Goodfellow", "thumbnail": "https://i.ytimg.com/vi/pWAc9B2zJS4/maxresdefault.jpg"} {"video_id": "pWAc9B2zJS4", "text": "are a way of doing generative modeling where you have a lot of training data and you'd like to learn to produce more examples that resemble the training data but
they're imaginary they've never been seen exactly in that form before there were several other ways of doing generative models that had been popular for several years before I had the idea for GANs and after I'd been working on all those other methods throughout most of my PhD I knew a lot about the advantages and disadvantages of all the other frameworks like Boltzmann machines", "start_timestamp": "00:02:57", "end_timestamp": "00:03:29", "start_second": 177, "end_second": 209, "url": "https://www.youtube.com/watch?v=pWAc9B2zJS4&t=177s", "title": "Heroes of Deep Learning: Andrew Ng interviews Ian Goodfellow", "thumbnail": "https://i.ytimg.com/vi/pWAc9B2zJS4/maxresdefault.jpg"} {"video_id": "pWAc9B2zJS4", "text": "and sparse coding and all the other approaches that had been really popular for years I was looking for some way to avoid all of those disadvantages at the same time and then finally when I was arguing about generative models with my friends in a bar something clicked into place and I started telling them you need to do this this and this and I swear it'll work and my friends didn't believe me that it would work I was supposed to be writing the deep learning text book at the time but I believed strongly enough that it would work that", "start_timestamp": "00:03:29", "end_timestamp": "00:03:57", "start_second": 209, "end_second": 237, "url": "https://www.youtube.com/watch?v=pWAc9B2zJS4&t=209s", "title": "Heroes of Deep Learning: Andrew Ng interviews Ian Goodfellow", "thumbnail": "https://i.ytimg.com/vi/pWAc9B2zJS4/maxresdefault.jpg"} {"video_id": "pWAc9B2zJS4", "text": "I went home and coded it up the same night and it worked so it took you one evening to implement the first version yes I implemented it around midnight after going home from the bar where my friend had his going-away party and the first version of it worked which is very very fortunate I didn't have to search for hyper parameters or anything it was just lucky I read somewhere that
you had a near-death experience and that reaffirmed your commitment to AI tell me that story yeah I was I wasn't actually near death but I", "start_timestamp": "00:03:57", "end_timestamp": "00:04:27", "start_second": 237, "end_second": 267, "url": "https://www.youtube.com/watch?v=pWAc9B2zJS4&t=237s", "title": "Heroes of Deep Learning: Andrew Ng interviews Ian Goodfellow", "thumbnail": "https://i.ytimg.com/vi/pWAc9B2zJS4/maxresdefault.jpg"} {"video_id": "pWAc9B2zJS4", "text": "briefly thought that I was I had a very bad headache and some of the doctors thought I might have a brain hemorrhage and during the time that I was waiting for my MRI results to find out whether I had a brain hemorrhage or not I realized that most of the thoughts I was having were about making sure that other people would eventually try out the research ideas that I had at the time in retrospect they're all pretty silly research ideas but at that point I realized that this was actually one of my highest priorities in life was", "start_timestamp": "00:04:27", "end_timestamp": "00:05:03", "start_second": 267, "end_second": 303, "url": "https://www.youtube.com/watch?v=pWAc9B2zJS4&t=267s", "title": "Heroes of Deep Learning: Andrew Ng interviews Ian Goodfellow", "thumbnail": "https://i.ytimg.com/vi/pWAc9B2zJS4/maxresdefault.jpg"} {"video_id": "pWAc9B2zJS4", "text": "carrying out my machine learning research work yeah that's great that when you thought you might be dying soon you were just thinking how to get the research done yeah that's commitment yes yeah so today you're still at the center of a lot of the activities with GANs the generative adversarial networks so tell me how you see the future of GANs right now GANs are used for a lot of different things like semi-supervised learning generating training data for other models and even simulating scientific", "start_timestamp": "00:05:03", "end_timestamp": "00:05:37", "start_second": 303, "end_second": 337, "url": 
"https://www.youtube.com/watch?v=pWAc9B2zJS4&t=303s", "title": "Heroes of Deep Learning: Andrew Ng interviews Ian Goodfellow", "thumbnail": "https://i.ytimg.com/vi/pWAc9B2zJS4/maxresdefault.jpg"} {"video_id": "pWAc9B2zJS4", "text": "experiments in principle all of these things could be done by other kinds of generative models so I think that GANs are at an important crossroads right now right now they work well some of the time but it can be more of an art than a science to really bring that performance out of them it was more or less how people felt about deep learning in general 10 years ago and back then we were using deep belief networks with Boltzmann machines as the building blocks they were very very finicky over time we switched to things like rectified linear", "start_timestamp": "00:05:37", "end_timestamp": "00:06:09", "start_second": 337, "end_second": 369, "url": "https://www.youtube.com/watch?v=pWAc9B2zJS4&t=337s", "title": "Heroes of Deep Learning: Andrew Ng interviews Ian Goodfellow", "thumbnail": "https://i.ytimg.com/vi/pWAc9B2zJS4/maxresdefault.jpg"} {"video_id": "pWAc9B2zJS4", "text": "units and batch normalization and deep learning became a lot more reliable if we can make GANs become as reliable as deep learning has become then I think we'll keep seeing GANs used in all the places they're used with much greater success if we aren't able to figure out how to stabilize GANs then I think their main contribution to the history of deep learning is that they will have shown people how to do all these tasks that involve generative modeling and eventually we will replace them with other forms of generative models so I", "start_timestamp": "00:06:09", "end_timestamp": "00:06:41", "start_second": 369, "end_second": 401, "url": "https://www.youtube.com/watch?v=pWAc9B2zJS4&t=369s", "title": "Heroes of Deep Learning: Andrew Ng interviews Ian Goodfellow", "thumbnail": "https://i.ytimg.com/vi/pWAc9B2zJS4/maxresdefault.jpg"} {"video_id": "pWAc9B2zJS4", "text": 
"spend maybe about 40% of my time right now working on stabilizing GANs that's cool oh and so just as a lot of people that joined deep learning about ten years ago such as yourself ended up being pioneers maybe the people that join GANs today if it works out could end up being the early pioneers yeah a lot of people already are early pioneers of GANs and I think if you wanted to give any kind of history of GANs so far you'd really need to mention other groups like Indico and Facebook and Berkeley for all the different things that they've done so in", "start_timestamp": "00:06:41", "end_timestamp": "00:07:17", "start_second": 401, "end_second": 437, "url": "https://www.youtube.com/watch?v=pWAc9B2zJS4&t=401s", "title": "Heroes of Deep Learning: Andrew Ng interviews Ian Goodfellow", "thumbnail": "https://i.ytimg.com/vi/pWAc9B2zJS4/maxresdefault.jpg"} {"video_id": "pWAc9B2zJS4", "text": "addition to all your research you also co-authored a book on deep learning oh yeah that's right with Yoshua Bengio and Aaron Courville who were my PhD co-advisors we wrote the first textbook on the modern version of deep learning and that has been very popular both in the English edition and the Chinese edition we sold about I think around 70,000 copies total between those two languages and I've had a lot of feedback from students who said that they've learned a lot from it one thing that we did a little bit differently than some", "start_timestamp": "00:07:17", "end_timestamp": "00:07:56", "start_second": 437, "end_second": 476, "url": "https://www.youtube.com/watch?v=pWAc9B2zJS4&t=437s", "title": "Heroes of Deep Learning: Andrew Ng interviews Ian Goodfellow", "thumbnail": "https://i.ytimg.com/vi/pWAc9B2zJS4/maxresdefault.jpg"} {"video_id": "pWAc9B2zJS4", "text": "other books is we start with a very focused introduction to the kind of math that you need to do deep learning I think one thing that I got from your courses at Stanford is that linear algebra and probability are very
important that people get excited about the machine learning algorithms but if you want to be a really excellent practitioner you've got to master the basic math that underlies the whole approach in the first place so we make sure to give a very focused presentation of the basics at the start of the book that way you", "start_timestamp": "00:07:56", "end_timestamp": "00:08:32", "start_second": 476, "end_second": 512, "url": "https://www.youtube.com/watch?v=pWAc9B2zJS4&t=476s", "title": "Heroes of Deep Learning: Andrew Ng interviews Ian Goodfellow", "thumbnail": "https://i.ytimg.com/vi/pWAc9B2zJS4/maxresdefault.jpg"} {"video_id": "pWAc9B2zJS4", "text": "don't need to go ahead and learn all of linear algebra but you can get a very quick crash course in the pieces of linear algebra that are the most useful for deep learning so even someone whose math you know is real shaky who hasn't done math for a few years can start from the beginning of your book and get that background and get into deep learning all of the facts that you would need to know are there it would definitely take some focused effort and practice to make use of them great", "start_timestamp": "00:08:32", "end_timestamp": "00:08:59", "start_second": 512, "end_second": 539, "url": "https://www.youtube.com/watch?v=pWAc9B2zJS4&t=512s", "title": "Heroes of Deep Learning: Andrew Ng interviews Ian Goodfellow", "thumbnail": "https://i.ytimg.com/vi/pWAc9B2zJS4/maxresdefault.jpg"} {"video_id": "pWAc9B2zJS4", "text": "if someone's really afraid of math it might be a bit of a painful experience but but if you're ready for the learning experience and you believe you can master it I think all the all the tools that you need are there as someone who's worked in deep learning for a long time I'd be curious if you look back over the years tell me about how your thinking of AI and deep learning has evolved over the years ten years ago I felt like as a community the biggest challenge in machine
learning was just how to get it working for AI related", "start_timestamp": "00:08:59", "end_timestamp": "00:09:33", "start_second": 539, "end_second": 573, "url": "https://www.youtube.com/watch?v=pWAc9B2zJS4&t=539s", "title": "Heroes of Deep Learning: Andrew Ng interviews Ian Goodfellow", "thumbnail": "https://i.ytimg.com/vi/pWAc9B2zJS4/maxresdefault.jpg"} {"video_id": "pWAc9B2zJS4", "text": "tasks at all we had really good tools that we could use for simpler tasks where we wanted to recognize patterns in hand extracted features where a human designer could do a lot of the work by creating those features and then hand it off to the computer and that was really good for different things like predicting which ads the user would click on or different kinds of basic scientific analysis but we really struggled to do anything involving millions of pixels in an image or a raw audio waveform where the system hasn't", "start_timestamp": "00:09:33", "end_timestamp": "00:10:10", "start_second": 573, "end_second": 610, "url": "https://www.youtube.com/watch?v=pWAc9B2zJS4&t=573s", "title": "Heroes of Deep Learning: Andrew Ng interviews Ian Goodfellow", "thumbnail": "https://i.ytimg.com/vi/pWAc9B2zJS4/maxresdefault.jpg"} {"video_id": "pWAc9B2zJS4", "text": "built all of its understanding from scratch we finally got over the hurdle really thoroughly maybe five years ago and now we're at a point where there are so many different paths open that for someone who wants to get involved in AI maybe the hardest problem they face is choosing which path they want to go down do you want to make reinforcement learning work as well as supervised learning works do you want to make unsupervised learning work as well as supervised works do you want to make sure that machine learning algorithms are fair and", "start_timestamp": "00:10:10", "end_timestamp": "00:10:44", "start_second": 610, "end_second": 644, "url": "https://www.youtube.com/watch?v=pWAc9B2zJS4&t=610s", "title": "Heroes of Deep
Learning: Andrew Ng interviews Ian Goodfellow", "thumbnail": "https://i.ytimg.com/vi/pWAc9B2zJS4/maxresdefault.jpg"} {"video_id": "pWAc9B2zJS4", "text": "don't reflect biases that we'd prefer to avoid do you want to make sure that the societal issues surrounding AI work out well that we are able to make sure that AI benefits everyone rather than causing social upheaval and trouble with the loss of jobs I think right now there is really an amazing amount of different things that can be done both to prevent downsides from AI but also to make sure that we leverage all of the upsides that it offers us and so today there are a lot of people wanting to get into AI so what advice would you have for someone", "start_timestamp": "00:10:44", "end_timestamp": "00:11:22", "start_second": 644, "end_second": 682, "url": "https://www.youtube.com/watch?v=pWAc9B2zJS4&t=644s", "title": "Heroes of Deep Learning: Andrew Ng interviews Ian Goodfellow", "thumbnail": "https://i.ytimg.com/vi/pWAc9B2zJS4/maxresdefault.jpg"} {"video_id": "pWAc9B2zJS4", "text": "like that I think a lot of people that want to get into AI start thinking that they absolutely need to get a PhD or some other kind of credential like that I don't think that's actually a requirement anymore one way that you could get a lot of attention is to write good code and put it on github if you have an interesting project that solves a problem that someone working at a top lab wants solved once they find your github repositories they'll come find you and ask you to come work there a lot of the people that I've hired or", "start_timestamp": "00:11:22", "end_timestamp": "00:11:56", "start_second": 682, "end_second": 716, "url": "https://www.youtube.com/watch?v=pWAc9B2zJS4&t=682s", "title": "Heroes of Deep Learning: Andrew Ng interviews Ian Goodfellow", "thumbnail": "https://i.ytimg.com/vi/pWAc9B2zJS4/maxresdefault.jpg"} {"video_id": "pWAc9B2zJS4", "text": "recruited at OpenAI last year or at Google this year I first became
interested in working with them because it's something that I saw that they released in open-source form on the internet writing papers and putting them on arXiv can also be good a lot of the time it's harder to reach the point where you have something polished enough to really be a new academic contribution to the scientific literature but you can often get to the point of having a useful software product much earlier so sort of you know read the book", "start_timestamp": "00:11:56", "end_timestamp": "00:12:29", "start_second": 716, "end_second": 749, "url": "https://www.youtube.com/watch?v=pWAc9B2zJS4&t=716s", "title": "Heroes of Deep Learning: Andrew Ng interviews Ian Goodfellow", "thumbnail": "https://i.ytimg.com/vi/pWAc9B2zJS4/maxresdefault.jpg"} {"video_id": "pWAc9B2zJS4", "text": "practice the materials and post on github and maybe on arXiv I think if you if you learn by reading the book it's really important to also work on a project at the same time to either choose some way of applying machine learning to an area that you're already interested in like if you're a field biologist and you want to use deep learning maybe you could use it to identify birds or if you don't have an idea for how you'd like to use machine learning in your own life you could pick something like making a Street View House", "start_timestamp": "00:12:29", "end_timestamp": "00:12:59", "start_second": 749, "end_second": 779, "url": "https://www.youtube.com/watch?v=pWAc9B2zJS4&t=749s", "title": "Heroes of Deep Learning: Andrew Ng interviews Ian Goodfellow", "thumbnail": "https://i.ytimg.com/vi/pWAc9B2zJS4/maxresdefault.jpg"} {"video_id": "pWAc9B2zJS4", "text": "Numbers classifier where all the data sets are set up to make it very straightforward for you and that way you get to exercise all of the basic skills while you read the book or while you watch Coursera videos that explain the concepts to you so over the last couple years I've also seen you do a lot of work on adversarial
examples and tell us a bit about that yeah I think adversarial examples are the beginning of a new field that I call machine learning security in the past we've seen computer security issues where attackers could", "start_timestamp": "00:12:59", "end_timestamp": "00:13:35", "start_second": 779, "end_second": 815, "url": "https://www.youtube.com/watch?v=pWAc9B2zJS4&t=779s", "title": "Heroes of Deep Learning: Andrew Ng interviews Ian Goodfellow", "thumbnail": "https://i.ytimg.com/vi/pWAc9B2zJS4/maxresdefault.jpg"} {"video_id": "pWAc9B2zJS4", "text": "fool a computer into running the wrong code that's called application level security and there's been attacks where people can fool a computer into believing that messages on a network come from somebody that is not actually who they say they are and that's called network level security now we're starting to see that you can also fool machine learning algorithms into doing things they shouldn't even if the program running the machine learning algorithm is running the correct code even if the program running the machine", "start_timestamp": "00:13:35", "end_timestamp": "00:14:08", "start_second": 815, "end_second": 848, "url": "https://www.youtube.com/watch?v=pWAc9B2zJS4&t=815s", "title": "Heroes of Deep Learning: Andrew Ng interviews Ian Goodfellow", "thumbnail": "https://i.ytimg.com/vi/pWAc9B2zJS4/maxresdefault.jpg"} {"video_id": "pWAc9B2zJS4", "text": "learning algorithm knows who all the messages on the network really came from and I think it's important to build security into a new technology near the start of its development we found that it's very hard to build a working system first and then add security later so I am really excited about the idea that if we dive in and start anticipating security problems with machine learning now we can make sure that these algorithms are secure from the start instead of trying to patch it in retroactively years later thank you", "start_timestamp": "00:14:08",
"end_timestamp": "00:14:42", "start_second": 848, "end_second": 882, "url": "https://www.youtube.com/watch?v=pWAc9B2zJS4&t=848s", "title": "Heroes of Deep Learning: Andrew Ng interviews Ian Goodfellow", "thumbnail": "https://i.ytimg.com/vi/pWAc9B2zJS4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "and when you're living by the hormones or stress not a time to create no out of time to open your heart not a time to learn sometimes we do a meditation we start opening our heart and start elevating the body's energy and then those emotions can drive certain thoughts of your future you have to understand that if 95% of who you are is a set of unconscious programs then the first step is lighting a match in a dark place this will be one of the most powerful videos you watch today right now you're about to discover the three", "start_timestamp": "00:00:00", "end_timestamp": "00:00:32", "start_second": 0, "end_second": 32, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=0s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "secrets to unlock the power of your mind with dr. Joe Dispenza I hope you enjoy [Music] how do we change our energy and how do we sustain it for extended period of time how long is that time need to be until we really started to see that sometimes immediate okay so so our research and we've done in the last six years because we were seeing so many incredible incredible things going on in our workshops I mean people stepping out of wheelchairs and all kinds of crazy things my church look kind of like a mega church but hopefully not that based", "start_timestamp": "00:00:32", "end_timestamp": "00:01:10", "start_second": 32, "end_second": 70, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=32s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! 
| Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "in science yeah but but but isn't it amazing that some of these churches when people get to believe whether they have science backing it or not they just it's the belief when they step out into the unknown step out of your body and who your size right instantaneously instantaneously and we do see that a lot and some people like yeah that's yeah can that be possible how can it be possible we we've done research now I just assembled a team of scientists we've done 8,500 brain scans I can tell you I can tell you when a person's about", "start_timestamp": "00:01:10", "end_timestamp": "00:01:39", "start_second": 70, "end_second": 99, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=70s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "ready to change I can tell you why really don't change I can tell you what it takes to change so what's it take to change well do you change most people keep their attention always their awareness on their body it keeps their attention on everything in their environment with people and things there the brain is always scanning everything around us to determine what's known as safe and unsafe right and you know we do that all the time so our research shows that the moment you take your attention off your body and you go from somebody", "start_timestamp": "00:01:39", "end_timestamp": "00:02:11", "start_second": 99, "end_second": 131, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=99s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! 
| Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "to nobody you take your attention off the people in your life and go from who you identify with from someone to no one and so many people spend their whole life building an identity of being someone take your attention off your cell phone your computer your car and go from something to nothing take your attention off where you're sitting where you need to be someplace you have to go go from somewhere to nowhere and take your attention off time linear thinking about the predictable future they're familiar past and fall into the generous present", "start_timestamp": "00:02:11", "end_timestamp": "00:02:42", "start_second": 131, "end_second": 162, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=131s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "moment and go from some time to no time then all you're left with is consciousness and that's the moment are no longer playing by the same rules matter to matter and there's a very elegant moment that takes place in the brain in fact I was just showing my research to a group of researchers and Santa Cruz this past week and they were blown away and I said now watch this person this person is going to have a transformational moment they said how do you know I buy I've seen enough of these and the next moment the whole brain just", "start_timestamp": "00:02:42", "end_timestamp": "00:03:15", "start_second": 162, "end_second": 195, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=162s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! 
| Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "lights up that person is switched on they'll never be the same person again they're having a transcendental moment and we could actually predict it and teach it now it's a formula hmm just like you doing sports if it just becomes a formula and then you change the formula and you add to it right so when you no longer are you're identifying with your body your environment and time that's the moment your pure consciousness now you're just an idea you're an awareness awareness awareness that has nothing to do with local space and time", "start_timestamp": "00:03:15", "end_timestamp": "00:03:47", "start_second": 195, "end_second": 227, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=195s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "and now if you're no longer yond you're anything you can go beyond and that's when the brain because the brain doesn't change the brain it takes a long time it takes a long time for the personality to change the personality for the ego to change the ego the programs that change the programs takes forever matter takes a long time to change matter but when you're in this moment you're no longer playing by those rules consciousness is the phenomenon above matter in fact consciousness is beginning to activate or manipulate circuits in the", "start_timestamp": "00:03:47", "end_timestamp": "00:04:17", "start_second": 227, "end_second": 257, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=227s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! 
| Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "brain people just think the brain is creating conscious no consciousness is executing the brain right so then if the brain can change then the mind doesn't change the brain mine is the brain in action is consciousness that changes it so when people begin to disengage and get beyond themselves you are at your absolute best when you get beyond yourself and getting the person to that point how does someone get to that point yeah so we teach them that formula we teach them to that point where all of a sudden they reach that generous present", "start_timestamp": "00:04:17", "end_timestamp": "00:04:48", "start_second": 257, "end_second": 288, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=257s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "moment where they just feel connected and when they're in that place all the things they thought they wanted they actually no longer want because they feel like they already have them so then imagine living your life from that place you would be less judgmental you would be less frustrated us impatient reactive and so so the formula then is that it requires a clear intention which is a coherent brain and when you're living stressed out and something goes wrong and you're threatened or you can't predict an outcome or you have the perception that", "start_timestamp": "00:04:48", "end_timestamp": "00:05:23", "start_second": 288, "end_second": 323, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=288s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! 
| Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "something's getting worse or you can't control it you switch on that fight-or-flight nervous system they've talked about now here's what happens when that occurs you start shifting your attention from one person to one problem the one thing to another person to another place because your brain is trying to predict the next moment well every one of those people and things in places has a neurological network in your brain so as you shift your attention from one to the next it's like a lightning storm in the clouds", "start_timestamp": "00:05:23", "end_timestamp": "00:05:49", "start_second": 323, "end_second": 349, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=323s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "your brain starts firing very incoherent lis when your brain is incoherent you're incoherent and when you're living by the hormones of stress not a time to create no not a time to open your heart not a time to learn not a time to trust and it's a time to run fight or hide so people spend 70% of their time of their life living in the state Wow so think about it so miserable yes so then when you're under stress if there's if there's a cougar around the corner you're not going to sit down and meditate sit still right but but so I'm", "start_timestamp": "00:05:49", "end_timestamp": "00:06:21", "start_second": 349, "end_second": 381, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=349s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! 
| Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "a tree you got the survival genes switched on and nobody is gonna believe in possibility when you're living in survival right yeah so then when you're living in stress what happens is you narrow your focus on the cause you now your focus on matter the object the thing and so people get switched on and all of their attention is on their outer world when the hormones of stress kick on the body gets an arousal now your attention is on the body and of course when you're under stress you're trying to predict the future based on the past", "start_timestamp": "00:06:21", "end_timestamp": "00:06:50", "start_second": 381, "end_second": 410, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=381s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "and now you're literally enslaved into three-dimensional reality so then how do you get what you want you gotta try harder you force it more you got a war cardio to fight for it it's matter trying to change matters Austin people just burnout right so then we now know that when you go from a narrow focus on something and you begin to open your focus you create sense and awareness that the act of opening your focus causes you to stop thinking and if you stop thing can you no longer activate those circuits and you start to slow your", "start_timestamp": "00:06:50", "end_timestamp": "00:07:22", "start_second": 410, "end_second": 442, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=410s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! 
| Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "brainwaves down hmm as you slow your brainwaves down you start connecting to that autonomic nervous system the thing that's giving you life and all of a sudden when you get beyond yourself it says he's gone let's step in and just clean up this mess before he gets back really and its job is to create order and balance your body will start to do that for you the innate intelligence will step right in once you've connect you got to connect so you got to know how to change your brainwaves you can't change your brainwaves you stay in an", "start_timestamp": "00:07:22", "end_timestamp": "00:07:45", "start_second": 442, "end_second": 465, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=442s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "active state you're basically moving furniture around you're analyzing your life within some disturbing emotion and I can tell you after looking at all those brain stands if you're analyzing your life within some disturbing emotion you're going to make your brain worse in fact you are thinking in the past right so you teach people the formula how to open their focus change their brainwaves connect to that invisible field and all of a sudden different compartments of the brain start synchronizing the front of the brain starts talking to the back", "start_timestamp": "00:07:45", "end_timestamp": "00:08:13", "start_second": 465, "end_second": 493, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=465s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! 
| Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "of the brain the right side starts talking to the left side and all of a sudden what sinks in the brain links in the brain all of a sudden you see this person starting to feel more like themselves and when you see those two hemispheres the brains start lighting up watch out because that person's gonna feel really hope they're gonna start loving life they're gonna feel like they're gonna be in love with life because the union of polarity and duality is wholeness at the exact same time coherent brain when you're", "start_timestamp": "00:08:13", "end_timestamp": "00:08:40", "start_second": 493, "end_second": 520, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=493s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "resentful when you're judgmental when you're impatient your heart beats out of rhythm why you're stepping on the gas and you're stepping on the brake at the same time your body and its intelligence living in survival is saying t-rex is back there but you're not running because you're sitting across the table looking at somebody smiling and your body's revved up right so the heart is beating and rhythmically and when that happens you're you're squandering or you're using all the body's life force and turning it into chemistry right", "start_timestamp": "00:08:40", "end_timestamp": "00:09:07", "start_second": 520, "end_second": 547, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=520s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! 
| Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "using all that energy to survive as opposed to think beyond right right so you're drawing from your vital life force that invisible field around your body and you're turning into the chemistry you actually are going to shrink your own feel the hormones of stress caused us to be materialists right we when one of the stress were we're using our senses to determine reality so now you feel more like matter unless like energy more separate from possibility so then to teach a person then how to regulate that heart center", "start_timestamp": "00:09:07", "end_timestamp": "00:09:35", "start_second": 547, "end_second": 575, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=547s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "and we do this we've done 6000 heart scans why because if I can teach you how to get in that heart state and I can teach you how to activate that Center and I can teach you how to regulate an elevated emotion the heart starts to create a very coherent signature and when the heart starts beating like a drum like dropping a pebble in water it begins to produce a measurable magnetic field up to 3 meters wide now you're more energy than matter more wave than particle Wow now that field that's being created is measurable and that's an", "start_timestamp": "00:09:35", "end_timestamp": "00:10:10", "start_second": 575, "end_second": 610, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=575s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! 
| Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "energy and energy is frequency and all frequency carries information so what is the information when it makes it here that you're sharing in the world it could carry the thought of your healing why because it's consistent with the energy guilt isn't gonna carry the thought of your healing it's a different frequency and all of a sudden now the person is elevating their emotional state and they're allowing their thought to be carried on their frequency they're broadcasting a whole new energetic signature but thoughts are the language", "start_timestamp": "00:10:10", "end_timestamp": "00:10:37", "start_second": 610, "end_second": 637, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=610s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "of the brain and feelings of the language of the body and how you think and how you feel creates your state of being so then the question is if you keep practicing creating that state of being it should become familiar to you yes or no the word meditation literally means to become familiar with so then if you're practicing moving into these elevated states and your heart is coherent and we're measuring and I can say Louis you got it now do it for 30 minutes now do it for 60 minutes and you practice creating that coherence you'll", "start_timestamp": "00:10:37", "end_timestamp": "00:11:09", "start_second": 637, "end_second": 669, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=637s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! 
| Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "know when you're there and when you're not yes or no sure and then you would be able to say like a skill like anything else give me a minute I'm gonna step out and you're gonna go back in the heart coherence and bring up that state now we get there in the heart then we practice a formula again rest your attention stop calling up elevated emotions and when you start seeing that that starts happening then you sustain it then you keep practicing and all of a sudden it gets longer and longer and longer now what's the relevance behind that well", "start_timestamp": "00:11:09", "end_timestamp": "00:11:36", "start_second": 669, "end_second": 696, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=669s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "we've measured neurotransmitters so when a person actually activates their heart the heart releases a chemical called oxytocin oxytocin is actually love chemical not oxytocin signals nitric oxide nitric oxide signals another chemical called endothelial derived relaxing factor what does that do causes the vessels in your heart to swell you will literally have energy in your heart you will literally feel like your heart is full now once you have that feeling you're not gonna want to trade that feeling for anyone or", "start_timestamp": "00:11:36", "end_timestamp": "00:12:09", "start_second": 696, "end_second": 729, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=696s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! 
| Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "anything you're gonna say well why would I judge that person if I judge that person will lose this feeling now all of a sudden you're self-regulating now once the heart is activated I just was at the research lab this week once the heart is activated it acts as an amplifier and it amplifies energy in the brain so once you start opening that heart and it begins to signal the brain you're going to suppress the survival centers in fact the research shows it will reset your baseline in other words if you're anxious and vigilant and you learn how", "start_timestamp": "00:12:09", "end_timestamp": "00:12:42", "start_second": 729, "end_second": 762, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=729s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "to self-regulate you'll actually reset the baseline and you'll say well the trauma was 15 years ago I saw my somebody get murdered or whatever and they will say yeah yeah yeah but the moment the heart not the brain it's the heart that actually resets the amygdala and all of a sudden the person all of a sudden switches down and all something like that I just don't have anxiety we have thousands of brain scans with anxiety and depression from people from all walks of life they've reset and all of a sudden they", "start_timestamp": "00:12:42", "end_timestamp": "00:13:10", "start_second": 762, "end_second": 790, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=762s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! 
| Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "don't have that excited they don't have to take medications or do anything is they know how to self-regulate where does all anxiety stem from mmm and anxiety is doing this living in the survival when you're living in survival I'll tell you this when the survival gene is activated out of the infinite potentials in the quantum field you'll always choose the worst-case scenario why because if you're in survival and you're preparing for the worst there's always better chances of surviving if anything less happens so people are", "start_timestamp": "00:13:10", "end_timestamp": "00:13:37", "start_second": 790, "end_second": 817, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=790s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "always selecting the worst thing in their mind and they begin to emotionally embrace that future before it happens thought and emotion you start conditioning so you're conditioning the body to become the mind of fear you keep doing that enough times once the body becomes them it's a subconscious program person as a panic attack try as you made a controller with your conscious mind you can't you programmed it subconsciously now you worry about the next panic attack and as you start worrying about the next panic attack", "start_timestamp": "00:13:37", "end_timestamp": "00:14:07", "start_second": 817, "end_second": 847, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=817s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! 
| Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "that's the vigilance that creates the next one well now here's what's happening in our work people who are self-regulating and creating these elevated states we have we have heart scans of them sustaining heart coherence for a whole hour during a meditation then at the end of the day they're still wearing the the monitor it's eight o'clock at night they're not even in a meditation and for a whole entire hour they're in heart coherence we say to the woman what's going on here she said I have no idea I was just getting ready", "start_timestamp": "00:14:07", "end_timestamp": "00:14:34", "start_second": 847, "end_second": 874, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=847s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "for bed and all of a sudden my heart just swelled up it was so intense I had to lay on my back and surrender to love instead of surrendering to fear she had a spontaneous love attack instead of a spontaneous panic attack now I would call that the natural state of being so then if you're living by those elevated states and you know how to feel that emotion of your future before it happens you're less likely to wait for it to happen because you'll feel like it already happened you'll less likely try to control it you'll", "start_timestamp": "00:14:34", "end_timestamp": "00:15:04", "start_second": 874, "end_second": 904, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=874s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! 
| Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "know that the moment you lose the feeling you just disconnected and you're gonna make your way back and when you get good at it no person no thing no experience can take it away from you well now you're empowered and if you understand the laws of how creation happens then you're less likely to compete and rush to get what you want you're gonna know that it's gonna come to you and now that's the new model of how do we create knowing is gonna come to us at the right time what if we want it faster no you just do it again", "start_timestamp": "00:15:04", "end_timestamp": "00:15:32", "start_second": 904, "end_second": 932, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=904s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "hmmm but remember if you're trying to make it happen faster you're back to the old self right then yourself would never do that the new self would constantly stay there and so then how does it appear it appears in a way that you can't expect because if you can predict it it's the known it's gonna come in a way that you haven't thought of an unknown and it's and it's got a rock your world it's got to catch you off guard it's got to leave you no doubt that what you've been doing inside of you that produces some effect outside of", "start_timestamp": "00:15:32", "end_timestamp": "00:16:02", "start_second": 932, "end_second": 962, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=932s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! 
| Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "you and when you correlate what you've been doing inside of you with the effect that you produced outside of you pay attention to what you did and do it again and the energy uh the joy that you feel when it happens you're gonna use that energy to create again now people say to me well I'm this way because of that person in that thing I would say to them so you mean them that person or that experience out there is controlling your thoughts and feelings that means you're a victim to your environment but when you start changing your thoughts", "start_timestamp": "00:16:02", "end_timestamp": "00:16:31", "start_second": 962, "end_second": 991, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=962s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "and feelings and it starts to produce an effect in your environment you're gonna change the belief that you're a victim consciously or subconsciously of your life to becoming more of a creator of your life and now all of a sudden you become more a creator of your life you can't blame anybody you can't say well that person and I think you'd have to say I got to be greater than that environmental condition who in history can I study that had the same challenges now what was what was what did they do let me just work that into my rehearsal", "start_timestamp": "00:16:31", "end_timestamp": "00:17:00", "start_second": 991, "end_second": 1020, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=991s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! 
| Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "so that I can improve right just like you've done with sports is the same process yeah [Music] was more powerful than our thoughts or our emotions and do our emotions change our thoughts or do our thoughts change our emotions yeah the answer is yes the answer is both I mean thoughts to me produce an electrical charge in the quantum field and feelings produce a magnetic charge in the quantum field thoughts wait thoughts produce a what an electrical charge okay and feelings produce a magnetic charge and how you", "start_timestamp": "00:17:00", "end_timestamp": "00:17:36", "start_second": 1020, "end_second": 1056, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=1020s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "think and how you feel broadcasts an electromagnetic signature that influences every single atom in your life the thought sends the signal out I'll think about this and the feeling draws the event back so you could have the intent that you want wealth you want health you want success that's your intent that's your thought but if you're waiting for the experience to happen to feel it then you're not drawing the experience to you because you're not feeling the emotion right so then teaching people once again how to", "start_timestamp": "00:17:36", "end_timestamp": "00:18:10", "start_second": 1056, "end_second": 1090, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=1056s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! 
| Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "balance their thoughts and feelings because you can you can enter that cycle either place sometimes we do a meditation we start opening our heart we start elevating the body's energy and then those emotions can drive certain thoughts of your future other times you open your awareness you create brain coherence you have the vision of your future you begin to emotionally experience it however you want to jump on that cycle and then sustain it because the longer you're conscious of that energy the more you're drawing your", "start_timestamp": "00:18:10", "end_timestamp": "00:18:41", "start_second": 1090, "end_second": 1121, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=1090s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "future to you so then most people spend their lives right they we live in this realm called space-time three-dimensional reality and you move your body through space and three-dimensional reality it takes time yeah so everything all your goals all your dreams all your visions you're gonna have to get your body up and drag it through space every day to pay off that you know that home that's in your future right right when you create from the field instead of from matter when you're the vibrational match between", "start_timestamp": "00:18:41", "end_timestamp": "00:19:10", "start_second": 1121, "end_second": 1150, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=1121s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! 
| Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "your energy and some potential and your thoughts and feelings are coherent now you are going to begin to collapse time and space or the experience is going to be drawn to now now your the vortex to your destiny and now you don't have to go anywhere to get it because you're not playing by the rules of three-dimensional reality you're playing by the rules of energy in the quantum so teaching people how to do this in getting better at it then all of a sudden they're not forcing and controlling outcomes in fact they're", "start_timestamp": "00:19:10", "end_timestamp": "00:19:41", "start_second": 1150, "end_second": 1181, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=1150s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "trusting and surrendering to outcomes because they don't want to get in the way because the moment you start trying to predict when it's going to happen or how it's gonna happen you're overlaying a known over a place where there should be an unknown right so teaching people how to do that means we have to lay down the very thing we used our whole life to get what we want for something greater to occur right and so that transcendental moment is something that we're working on the mystifying and and you could be gluten", "start_timestamp": "00:19:41", "end_timestamp": "00:20:15", "start_second": 1181, "end_second": 1215, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=1181s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! 
| Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "free person you could be a gluten full person you could drink wine not drink wine you could be rich you could be poor you could be any color any shape any size in fact you can't tell me you're too old to do this work you can't tell me that we got elders in this work that we show you other brain scans and you'd be blown away but they they know how to do it you can't tell me you're too sick to do this work we got people that have reversed stage four cancer in numerous times and yeah it took a Herculean effort to do it but they love themselves", "start_timestamp": "00:20:15", "end_timestamp": "00:20:43", "start_second": 1215, "end_second": 1243, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=1215s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "for you can't tell me that you're too out of shape or too overweight or too under weight you can't I've seen it in all shapes and sizes you can't even tell me that you had a brutal past and people that have had very very dismal pasts that are free they're happy people you can't even tell me you're you never meditated before in fact our research shows that many people have never meditated before have the most profound experiences because they're not trying to make anything happen they're just following the instructions right and and", "start_timestamp": "00:20:43", "end_timestamp": "00:21:09", "start_second": 1243, "end_second": 1269, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=1243s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! 
| Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "they don't have a habit of doing it so so we don't want to exclude anybody in the process we want to include everybody so it turns out that our events tend to draw a good portion of men because of the science we have a lot of children now that are you know teenagers that are coming and people in their 20s we have great community of elders we have you know in our events sometimes 63 different cultures well coming to countries coming to our events between 50 and you know 65 so so we want it we want to make it so inclusive that", "start_timestamp": "00:21:09", "end_timestamp": "00:21:48", "start_second": 1269, "end_second": 1308, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=1269s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "community becomes the side effects because because with a community of like-minded like like antler similar energy of people everybody understands they get one another you know you you communities tends to be the thing now that in terms of our social medium and the feedback we're getting everybody wants more community because you get a you get a thousand people in the audience and their energy synchronized now you're talking us something so much bigger we're just gonna measure this I just talked to a researcher yesterday", "start_timestamp": "00:21:48", "end_timestamp": "00:22:21", "start_second": 1308, "end_second": 1341, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=1308s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! 
| Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "we're going to measure a thousand people when they reach that synchronized moment when they're we can we know that the entire social coherence in the room is orderly then if you're producing a ambience coherent magnetic field in your heart and you're tuning into a thought or an intent and you got a thousand people doing that and your energy is gonna start interfering and commingling with the person next to you when that Energy starts to synchronize it's gonna produce a bigger wave the higher the amplitude the higher the wave the more", "start_timestamp": "00:22:21", "end_timestamp": "00:22:54", "start_second": 1341, "end_second": 1374, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=1341s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "energy there is so now you have one mind in one heart and now when it comes to healing others and we've done the research on this now and we're collecting the data that we're teaching people how to administer a change in energy and the person that's laying there because it's not matter that emits a field that's the wrong way to think about it it's the field that creates matter you change the field you change matter you know it's not your job to change the tumor the tumor is the illusion it's the pattern in the field", "start_timestamp": "00:22:54", "end_timestamp": "00:23:25", "start_second": 1374, "end_second": 1405, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=1374s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! 
| Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "that's that that has to be changed so once people start reversing this then you start seeing the tumors disappearing you start seeing blind people seeing deaf people hearing you start seeing people with Parkinson's disease switch on I mean you start seeing stage four cancer is reversing because now they're you're you're you're swimming upstream you're going to the headwater in making that change so pushing the envelope and them seeing that in a community when a community synchronized towards the second half of a week-long", "start_timestamp": "00:23:25", "end_timestamp": "00:23:54", "start_second": 1405, "end_second": 1434, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=1405s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "event I mean as I said before we started the show I I'm more surprised than anybody was it's crazy what is the we talked about I heard you say consciousness a couple of times what's the difference between mindset and consciousness to me consciousness is awareness awareness is paying attention and noticing and so 95% of who we are by the time were 35 years old is a set of unconscious automatic programs that we've just practiced so many times that we're not consciously thinking about those so in order for you to change to", "start_timestamp": "00:23:54", "end_timestamp": "00:24:29", "start_second": 1434, "end_second": 1469, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=1434s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! 
| Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "answer the initial question that you asked the first step is you got to become conscious of your unconscious thoughts and you got to you got to start looking at those hardwired thoughts that that you think every day that it's just circuits that have been fired and wired together how do we do that should we take write a list at the end of the day or one of the most common thoughts we had that day like how does someone become aware you don't have to do that you just have to sit down close your eyes and not move and then you'll get", "start_timestamp": "00:24:29", "end_timestamp": "00:24:54", "start_second": 1469, "end_second": 1494, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=1469s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "you'll you'll start seeing what am I thinking about right yeah and all you want to do is observe the thought because when you begin to observe that thought you're no longer the program now you're the consciousness observing the program and you're starting to pull out of the program thinking about the thinking yeah who's doing the thinking of the thinking about the thinking that's who you are when you're not the program that's awareness right you got to become aware of how you speak how you act become so conscious so aware of it", "start_timestamp": "00:24:54", "end_timestamp": "00:25:24", "start_second": 1494, "end_second": 1524, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=1494s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! 
| Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "that you won't go unconscious and let that thought or that behavior run you you got to say oh my god this feeling that I've been living by for the last 20 years is actually guilt I didn't know it was guilt because it just feels like me and all of a sudden as you start becoming conscious of it you're beginning to objectify your subjective self here you're pulling you out of those programs and nobody likes to do that because it's uncomfortable they'd rather turn on their cell phones start texting get on the internet", "start_timestamp": "00:25:24", "end_timestamp": "00:25:53", "start_second": 1524, "end_second": 1553, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=1524s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "you know watch TV to distract them from that moment and that is what they have to move through in order to get to their to their own personal freedom so the first step is becoming conscious and meditation means to become familiar with to become conscious of to to become so conscious of your unconscious self that you won't go unconscious to any thought any behavior or emotion and get ready because it takes a tremendous amount of energy to do that and awareness that conscious to stay conscious and so we fall from grace yeah fine", "start_timestamp": "00:25:53", "end_timestamp": "00:26:29", "start_second": 1553, "end_second": 1589, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=1553s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! 
| Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "you got you got your wake you got another day let's go again how often do you fall oh my gosh I mean how many times have I done it thousands but I'm not gonna give up because the moments in which I do connect or the moments that I do have that transcendental experience what matters the most after it when I have that transcendental moment I look back at all of those difficult meditations those difficult days and those are the ones you remember you don't remember the good meditations you remember the ones where you came up", "start_timestamp": "00:26:29", "end_timestamp": "00:26:59", "start_second": 1589, "end_second": 1619, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=1589s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "against yourself yeah and you want a little further and you say I'm gonna go we'll further and go further or you had a rough day and you just went in and you just you at the end of the day you surrender and you have the classic oh my god moment there's no linear correlation it's just whether you're willing to live in creation instead of living in survival and so you get better at it you know we just get better at it and for me staying consciousness thing where in staying present is an art because you you know when someone's present with you", "start_timestamp": "00:26:59", "end_timestamp": "00:27:31", "start_second": 1619, "end_second": 1651, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=1619s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! 
| Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "in your life because they're paying attention to you you know when they're not present with you because they're not paying attention to you so imagine this field of information is this intelligence that lives within you and I that's governing everything material in this world it's a self-organizing intelligence you have access to it so you better get present with it as well as you can get present with anything else and just because you can't see it doesn't mean it doesn't exist it that that that realm you can't", "start_timestamp": "00:27:31", "end_timestamp": "00:28:00", "start_second": 1651, "end_second": 1680, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=1651s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "experience with your senses you can only experience with your awareness so then people have to take their attention off their bodies and go from uh somebody to a nobody hmm take their attention off the people in their life and go from that they identify with and go from uh someone to a no one take their attention off the things in their life their cell phone their computer the car and go from something to nothing take their attention off where they sleep where they work where they're sitting and go from somewhere to nowhere take their", "start_timestamp": "00:28:00", "end_timestamp": "00:28:27", "start_second": 1680, "end_second": 1707, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=1680s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! 
| Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "attention off the predictable future and the familiar past and time and go from some time to no time and now if you're taking all of your attention off of everything material in this three-dimensional reality now there's only one other thing that's left that means you're an awareness your consciousness and now that is the bridge that is the door to the quantum field and you can't enter the quantum field as a somebody so if someone has spent their whole life working on having the perfect body or so much so they have so much", "start_timestamp": "00:28:27", "end_timestamp": "00:28:56", "start_second": 1707, "end_second": 1736, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=1707s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "attention on their pain where you place your attention is where you place your energy it's going to take some work for them to take all of their attention off their body right because they'll go they'll do it and then look go back let's see if the pain so that all the pain still there so it's a little bit of a waltz in the beginning but as people start applying this you start getting better at it as an example we had Bond University in Australia on the Gold Coast senior researcher took a large majority of my brain scans and", "start_timestamp": "00:28:56", "end_timestamp": "00:29:26", "start_second": 1736, "end_second": 1766, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=1736s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! 
| Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "they hadn't she had them analyzed by her graduate students and they statistically looked at everything one of the most startling things for the research team was our community's ability to go to to get to that point where they're nobody no one no thing nowhere no time I'm talking for seconds I'm talking five seconds I'm talking nine seconds just like just give me a second I know how to do this [Music] I feel like we've brainwashed ourselves over the years to believe a story of the past is who we are and who we will", "start_timestamp": "00:29:26", "end_timestamp": "00:30:02", "start_second": 1766, "end_second": 1802, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=1766s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "always be in the future so how do we brainwash ourselves in times of stress and anxiety in order to become more peaceful loving and successful in the future how do we brainwash ourselves in a different way sure well that's what I and and Wow I mean the first and most important thing is that you have to understand that if 95% of who you are is a set of unconscious programs then the first step is lighting a match in a dark place if you want to become someone else you got to become aware of who you are yes that means you", "start_timestamp": "00:30:02", "end_timestamp": "00:30:46", "start_second": 1802, "end_second": 1846, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=1802s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! 
| Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "got to start thinking about what you've been thinking about you got to start paying attention to how you speak that be conscious being conscious to every thought action most word emotion feeling expression body language everything you need to be aware of and conscious of Selzer is there a way that you help people to track this besides of just like okay I'm aware of this in the moment do they journal the thoughts throughout the day or when a negative thought comes up today remember the way their body languages throughout the day", "start_timestamp": "00:30:46", "end_timestamp": "00:31:17", "start_second": 1846, "end_second": 1877, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=1846s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "closed off and guarded or open do they how do they self reflect where they can track it better mmm well that's a big question but I will tell you this that you demystify the word meditation and the word meditation literally means become familiar with yeah when you become so familiar with your thoughts so aware of your emotions so conscious of your habits that you wouldn't go unconscious to them again now you're no longer the program right so so getting people disentangled from that program we found out as a formula and when we teach", "start_timestamp": "00:31:17", "end_timestamp": "00:31:48", "start_second": 1877, "end_second": 1908, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=1877s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! 
| Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "people how to do certain things with their focus and opening their awareness when we teach them how to create a very disorderly brain that has been driven by the hormones of stress into a more orderly coherent plane and teach them how to open their focus and practice that they'll come up against those thoughts and they'll become so familiar with them listen to they won't believe him anymore when they come up any longer Wow and so when they hear them in their day they'll be like that's not gonna stop me from my future", "start_timestamp": "00:31:48", "end_timestamp": "00:32:15", "start_second": 1908, "end_second": 1935, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=1908s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "so if they're sitting there and they want to quit just because they're sitting still and they're not quitting then they're developing a will that's greater than those programs and you're breaking out of the shell and you keep doing that you're gonna get up and do the work every day because you did it yesterday and you're gonna want to do more of that because you're getting out of your past and it feels better and if you keep doing that and you keep feeling better every day the question is why wouldn't you do it every day because you", "start_timestamp": "00:32:15", "end_timestamp": "00:32:43", "start_second": 1935, "end_second": 1963, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=1935s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! 
| Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "would ultimately just feel better and then the more hole you feel means the more you are connected you should imagine if you felt every single day and this is what I do work on if I could stay connected to the emotions of my future all day long there's no way I would be looking for when it would be happening how could I look for when it would be happening if I feel like it's happening I wouldn't look anymore which means I wouldn't be separate from it and that's when you start creating the magic right that's when you're in that zone", "start_timestamp": "00:32:43", "end_timestamp": "00:33:12", "start_second": 1963, "end_second": 1992, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=1963s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "and that's when that reaffirms that personality that you're becoming and now you don't you know wake up in the morning go I got to create my future jump out of bed excited you're not gonna want that magic to end that's the right so so you teach people how to do this and they start seeing the events in their life they're not gonna want to miss a day in in really just getting beyond their personality and listen it's so cool because it's amazing you see all these people come in for a success and new careers and new relationships and", "start_timestamp": "00:33:12", "end_timestamp": "00:33:45", "start_second": 1992, "end_second": 2025, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=1992s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! 
| Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "healing from a disease and mystical experience that come in for all these different kind of reasons healing from a childhood trauma but really they just want wholeness right and so as I start becoming more whole and they start feeling more whole is not coming from anywhere out there it's not coming from out there nothing out there is making them feel whole when the novelty of the thing wears off you feel empty again they're feeling whole from within oh this is different this is a different game so why wouldn't", "start_timestamp": "00:33:45", "end_timestamp": "00:34:14", "start_second": 2025, "end_second": 2054, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=2025s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "you want to keep feeling more whole that you no longer want anything now now you're not here now you're not living in separation anymore if someone is so disconnected to their future their greater future self if they're so negative thoughts suicidal thoughts often hurting themselves potentially often just don't have many close friends don't feel like they identify with themselves in the world don't like anyone understands them no one accepts them no one gets them like this all sounds great in theory but when", "start_timestamp": "00:34:14", "end_timestamp": "00:34:51", "start_second": 2054, "end_second": 2091, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=2054s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! 
| Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "you're in a place of survival mode and your thoughts constantly how can someone like that without having to go through the workshop that doesn't have the opportunity to go right now what can they start to do to just give some a little bit of relief and peace in their heart yeah it's simple knowledge experience wisdom philosophy initiate that philosophy master it yep mind body soul thinking doing being learning with your head applying with your hands knowing it by heart and this is the journey of knowledge because when", "start_timestamp": "00:34:51", "end_timestamp": "00:35:26", "start_second": 2091, "end_second": 2126, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=2091s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "you learn that information and you really study it you are going to begin to see the world differently because your brain is changing then when you start saying how can I use this how can I apply it how can I personalize that how could I do something initiates this information what am I going to do how do I get my behaviors to match my intentions now this is the act of trial and error it's so important you don't make it the first time you don't give up you get up and you try to walk again and you start learning how to do this and so", "start_timestamp": "00:35:26", "end_timestamp": "00:35:55", "start_second": 2126, "end_second": 2155, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=2126s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! 
| Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "as you begin to do it over and over again you start having new experiences well new experiences enrich the circuits in your brain philosophically you know the brain makes a chemical and now you're feeling more unlimited you're feeling more whole you're teaching your body chemically to understand what your mind intellectually understood and now you are literally literally starting to embody that knowledge yes coming signaling new G's it's new information but but you can't do it one day and expect your wealth to come you gotta do", "start_timestamp": "00:35:55", "end_timestamp": "00:36:24", "start_second": 2155, "end_second": 2184, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=2155s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "it over and over again yeah so the repetition of practicing over and over again neuro chemically conditions the mind and body begin to work as one you've done it so many times the he now knows how to do it subconsciously because just like it just like it knew how to subconsciously lean into trauma on victim mode right that kids now the body's getting new information it's going to adapt and and now you're going to literally become that knowledge you're gonna be coming that's what you're gonna become and so now that's", "start_timestamp": "00:36:24", "end_timestamp": "00:36:52", "start_second": 2184, "end_second": 2212, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=2184s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! 
| Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "when you no longer have to try it's who you are it's yes yeah you've memorized an internal order that's greater than anything in your outer world that's gonna tell you something else that's key right there and you become immunity to negative thoughts or negative viruses where if something tries to someone tries to say something to your body just rejects it automatically just as if your physical body would reject some virus coming in is that correct that's absolutely what I said dr. Joe you can't do it you're", "start_timestamp": "00:36:52", "end_timestamp": "00:37:18", "start_second": 2212, "end_second": 2238, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=2212s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "stupid you're ugly you're not smart enough you're not this you won't even reverse they won't even just bounce right off you you wouldn't even receive it because your field is so powerful it's pushing all that away no because it's not the truth there you go it's just laughs it'd just be like okay that's just not real but if I kept telling you you know you need this product you need this drug to feel better to look better to be better let's appeal to your lack and if you buy this it'll make that feeling go away and you", "start_timestamp": "00:37:18", "end_timestamp": "00:37:47", "start_second": 2238, "end_second": 2267, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=2238s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! 
| Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "try for a while then the feeling doesn't go away anymore you got to try something else and now watch the news and listen to all that information that's telling you you're limited you're you know you're limited it's something out there it's going to get you there's nothing wrong with that this but if you're constantly saying it's traffic it's the news it's politics it's my ex that's making me feel and think this way then you're subconsciously not consciously affected by your environment and you'll be more affected by your environments not a kind", "start_timestamp": "00:37:47", "end_timestamp": "00:38:16", "start_second": 2267, "end_second": 2296, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=2267s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "of process but the more you say I'm going to in spite of the fear that we talked about or the anxiety or the frustration or the aggression or the hostility or whatever it is I'm paying the suffering instead of saying that I can't change that you know what I'm gonna think I'm gonna see if I can where can I find that information where and all of a sudden you find people that are doing it and it makes sense to you and you're like well okay I'm feeling really anxious instead of taking something that's going to chemically change me let's see if I", "start_timestamp": "00:38:16", "end_timestamp": "00:38:51", "start_second": 2296, "end_second": 2331, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=2296s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! 
| Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "could chemically change me without something out there let's see if I can make my own pharmacy of any depressants I don't know let's see if I can make my own pain relievers let's see if I can make my own chemicals that cause my immune system to get stronger I'm just curious let's see now now the person is there out of the bleachers and they're on the field so they'll start believing that they can do it even if they change it a little bit and if they don't get it the first three days but they've seen a testimony of someone who has and you see", "start_timestamp": "00:38:51", "end_timestamp": "00:39:22", "start_second": 2331, "end_second": 2362, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=2331s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "that person who's totally happy that was abused every day of her life and they don't have the genetic disorder any longer you're gonna say wow that person doesn't look like a movie star that person doesn't look like they're vegetarian now that person doesn't look young and after whatever it is they're gonna look like a normal person and you're gonna say I identify with her identify her she can do it I'm into it now here's the cool part yeah we've seen then when people do this with the same stand on the stage and tell the story", "start_timestamp": "00:39:22", "end_timestamp": "00:39:49", "start_second": 2362, "end_second": 2389, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=2362s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! 
| Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "that there's that the person in the audience with the same genetic condition does it in a shorter amount of time because the evidence becomes a loudest voice why because it's in testimony yes there's truth right in front of you that's it's it's it's right in front of you there's you know we had a guy in Dubai that had he was in a wheelchair with this was this tumor in his spine and the doctor sent him home to die he came across my book like a week before they got the book becoming supernatural mm-hmm and and then he read the book and", "start_timestamp": "00:39:49", "end_timestamp": "00:40:23", "start_second": 2389, "end_second": 2423, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=2389s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "he somehow got in there someone gave him a spot he was in a wheelchair stage for cancer or go home and died severe paralysis a limitation crowding the spinal cord the whole bit nothing we can do for you all these pain meds excruciating pain one week after the week long his tumor reduced by 30% I just saw him in Munich he's he can walk he's walking without his wheelchair I mean he isn't it he's in a new experience he he's believing now in himself and when you believe in yourself you believe in possibility you can't", "start_timestamp": "00:40:23", "end_timestamp": "00:41:00", "start_second": 2423, "end_second": 2460, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=2423s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! 
| Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "have one without the other do you believe in yourself when you've never believed in yourself and you've got doubt inside of you all day long knowledge knowledge knowledge keep learning keep studying keep listening to it sooner or later that'll become a louder voice in your head than I can it's too hard I'll never change yeah and you me knowledge of knowledge of new philosophies knowledge of new skills knowledge of new anybody study if you don't want to read get on YouTube and one of course talk about how you can", "start_timestamp": "00:41:00", "end_timestamp": "00:41:25", "start_second": 2460, "end_second": 2485, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=2460s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "change your cheese just start looking starts looking at the testimonies and websites I mean we have over 450 testimonies now people heal themselves not small burritos amazing start looking to see what did that person do and when they tell their story that's worse than yours you're gonna start going wow that person really had a tough one and a day overcame it well jeez why like well I could just forgive my father right now I give you right now I want to let them go and I want to be free but I don't want I don't want to give them my attention", "start_timestamp": "00:41:25", "end_timestamp": "00:41:56", "start_second": 2485, "end_second": 2516, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=2485s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! 
| Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "because I give him my energy I wanna but I put my attention in my future soon later you're gonna come to it the way you do but if you don't have the knowledge then you believe in it last see you see there's people that we I saw this just recently and I was looking closely and there are people that do this work do do this transformation work in the meditations that we teach and they're so impatient and they're so entitled and they want an instantaneous change from the lack of feeling right that they never overcome themselves in", "start_timestamp": "00:41:56", "end_timestamp": "00:42:32", "start_second": 2516, "end_second": 2552, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=2516s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "their meditation they never overcome themselves in the meditation and when they finish their meditation they believe in this work less hmm then there are people who say I can tell you the moment I made up my mind to change because I had reached the end and I made a decision and that decision to change carried an amplitude of energy that was greater than the hardwired programs in my brain and the emotional conditioning in my body and my body literally responded to my mind in that moment that the choice that I made became a moment", "start_timestamp": "00:42:32", "end_timestamp": "00:43:03", "start_second": 2552, "end_second": 2583, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=2552s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! 
| Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "in time I would never forget and they'll tell you and that's the moment I remembered when I was going to change now those people then when they sit down to do the work there that the chemotherapy has worked the injections didn't work the radiation didn't work the surgery didn't work the diet didn't work the yoga didn't work this is this is now their end they have nothing else to believe in but themselves hmm and they go all in not 50% not 60% they're going all in they have nothing else to believe in now listen when they", "start_timestamp": "00:43:03", "end_timestamp": "00:43:39", "start_second": 2583, "end_second": 2619, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=2583s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "go a little bit outside the known they've got they went a little further than where they normally would stop they push themselves to that next limit they started believing in themselves that they could do it a little bit more they they finish the meditation and they get up and believing it's it's it's working more than working less they're the person that's believing in themselves that's why because it's not the work it's your belief in yourself right it's and when you believe in yourself you believe in possibilities when you", "start_timestamp": "00:43:39", "end_timestamp": "00:44:05", "start_second": 2619, "end_second": 2645, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=2619s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! 
| Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "believe in possibilities you got to believe in yourself who wasn't gonna believe it so people make these great strides in in the and and their own personal growth is a testament to the living organist and the living organism the species of human beings that's starting to believe well maybe we're not so a slit limited as we've been programmed to believe maybe we are more unlimited in and I'd rather throw in with that and if you don't think you you're your immune system isn't aware of viruses that could it could handle any", "start_timestamp": "00:44:05", "end_timestamp": "00:44:34", "start_second": 2645, "end_second": 2674, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=2645s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "R5xV9BTYXL4", "text": "virus if I got the right signal and if it was in a state of wholeness and your thymus was activated and your blood flow to that Center was turned on because you decided to turn it on and and you decided to release those chemicals that suppress the survival centers in your brain because oxytocin does and you wanted to stay there for a period of time and memorize that feeling I guarantee you that thymosin would begin to signal those t-cells and those t-cells would activate their t-cell receptors and those t-cell receptors", "start_timestamp": "00:44:34", "end_timestamp": "00:45:04", "start_second": 2674, "end_second": 2704, "url": "https://www.youtube.com/watch?v=R5xV9BTYXL4&t=2674s", "title": "DO THIS Everyday To Unlock The FULL POWER Of Your Mind! 
| Joe Dispenza", "thumbnail": "https://i.ytimg.com/vi/R5xV9BTYXL4/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "good morning everyone the first talk of today is by Jason Lee Corporation University of audacious of departing star customer meetings and over characterization and transition yeah so of course obviously deep learning super successful at blah blah blah it solves a lot of stuff it's successful in a lot of things some people I've even gone as far as claiming it's the new electricity of course when we have electricity we should look there's a field call electrical engineering the studies it I guess it's kind of a purpose of us being", "start_timestamp": "00:00:00", "end_timestamp": "00:00:34", "start_second": 0, "end_second": 34, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=0s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "here this summer so deep learning is not quite a science yet it's fairly crazy that things regardless of whether these things where people will try crazy things like these cyclic learning rates learning rate to nice games are pretty crazy they're fragile even things like changing seeds and it's sort of like we're trying to build a working system by trial and error instead of understanding something like civil engineering and then sort of you know been our deep learner has built this thing but you didn't take a structural", "start_timestamp": "00:00:34", "end_timestamp": "00:01:09", "start_second": 34, "end_second": 69, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=34s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "engineering class so that's what happened so this talk is trying to about unkind understand deep 
learning so motivate of course by this well-known paper I added an optimization at the end I think they forgot that they left out something pretty important there okay so understanding deep learning and we so there's obviously two aspects our goal is to minimize test error we want to do say multi-class classification as usual you can think of as two components an optimization error and the generalization error something", "start_timestamp": "00:01:09", "end_timestamp": "00:01:44", "start_second": 69, "end_second": 104, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=69s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "controlled by the algorithm and something simply statistical so there's of course this problem is challenging in both aspects the optimizations non-convex non-smooth the global landscape right nothing is very good about it lots of local minima and just in general it's difficult the reason about the only thing we can see are really gradients at Hessians and so forth local information and you want to draw global conclusions on the statistical side similarly bad there's more parameters than samples that generalization error literally", "start_timestamp": "00:01:44", "end_timestamp": "00:02:19", "start_second": 104, "end_second": 139, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=104s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "depends on everything your algorithm you're learning race schedule your architecture initialization scheme algorithm parameters such as momentum so forth okay so how to deal with optimization luckily this is probably where it's most well understood is simply people have realized that you can always change the 
model that if one model is difficult to fit we don't ever believe any model is correct or anything like that you just fit a bigger model and bigger models are easier for SGD to fit which is somewhat of an", "start_timestamp": "00:02:19", "end_timestamp": "00:02:55", "start_second": 139, "end_second": 175, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=139s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "empirical fact I know so you can have this model with many bad local minima do something and then SGD works so over-parameterization okay here's an experiment showing this this experiment is essentially replicated from this paper of Livni Shamir and Shalev-Shwartz so on the left hand side the data comes from a teacher network that's fairly small compact it has 50 neurons on the learner side the network has double the number of neurons there's a hundred neurons but still the data comes from a network with only 50 neurons so if you", "start_timestamp": "00:02:55", "end_timestamp": "00:03:37", "start_second": 175, "end_second": 217, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=175s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "try to run SGD on the well-specified architecture which is very sensible assuming you sat through a statistics class if the model comes from the family you want to be well specified the MLE is asymptotically efficient da da da whatever but at the end of the day it actually fails so statistical efficiency doesn't matter right if you can't get training error low why do you care about statistical efficiency because you cannot find the MLE you run SGD five times it gets five different answers none of which are the global minima I", 
"start_timestamp": "00:03:37", "end_timestamp": "00:04:08", "start_second": 217, "end_second": 248, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=217s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "could have done this five thousand times I don't think you would have found the global minimum okay now in stark contrast the same data comes from the same source and I simply slightly change the architecture the only thing is doubling the number of neurons and I run SGD again five times and every single time it gets loss value zero and this is also test error this is SGD on fresh samples of data this is not training error so SGD is finding the global minima of the population loss empirically so seemingly over", "start_timestamp": "00:04:08", "end_timestamp": "00:04:48", "start_second": 248, "end_second": 288, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=248s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "parameterization does in fact make optimization easier and in this case because it's test error you could have even extracted some statistical statement here okay so recently in a series of papers by myself and others it was also briefly talked about yesterday Trenton has also worked on this I'm summarizing all of these results because they're all similar enough that I don't want to spend time distinguishing they basically say if you have very wide networks and you initialize randomly with appropriate", "start_timestamp": "00:04:48", "end_timestamp": "00:05:22", "start_second": 288, "end_second": 322, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=288s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, 
and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "variances that are not crazy gradient descent converges to an epsilon global minimizer as long as the loss is convex your learning rate is small roughly speaking or works for any model so when you're sufficiently over privatized you get an epsilon approximate global minimizer of the training loss let me sketch a proof of why this could be true of course when you're over parametrized looking at the parameters is not a great idea they're not identifiable there's permutations and all sorts of other invariances that come out that the", "start_timestamp": "00:05:22", "end_timestamp": "00:05:59", "start_second": 322, "end_second": 359, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=322s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "parameters don't really respect it's a better idea to look at how the prediction changes so if you compute the change in the loss you'll find that it's related to the outer product of a certain Jacobian matrix or though or the gram matrix as long as this gram matrix is strictly positive definite you get a contraction so then if I could ensure that this thing stayed positive definite then you convert to a global minimizer so why does this the first thing is to establish that it started positive definite that's pretty clear random", "start_timestamp": "00:05:59", "end_timestamp": "00:06:33", "start_second": 359, "end_second": 393, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=359s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "matrices are positive definite so this is some sort of just a concentration 
and perturbation analysis to show that initialization the gram matrix state is positive-definite and then simply when you're very over Prime Christ M is the width of the networks then it stays positive definite there's always hiding a bunch of stuff it's whatever it's something and then you need am big enough to make this smaller than lambda zero okay so over parameterization essentially what it does is if it's forcing in this initialization scheme it's forcing the", "start_timestamp": "00:06:33", "end_timestamp": "00:07:08", "start_second": 393, "end_second": 428, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=393s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "grab matrix to stay very stable close to this initialization and thus stay well conditioned and finally you get the great from this you can extract statements of this type foursquare loss you can say exactly converges to global minima at a log 1 over epsilon rate if this will say logistic loss you could say that it converges an epsilon global minimizer where Absalon is determined by the width if you want in smaller you would need wider because logistic loss is not global we strongly come back to so many locally well regardless for wide", "start_timestamp": "00:07:08", "end_timestamp": "00:07:42", "start_second": 428, "end_second": 462, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=428s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "enough networks as long as your loss is any convex function your you can get a statement like this ok so after we so here's a simpler way to interpret these statements if you don't want to read 15 pages of calculating algebra one way to interpret these 
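The gram-matrix argument sketched in this part of the talk can be checked numerically. A minimal sketch, assuming a one-hidden-layer ReLU net with a fixed output layer where only the first layer is trained; the widths and scalings are assumptions, not the settings of the papers being summarized.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, width = 20, 5, 512               # few samples, very wide hidden layer

X = rng.normal(size=(n, d))
W = rng.normal(size=(width, d))                       # random first-layer init
a = rng.choice([-1.0, 1.0], width) / np.sqrt(width)   # fixed output layer

# Jacobian of the n predictions w.r.t. the trainable weights vec(W):
# d f(x_i) / d W_jk = a_j * 1[w_j . x_i > 0] * x_ik
gates = (X @ W.T > 0).astype(float)                   # n x width ReLU gates
J = (gates * a)[:, :, None] * X[:, None, :]           # n x width x d
J = J.reshape(n, width * d)

G = J @ J.T                                           # the gram matrix J J^T
eigs = np.linalg.eigvalsh(G)                          # ascending eigenvalues
lam_min, lam_max = eigs[0], eigs[-1]

# Linearized gradient descent: the residual evolves as r <- (I - eta G) r,
# a contraction whenever lam_min > 0 and 0 < eta < 2 / lam_max.
eta = 1.0 / lam_max
r = rng.normal(size=n)
r_next = r - eta * (G @ r)
print(lam_min > 0, np.linalg.norm(r_next) < np.linalg.norm(r))
```

At random initialization the gram matrix is generically positive definite; the content of the concentration-and-perturbation analysis described here is that for large enough width it stays close to its initial value throughout training, so the contraction persists.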
statements is as a statement about the local geometry of the loss function you take your random initialization draw a ball around it of some norm C for some appropriate C that depends on the width then every critical point in B is a global minimizer and there's at least one", "start_timestamp": "00:07:42", "end_timestamp": "00:08:23", "start_second": 462, "end_second": 503, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=462s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "global minimizer here and GD stays in this set so it's a statement about local geometry it's saying that locally things essentially look convex as long as you find a critical point in this local set it will be a global minimizer and in fact you do stay in this set so roughly the global geometry of deep networks is probably pretty bad I think it should have exponentially many bad local minima local minima with even high values of training loss however locally there is a global minimum locally there exists", "start_timestamp": "00:08:23", "end_timestamp": "00:09:01", "start_second": 503, "end_second": 541, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=503s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "global minima and you can find a region in which there are global minima but no local minima no local minima of higher loss yeah one way is you take an underparametrized network and start introducing neurons and set them to sort of cancel each other configure the weights you add in a way that cannot help but it's still a local minimum at least I can make it a higher order saddle but I'm not saying you'll find them
yeah I'm saying there are ones and certainly not near random initialization", "start_timestamp": "00:09:01", "end_timestamp": "00:09:50", "start_second": 541, "end_second": 590, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=541s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "oh I think I can always do this I take your architecture I just make a tiny one if you believe there's one in a very small architecture I start augmenting neurons in some useless way so it will look like a standard architecture but maybe the parameters will be in a weird configuration at least I can make a saddle a high order saddle okay so global geometry probably not very good local geometry very good there is a global minimum and gradient descent converges to it so we're kind of happy and then oh you think about it a little", "start_timestamp": "00:09:50", "end_timestamp": "00:10:31", "start_second": 590, "end_second": 631, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=590s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "bit more and then some papers start coming out and then you realize of course locally it's very good locally you can write down any function so f theta is my neural network function it's essentially equal to some f 0 it's not important think of f 0 as 0 or very close to 0 some order one quantity a gradient term and a second order term so of course since I'm local the second order term should be thought of as smaller than the linear term so we kind of just throw it away and that's what these papers are doing and", "start_timestamp": "00:10:31", "end_timestamp": "00:11:09", "start_second": 631, "end_second": 669, "url":
"https://www.youtube.com/watch?v=l0im8AJAMco&t=631s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "then so what's happening here is that if you're able to change the predictions f of X by an order one quantity while the Jacobian the gradient stays constant in a relative sense then the second order term is essentially vanishing the second order term is how fast this changes so if I can move f of X by order one which is all I need to move it because I need to move it to match Y while this stays constant then I'm done I've constructed a global minimizer that is very close to my initialization close", "start_timestamp": "00:11:09", "end_timestamp": "00:11:44", "start_second": 669, "end_second": 704, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=669s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "in the sense that there's no potential negative curvature and that's essentially what that proof was saying it was saying this H matrix the Gram matrix is simply outer products of these gradients and it was not moving a lot under these initialization schemes and widths and so forth ok so Chizat and Bach came up with a nice sufficient condition for when you should expect this they call it the kernel regime to happen essentially the sufficient condition is that think of Y minus f 0 as order one", "start_timestamp": "00:11:44", "end_timestamp": "00:12:21", "start_second": 704, "end_second": 741, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=704s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id":
"l0im8AJAMco", "text": "we initialize a network so it outputs an order one quantity that's random so this is like 1 because Y is order 1 and they basically pointed out that if your Hessian divided by the squared norm of the gradient is smaller than one then the second order term doesn't matter and your gradient dynamics track very closely for some constant amount of time the dynamics of gradient descent on a kernel machine this is very intuitive because it's essentially saying exactly that how much the Hessian changes is very small relative to the gradient and the", "start_timestamp": "00:12:21", "end_timestamp": "00:13:00", "start_second": 741, "end_second": 780, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=741s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "gradient when the gradient is big then you only need to move the parameters a little bit and you move your predictions by a lot so roughly in all of the ways we initialize the Hessian is something like going to zero with the width and the gradient is order one and so we have this sort of linear behavior in some region around the initialization okay so any questions here before I move on you're saying the final state is in a perturbative approximation of the initial one and that this linear approximation is good is this sort of an idealized model", "start_timestamp": "00:13:00", "end_timestamp": "00:13:43", "start_second": 780, "end_second": 823, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=780s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "motivated by the networks that work and how do you think about this analysis is it appropriate does this change so this result I would think of as saying if
this sufficient condition holds and you use a very small learning rate then the gradient dynamics on f theta f theta sub t and you can construct a new function that is linear call it f bar theta t these are very close for a constant amount of time so the gradient dynamics track those of gradient dynamics on a linear model for some amount of time right but", "start_timestamp": "00:13:43", "end_timestamp": "00:14:27", "start_second": 823, "end_second": 867, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=823s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "that's like a statement of the result yes the motivating slide you had up front was a set of empirical results you know sort of empirical results in real models you know with these sort of things no this is my state oh yeah there was the ICLR paper I think from two years ago I know that was on state of the art right that paper was about real models this was about generalization I'm only talking about optimization right now that paper is about generalization this statement", "start_timestamp": "00:14:27", "end_timestamp": "00:15:06", "start_second": 867, "end_second": 906, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=867s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "has nothing to do with these I mean you can extract a statement about statistical error here but it's not a strong statement this is really an optimization thing I would say I mean that's what I'm asking well in your mind what does this say about the state of the art motivation would you be able to give me you could ask whether the
initial configuration and the final configuration are close like this they're not close really not close like this so I mean that in your mind and", "start_timestamp": "00:15:06", "end_timestamp": "00:15:42", "start_second": 906, "end_second": 942, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=906s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "what does this tell you I think it tells you that you can get training error small very quickly initially in the first few steps so it seems that one could come up with empirical protocols and somehow see how relevant this is to what's going on in practice also you said a word that they're not close does that mean that we ran experiments and you know this does not really capture what's going on in standard models yes there's some careful papers not by me careful papers that say that this is", "start_timestamp": "00:15:42", "end_timestamp": "00:16:22", "start_second": 942, "end_second": 982, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=942s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "not yes and is there any evidence that indicates otherwise or is it depends on like your parameter settings if you set the learning rate small it agrees well if you set the learning rate big then no it disagrees if you set it small it agrees yeah but there are certain values that we usually use in practice so and the ones that give good test error disagree a pretty simple question I hope but how should we think about Hessians for models that don't actually have Hessians you should think about it as okay one quantitative", "start_timestamp": "00:16:22", "end_timestamp": "00:17:03", "start_second":
982, "end_second": 1023, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=982s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "way to think about it is the change of activation pattern sigma prime is the activation pattern and sigma prime prime is how much those change in some measure it's not about the Hessian itself you don't care about the Hessian's movement what we are worried about is the gradient moving a lot because that means your feature scheme is moving a lot you want to ensure the feature scheme does not move so you need to bound the size of the movement of the feature", "start_timestamp": "00:17:03", "end_timestamp": "00:17:53", "start_second": 1023, "end_second": 1073, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=1023s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "scheme which is the Hessian relative to the size of the gradient because the gradient multiplied by your movement is your change in F that's why it's a relative measure yeah so in which norm are these measured that has some subtlety because still the size of everything the only parameter moving is f so then the norm doesn't matter but the size of the parameter depends on that sorry it should be l2 to spectral but everything in that paper pretends so the only thing changing is f sure but", "start_timestamp": "00:17:53", "end_timestamp": "00:18:29", "start_second": 1073, "end_second": 1109, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=1073s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail":
"https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "I mean just even the size also yeah if you're careful it's polynomial this paper doesn't talk about it I'm not sure they talk about it okay so this at least tells us that at least locally optimization is not a big deal so perhaps you can believe that if your learning rate initially is not very big you can get your optimization error kind of small if you try to use this kind of method to get an analysis of generalization error unsurprisingly since your feature scheme is not changing what you'll find is", "start_timestamp": "00:18:29", "end_timestamp": "00:19:18", "start_second": 1109, "end_second": 1158, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=1109s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "simply that it has the same prediction exactly as that of a kernel method the kernel is called the neural tangent kernel and your generalization bound will look exactly like ridge regression it will be something like label Y transpose kernel inverse Y over n whole thing under square root so that's some generalization bound you can extract from this style of analysis I don't think it's very tight in fact I'll talk about that later okay so let's look a little bit more at generalization error what's happening a very standard", "start_timestamp": "00:19:18", "end_timestamp": "00:19:56", "start_second": 1158, "end_second": 1196, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=1158s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "bound in generalization error is some complexity over N for example in kernel methods it would be Y transpose K inverse Y that's sort
of the RKHS norm in ridge regression it could be VC dimension VC dimension is roughly number of parameters times the depth in feed-forward networks these are all kind of big they're bigger than the sample size which is exactly what the ICLR paper was pointing out it was pointing out essentially that the number of parameters is roughly 20 to 30 times the number of samples in many", "start_timestamp": "00:19:56", "end_timestamp": "00:20:29", "start_second": 1196, "end_second": 1229, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=1196s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "models and of course so this slide is stolen from Nati or I think from Tengyu actually who stole it from Nati and I copied it from Tengyu so we've all seen this many times I'm sure by now so what they did in the ICLR paper is the black line is a training curve and if you took statistical learning you would kind of expect the red line to go up well it doesn't it goes down so overparametrization does not seem to hurt generalization in this case in fact in this one example it kind of helps a", "start_timestamp": "00:20:29", "end_timestamp": "00:21:08", "start_second": 1229, "end_second": 1268, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=1229s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "little you can improve your generalization even after interpolation and there's even more evidence that the number of parameters isn't hurting if you plot the imagenet top one error over time then you see that in fact for these very very big networks the top one error is quite small it keeps decreasing here I mean this is cherry picked
from this paper but there's clearly networks with 600 million parameters and their test error is very low so throwing in parameters in a non stupid way will not hurt your generalization okay", "start_timestamp": "00:21:08", "end_timestamp": "00:21:46", "start_second": 1268, "end_second": 1306, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=1268s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "so what's going on here let's turn to margin theory and see what it tells us so this margin theory is very classical it basically says that if you're very far from the decision boundary you should be very good why is that Peter Bartlett and Mendelson formalized this almost 20 years ago now basically your generalization gap is upper bounded by some complexity measure divided by the margin so if you can ensure some sort of Rademacher complexity that's sort of size free explicitly independent of the width then", "start_timestamp": "00:21:46", "end_timestamp": "00:22:24", "start_second": 1306, "end_second": 1344, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=1306s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "you can get good generalization error so here's some papers that talk about how to upper bound the numerator here I'm only going to talk about how to lower bound the denominator okay so how do you get solutions with good margin let's look at the simplest loss function with the simplest regularization scheme you have logistic loss with some norm regularizer for example weight decay the l2 norm you would kind of hope so this is the global max margin this is the best you can do if you search over all models in your parametric family and
you took", "start_timestamp": "00:22:24", "end_timestamp": "00:23:01", "start_second": 1344, "end_second": 1381, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=1344s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "you kept the one with largest margin gamma star so you would hope you can get gamma star and in fact if you do a very good job of minimizing this regularized functional for small lambda then you do get gamma star so in other words assuming optimization works weak l2 regularization will get you very high margin or the best possible margin okay this proof is simple unlike a lot of proofs in this area I think this is genuinely simple okay the proof is essentially you write down", "start_timestamp": "00:23:01", "end_timestamp": "00:23:44", "start_second": 1381, "end_second": 1424, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=1381s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "the logistic loss realize that logistic loss when its argument is very large can be approximated by Taylor's theorem and it is essentially an exponential an exponential of a lot of terms added together you only need to keep the smallest one because the rest are exponentially smaller okay so this line is Taylor's theorem and then this one says if you add a bunch of like e to the minus ten plus e to the minus hundred you only need to care about e to the minus ten so only the smallest of these terms matter that's why there's a min here and", "start_timestamp": "00:23:44", "end_timestamp": "00:24:19", "start_second": 1424, "end_second": 1459, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=1424s", "title": "On the
Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "then from this you can kind of read off the results if lambda is very small it's saying let me use as much norm as possible but among those I need to fit I need to make the worst case margin good so among solutions of the same norm we prefer the largest margin okay so how does overparametrization improve the margin this is completely obvious if I have a network and inside it I have a sub network of this network the margin can only improve so if you have a Rademacher bound that's independent of the explicit number of", "start_timestamp": "00:24:19", "end_timestamp": "00:24:55", "start_second": 1459, "end_second": 1495, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=1459s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "parameters the margin is only improving this denominator can only get bigger okay so let's talk a little bit about how to optimize when you have a regularizer the first thing to realize is that none of these results using NTK and local convexity or whatever can ever handle this the regularizer induces a sort of tie breaking among the global minimizers and the global minimizers are not equivalent like you can have two global minimizers of the unregularized training loss and they will have drastically different regularization", "start_timestamp": "00:24:55", "end_timestamp": "00:25:32", "start_second": 1495, "end_second": 1532, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=1495s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "effects
so the regularization induces a tiebreaking and now you really do need to find like a global minimizer of this regularized objective so how to do this the answer is not fully satisfying but we can say something so take a very very very very wide network think of this as like wider than anything you've seen exponential in D or whatever infinite okay then run gradient descent with a particular noise scheme then you converge to a global minimizer in polynomial time polynomial in the dimension and in one over epsilon", "start_timestamp": "00:25:32", "end_timestamp": "00:26:09", "start_second": 1532, "end_second": 1569, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=1532s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "something like poly D over epsilon to the fourth or something okay so gradient descent on very very very overparametrized networks converges to global minimizers so overparametrization does help even when you have regularization the mechanism by which it helps is very different from this local convex intuition the intuition here is that when you're very very overparametrized there's a descent direction in function space but if you have a finite a small width network you might miss it in function space because you might miss", "start_timestamp": "00:26:09", "end_timestamp": "00:26:46", "start_second": 1569, "end_second": 1606, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=1569s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "the direction of descent in function space that's essentially what the Frank-Wolfe algorithm will do but computing a Frank-Wolfe step is exponential time so but instead if you had a ton of neurons and you add
noise then there might be an exponentially small fraction of your neurons that see this Frank-Wolfe direction but then ReLU is homogeneous and then although it's an exponentially small signal it goes up exponentially too and these things carefully balance and at the end you do get a polynomial time result", "start_timestamp": "00:26:46", "end_timestamp": "00:27:16", "start_second": 1606, "end_second": 1636, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=1606s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "so because the number is very large what's the relationship to the NTK regime this is no longer NTK if you change anything NTK is very fragile and I remember some of these results rely on lambda being extremely small you set lambda in whatever way let's say you want gamma equals point one and you set lambda depending on that it's polynomial iterations to get within a constant it's not polynomial time one iteration is exponential time because you have exponentially many parameters", "start_timestamp": "00:27:16", "end_timestamp": "00:27:58", "start_second": 1636, "end_second": 1678, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=1636s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "some polynomial iterations or time in the PDE sense and one other question so in the generalization bound you're talking about a size free Rademacher bound right obtaining the margin with a very light regularization parameter the Rademacher bound might depend on say norms which then have to be balanced against this resulting margin yes you should think of whatever norm it is let's concretely take the
one which is just the Frobenius norm of everything then that would be weight decay okay all right so you might ask um", "start_timestamp": "00:27:58", "end_timestamp": "00:28:52", "start_second": 1678, "end_second": 1732, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=1678s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "okay so why add this regularizer I can already get the global min of the logistic loss without this regularization term what am I getting here is my statistical sample complexity much better and it turns out it is let's look at a very simple data set on the first two coordinates the data looks like this it's like a two-d XOR essentially then every other coordinate is just standard normal so there's two coordinates that have signal and that's the ones you want to pay attention to the rest of the coordinates are completely uncorrelated", "start_timestamp": "00:28:52", "end_timestamp": "00:29:22", "start_second": 1732, "end_second": 1762, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=1732s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "with the label so the noise coordinates are kind of fooling you and they exactly do fool a kernel method a kernel method is unable to localize onto the first two dimensions so it has to look over all dimensions and it pays sample complexity at least d squared to get small error if you want error less than some absolute constant you need to pay at least d squared samples because a kernel method to solve this problem needs to form at least pairwise degree two polynomials that has complexity d squared however for that sort of two-d XOR", "start_timestamp": "00:29:22", "end_timestamp": "00:29:57", "start_second": 1762,
"end_second": 1797, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=1762s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "construction there's a neural net with four neurons and then so the sample complexity of learning a neural net with four neurons is something like four D over n so D over n roughly so there's a clear sample complexity separation with the regularizer you can learn with D samples without the regularizer you need at least d squared samples so regularization explicit regularization helps yes yes yes good you might put it it's not but we don't yeah it's not clear yeah yeah they might take exponential time we don't know no we know it takes", "start_timestamp": "00:29:57", "end_timestamp": "00:30:49", "start_second": 1797, "end_second": 1849, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=1797s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "exponential time however you scale things but I can scale this so it takes you exponential time but then you can increase your learning rate anyway this analysis is for this specific setting yeah I guess I should be more precise to be more precise this one is better than the NTK one let's put it that way so then it covers the unregularized and the regularized case okay so do you need regularization it turns out you can do pretty well if you look at these numbers without regularization so like the gain here is not from regularizing you gain 5% but if you just", "start_timestamp": "00:30:49", "end_timestamp": "00:31:39", "start_second": 1849, "end_second": 1899, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=1849s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization",
"thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "change the architecture you can gain around 10% so SGD without regularization does already have very good generalization perhaps it's not state of the art but certainly the bulk is not from these regularization methods okay so let's turn to a simple example of logistic regression so even if you have separable logistic regression the problem is convex but not strongly or strictly convex so there's many many global minimizers and you may be wondering which one does it converge to this is exactly what Soudry Hoffer and", "start_timestamp": "00:31:39", "end_timestamp": "00:32:13", "start_second": 1899, "end_second": 1933, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=1899s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "Srebro asked a year or two ago and they showed that this converges to the SVM solution in a very precise sense if you run gradient descent for a long long long time then you normalize because the norm blows up you get exactly the l2 SVM solution after doing some normalization this is quite amazing if you haven't seen it before there's all these directions why the heck should gradient descent get the one of minimum l2 norm that maximizes the l2 margin right I mean this seems maybe it should depend on the learning rate how", "start_timestamp": "00:32:13", "end_timestamp": "00:32:44", "start_second": 1933, "end_second": 1964, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=1933s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "you initialize it could depend on anything and this result is quite stable any learning rate that's stable it
converges to the l2 SVM solution. Ok so then you might be thinking, what is special about gradient descent? Let me write down gradient descent in a very suggestive way, I'll just write it down, so it's trying to maximize the correlation, you know, yeah, because the logistic loss gradient is never zero, so it's impossible to have a zero vanishing gradient, even if you, this is in contrast to least-squares I", "start_timestamp": "00:32:44", "end_timestamp": "00:33:36", "start_second": 1964, "end_second": 2016, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=1964s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "think, which is where you're thinking, yes, okay, what was I saying, yes, this norm, okay, the important thing is that you write gradient descent in this suggestive way. What it's saying is that gradient descent is trying to maximally decrease the loss value of the function while paying an infinitely small amount of l2 norm; the key thing is it's trying to use the least amount of l2 norm to achieve this goal. So now you might ask, what if I change the norm? This gives you a family of steepest descent algorithms, and in fact you can", "start_timestamp": "00:33:36", "end_timestamp": "00:34:10", "start_second": 2016, "end_second": 2050, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=2016s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "prove that for this entire family of steepest descent algorithms you converge to the SVM given in that norm. Okay, so for example the l1 norm, this is exactly a form of boosting; this was proved a long, long time ago in a PhD thesis, that it in fact does maximize the l1 margin, and the same proof works for all norms. Okay so some examples:
coordinate descent, which is steepest descent with respect to the l1 norm, you're going to maximize l1 margin, it's not quite AdaBoost, it's AdaBoost with some damping step size, so AdaBoost with a dampened step size maximizes", "start_timestamp": "00:34:10", "end_timestamp": "00:34:52", "start_second": 2050, "end_second": 2092, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=2050s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "the l1 margin; AdaBoost itself does not. Sign gradient methods, commonly used now to save on communication, get you some l-infinity bias, and of course gradient descent is steepest descent with respect to l2. Okay, so that's great, we understand logistic regression very well, that's sort of always the starting point. So now the question is, what does it do on deep networks? If you're very, very, very optimistic you may hope that it solves max margin even if you do no regularization, because then you will", "start_timestamp": "00:34:52", "end_timestamp": "00:35:25", "start_second": 2092, "end_second": 2125, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=2092s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "have some generalization bound that only depends on this up here, and that seems good because now you can get the optimal margin. Of course, notice that this is an SVM problem, but it's a non-convex SVM problem. Now this SVM problem, unlike the linear case, has many, many first-order stationary points, and a first-order stationary point is not a global max, so of course you cannot prove that it converges to the global maximizer. What you're able to prove is that it converges to a first-order optimal point
of the following SVM. This", "start_timestamp": "00:35:25", "end_timestamp": "00:35:57", "start_second": 2125, "end_second": 2157, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=2125s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "is some nonlinear program, it has first-order KKT conditions, and assuming, the only assumption here is homogeneity, then if you do gradient descent on exponential loss or logistic loss you get a first-order optimal point, a first-order stationary point of this. Okay, so you wanted to prove it gets max margin, that's probably not possible because you're running a local search algorithm, but at least you can characterize that whatever you converge to is very special, in the sense that it's precisely a critical point of", "start_timestamp": "00:35:57", "end_timestamp": "00:36:34", "start_second": 2157, "end_second": 2194, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=2157s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "a fairly intuitive optimization problem. Okay, so I've talked about, I didn't really talk about the implicit regularization of NTK, but if you think about what a kernel method is, you can write down the sort of implicit bias of NTK, or the inductive bias of it, as sort of, you're trying to find the thing that maximizes the margin, the worst-case margin, but you stay infinitely close to your initialization. So this top one, theta hat K, is exactly what would happen if you did NTK on logistic loss and terminated after some amount of time a", "start_timestamp": "00:36:34", "end_timestamp": "00:37:12", "start_second": 2194, "end_second": 2232, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=2194s", "title": "On the
Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "very long time, you would approximately get maximum margin, maximum in a different sense, in the sense that you have to stay infinitely close to this ball. What this previous result by us and Haiphong Liu and Jen showed is that actually you get a stationary point of this following program, and in this program you're letting the parameter move infinitely far from its initialization, so it's completely forgetting where it was initialized, it's running forever and ever, which you'll always do when you have a", "start_timestamp": "00:37:12", "end_timestamp": "00:37:45", "start_second": 2232, "end_second": 2265, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=2232s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "logistic loss, and when you run forever and ever you try to maximize the margin. Okay, these are of course two endpoints of an extreme case: on one of these you're trying to stay infinitely close to your initialization, on the other you're moving infinitely far, so they're very different endpoints. Probably, in my opinion, what's interesting is when this is not quite zero, you're going slightly further than the linear regime but you're clearly not going to this super asymptotic regime, you've deviated slightly from the NTK", "start_timestamp": "00:37:45", "end_timestamp": "00:38:20", "start_second": 2265, "end_second": 2300, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=2265s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "and what is happening there we don't
really know, but I think that's probably likely to correspond closer to practice than the other endpoint, because things like large learning rate, finite width will cause you to sort of not stay infinitely close. Yeah, okay, so the final thing I want to talk about is how does architecture matter. So I've told you kind of that for gradient descent, asymptotically, the bias it gets you is an l2 regularization on all of the parameters, but that's sort of uninteresting, why should you ever care", "start_timestamp": "00:38:20", "end_timestamp": "00:38:52", "start_second": 2300, "end_second": 2332, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=2300s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "about the parameters? Again, parameters have no meaning in neural nets, the only thing that matters at the end is your function f, this is all you see at test time, the parameter is just a way to encode f. So what you should really ask is how does this bias translate over here, what is the bias on the prediction function, and this is clearly where architecture matters. So for example here, 1, 3 and 4, there was no regularization, the algorithm was just SGD, the only thing that changes is the architecture, yet you get a huge improvement in the test error", "start_timestamp": "00:38:52", "end_timestamp": "00:39:23", "start_second": 2332, "end_second": 2363, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=2332s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "luckily I don't know the answer for neural nets, but in linear networks there's a very crisp answer: you can take a linear function, write it as a product of matrices, then if you have l2 regularization on the W's, which comes from gradient
descent, it translates to a Schatten quasi-norm on theta, on the linear function. Okay, so regardless of the depth and the width, the prediction function class stays linear, you know, you can write a linear map, a matrix, as a product: one matrix written as a product of five matrices is still a matrix, the function", "start_timestamp": "00:39:23", "end_timestamp": "00:40:02", "start_second": 2363, "end_second": 2402, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=2363s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "class never changes, so simply varying the parameterization, combined with the algorithm, changes the inductive bias, and here you can characterize it in a very precise sense: whatever gradient descent converges to is a stationary point of some problem with this Schatten quasi-norm. Okay, a similar phenomenon in convolutional networks, same thing, you can take a linear convolutional net and write it as a composition of predictors, you also get a quasi-norm, except now it's sparsity instead of on the singular values, okay", "start_timestamp": "00:40:02", "end_timestamp": "00:40:50", "start_second": 2402, "end_second": 2450, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=2402s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "no it stops its learning like it's always changing zone on versatility the function is infinitely flat so in fact you can even grow the life some solar league rate of those convergence yeah you need sufficient design there's nothing okay. So some random thoughts, let me finish. Of course we've seen now that by training neural nets that stay close to initialization you're using a kernel class, so how do you go beyond kernels? I mentioned
this yesterday and of course one thing missing here is how will distributional assumptions help", "start_timestamp": "00:40:50", "end_timestamp": "00:41:59", "start_second": 2450, "end_second": 2519, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=2450s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "most of these results have not used distributional assumptions in any meaningful way, I would say. I mean, there's a lot of results about learning a single ReLU, single convolutional filters, they use Gaussian assumptions of course, and even then you can't say the Gaussians are helping you that much, there were other ways to learn it, just maybe not quite gradient descent. So what are some reasonable distributional assumptions that can help us learn things? Kernels don't really depend on the distribution, they're optimization", "start_timestamp": "00:41:59", "end_timestamp": "00:42:32", "start_second": 2519, "end_second": 2552, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=2519s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "at least. So what happens right after the NTK regime? We figured out the endpoints, in a sense: we figured out what happens locally, we've figured out what happens if you move infinitely far, but the whole middle I don't know, it's kind of interesting. And architecture design and inductive bias: so you know people come out with new architectures every week, and how does this change the inductive bias of SGD? If I do SGD on a ResNet versus an AmoebaNet block, then you know, what is changing here? Let's say I just use SGD on both, but the", "start_timestamp": "00:42:32", "end_timestamp": "00:43:04", "start_second": 2552, "end_second": 2584, "url":
"https://www.youtube.com/watch?v=l0im8AJAMco&t=2552s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "functions I learn are different how are they different and why does amoeba net whatever generalize better [Applause] nothing GD is just easier to analyze most of the results hold for a skewed yeah that was for single oh no that's a matrix this one's a matrix this one's a matrix you minimize there's two results one is you minimize cross-entropy with explicit weight DK you get the quasi norm second one is you minimize cross-entropy with TD no there's no optimization these are statements about global minimum okay", "start_timestamp": "00:43:04", "end_timestamp": "00:44:32", "start_second": 2584, "end_second": 2672, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=2584s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": "l0im8AJAMco", "text": "the first statement is a statement about global minimum the second statement is invoking this following theorem this says that you convert a force or the optimal point of a problem an SVM where this is the Shatan clause anymore so the inductive bias of having deep that's in a linear network using logistic loss is getting closer to is a crank this is about first-order okay so gradient descent the algorithm is involved here it's about gradient descent you run gradient descent something happens and what happens is a", "start_timestamp": "00:44:32", "end_timestamp": "00:45:19", "start_second": 2672, "end_second": 2719, "url": "https://www.youtube.com/watch?v=l0im8AJAMco&t=2672s", "title": "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization", "thumbnail": "https://i.ytimg.com/vi/l0im8AJAMco/maxresdefault.jpg"} {"video_id": 
"b-yhKUINb7o", "text": "[Music] in this video we'll be discussing the concept of semi-supervised learning semi-supervised learning kind of takes a middle ground between supervised learning and unsupervised learning as a quick refresher recall from previous videos that supervised learning is the learning that occurs during training of an artificial neural network when the data in our training set is labeled unsupervised learning on the other hand is the learning that occurs when the data in our training set is not labeled so now onto semi-supervised learning", "start_timestamp": "00:00:00", "end_timestamp": "00:00:39", "start_second": 0, "end_second": 39, "url": "https://www.youtube.com/watch?v=b-yhKUINb7o&t=0s", "title": "Semi-supervised Learning explained", "thumbnail": "https://i.ytimg.com/vi/b-yhKUINb7o/maxresdefault.jpg"} {"video_id": "b-yhKUINb7o", "text": "semi-supervised learning uses a combination of supervised and unsupervised learning techniques and that's because in a scenario where we'd make use of semi-supervised learning we'd have a combination of both labeled and unlabeled data let's expand on this idea with an example say we have access to a large unlabeled data set that we'd like to train a model on and that manually labeling all of the state ourselves is just not practical well we could go through and manually label some portion of this large data set ourselves", "start_timestamp": "00:00:39", "end_timestamp": "00:01:08", "start_second": 39, "end_second": 68, "url": "https://www.youtube.com/watch?v=b-yhKUINb7o&t=39s", "title": "Semi-supervised Learning explained", "thumbnail": "https://i.ytimg.com/vi/b-yhKUINb7o/maxresdefault.jpg"} {"video_id": "b-yhKUINb7o", "text": "and use that portion to train our model and this is fine in fact this is how a lot of data use for neural networks becomes labeled but you know if we have access to large amounts of data and we've only labeled some small portion of the data then what a waste it would be to just leave 
all the other unlabeled data on the table I mean after all we know the more data we have to train a model on the better and more robust our model will be so what can we do to make use of the remaining unlabeled data in our data set well one thing we can do is", "start_timestamp": "00:01:08", "end_timestamp": "00:01:37", "start_second": 68, "end_second": 97, "url": "https://www.youtube.com/watch?v=b-yhKUINb7o&t=68s", "title": "Semi-supervised Learning explained", "thumbnail": "https://i.ytimg.com/vi/b-yhKUINb7o/maxresdefault.jpg"} {"video_id": "b-yhKUINb7o", "text": "implement a technique that falls under the category of semi-supervised learning called pseudo labeling this is how pseudo labeling works so as just mentioned we've already labeled some portion of our data set now we're going to use this label data as the training set for our model we're then going to train our model just as we would with any other labelled data set okay and then just through the regular training process we get our model performing pretty well so everything we've done up to this point has been just regular old", "start_timestamp": "00:01:37", "end_timestamp": "00:02:05", "start_second": 97, "end_second": 125, "url": "https://www.youtube.com/watch?v=b-yhKUINb7o&t=97s", "title": "Semi-supervised Learning explained", "thumbnail": "https://i.ytimg.com/vi/b-yhKUINb7o/maxresdefault.jpg"} {"video_id": "b-yhKUINb7o", "text": "supervised learning in practice now here's where the unsupervised learning piece comes into play after we've trained our model on the labeled portion of the data set we then use our model to predict on the remaining unlabeled portion of data we then take these predictions and label each piece of unlabeled data with the individual outputs that were predicted for them this process of labeling the unlabeled data with the output that was predicted by our neural network is the very essence of pseudo labeling now after labeling the unlabeled data", "start_timestamp": "00:02:05", 
"end_timestamp": "00:02:34", "start_second": 125, "end_second": 154, "url": "https://www.youtube.com/watch?v=b-yhKUINb7o&t=125s", "title": "Semi-supervised Learning explained", "thumbnail": "https://i.ytimg.com/vi/b-yhKUINb7o/maxresdefault.jpg"} {"video_id": "b-yhKUINb7o", "text": "through this pseudo labeling process we then train our model on the full data set which is now comprised of both the data that was actually truly labeled along with the data that was pseudo labeled through the use of pseudo labeling were able to train on a vastly larger data set we're also able to train on data that otherwise may have potentially taken many tedious hours of human labor to manually label the data as you can imagine sometimes the cost of acquiring or generating a fully label data set is just too high or the pure", "start_timestamp": "00:02:34", "end_timestamp": "00:03:04", "start_second": 154, "end_second": 184, "url": "https://www.youtube.com/watch?v=b-yhKUINb7o&t=154s", "title": "Semi-supervised Learning explained", "thumbnail": "https://i.ytimg.com/vi/b-yhKUINb7o/maxresdefault.jpg"} {"video_id": "b-yhKUINb7o", "text": "act of generating all the labels itself is just not feasible so through this process we can see how this approach makes use of both supervised learning with the labeled data and unsupervised learning with the unlabeled data which together give us the practice of semi-supervised learning so hopefully now you have an understanding of what semi-supervised learning is and how you may apply it and practice through the use of pseudo labeling and I hope you found this video helpful if you did please like the video subscribe suggest", "start_timestamp": "00:03:04", "end_timestamp": "00:03:33", "start_second": 184, "end_second": 213, "url": "https://www.youtube.com/watch?v=b-yhKUINb7o&t=184s", "title": "Semi-supervised Learning explained", "thumbnail": "https://i.ytimg.com/vi/b-yhKUINb7o/maxresdefault.jpg"} {"video_id": "qG_YmIPFO68", "text": "I want to talk to 
you about the future of medicine. But before I do that, I want to talk a little bit about the past. Now, throughout much of the recent history of medicine, we've thought about illness and treatment in terms of a profoundly simple model. In fact, the model is so simple that you could summarize it in six words: have disease, take pill, kill something. Now, the reason for the dominance of this model is of course the antibiotic revolution. Many of you might not know this, but we happen to be celebrating", "start_timestamp": "00:00:00", "end_timestamp": "00:00:53", "start_second": 0, "end_second": 53, "url": "https://www.youtube.com/watch?v=qG_YmIPFO68&t=0s", "title": "Soon We'll Cure Diseases With a Cell, Not a Pill | Siddhartha Mukherjee | TED Talks", "thumbnail": "https://i.ytimg.com/vi/qG_YmIPFO68/maxresdefault.jpg"} {"video_id": "qG_YmIPFO68", "text": "the hundredth year of the introduction of antibiotics into the United States. But what you do know is that that introduction was nothing short of transformative. Here you had a chemical, either from the natural world or artificially synthesized in the laboratory, and it would course through your body, it would find its target, lock into its target -- a microbe or some part of a microbe -- and then turn off a lock and a key with exquisite deftness, exquisite specificity. And you would end up taking a previously fatal, lethal disease --", "start_timestamp": "00:00:53", "end_timestamp": "00:01:33", "start_second": 53, "end_second": 93, "url": "https://www.youtube.com/watch?v=qG_YmIPFO68&t=53s", "title": "Soon We'll Cure Diseases With a Cell, Not a Pill | Siddhartha Mukherjee | TED Talks", "thumbnail": "https://i.ytimg.com/vi/qG_YmIPFO68/maxresdefault.jpg"} {"video_id": "qG_YmIPFO68", "text": "a pneumonia, syphilis, tuberculosis -- and transforming that into a curable, or treatable illness. You have a pneumonia, you take penicillin, you kill the microbe and you cure the disease. 
So seductive was this idea, so potent the metaphor of lock and key and killing something, that it really swept through biology. It was a transformation like no other. And we've really spent the last 100 years trying to replicate that model over and over again in noninfectious diseases, in chronic diseases like diabetes and hypertension and heart disease.", "start_timestamp": "00:01:33", "end_timestamp": "00:02:17", "start_second": 93, "end_second": 137, "url": "https://www.youtube.com/watch?v=qG_YmIPFO68&t=93s", "title": "Soon We'll Cure Diseases With a Cell, Not a Pill | Siddhartha Mukherjee | TED Talks", "thumbnail": "https://i.ytimg.com/vi/qG_YmIPFO68/maxresdefault.jpg"} {"video_id": "qG_YmIPFO68", "text": "And it's worked, but it's only worked partly. Let me show you. You know, if you take the entire universe of all chemical reactions in the human body, every chemical reaction that your body is capable of, most people think that that number is on the order of a million. Let's call it a million. And now you ask the question, what number or fraction of reactions can actually be targeted by the entire pharmacopoeia, all of medicinal chemistry? That number is 250. The rest is chemical darkness. In other words, 0.025 percent of all chemical reactions in your body", "start_timestamp": "00:02:17", "end_timestamp": "00:03:00", "start_second": 137, "end_second": 180, "url": "https://www.youtube.com/watch?v=qG_YmIPFO68&t=137s", "title": "Soon We'll Cure Diseases With a Cell, Not a Pill | Siddhartha Mukherjee | TED Talks", "thumbnail": "https://i.ytimg.com/vi/qG_YmIPFO68/maxresdefault.jpg"} {"video_id": "qG_YmIPFO68", "text": "are actually targetable by this lock and key mechanism. You know, if you think about human physiology as a vast global telephone network with interacting nodes and interacting pieces, then all of our medicinal chemistry is operating on one tiny corner at the edge, the outer edge, of that network. 
It's like all of our pharmaceutical chemistry is a pole operator in Wichita, Kansas who is tinkering with about 10 or 15 telephone lines. So what do we do about this idea? What if we reorganized this approach? In fact, it turns out that the natural world", "start_timestamp": "00:03:00", "end_timestamp": "00:03:47", "start_second": 180, "end_second": 227, "url": "https://www.youtube.com/watch?v=qG_YmIPFO68&t=180s", "title": "Soon We'll Cure Diseases With a Cell, Not a Pill | Siddhartha Mukherjee | TED Talks", "thumbnail": "https://i.ytimg.com/vi/qG_YmIPFO68/maxresdefault.jpg"} {"video_id": "qG_YmIPFO68", "text": "gives us a sense of how one might think about illness in a radically different way, rather than disease, medicine, target. In fact, the natural world is organized hierarchically upwards, not downwards, but upwards, and we begin with a self-regulating, semi-autonomous unit called a cell. These self-regulating, semi-autonomous units give rise to self-regulating, semi-autonomous units called organs, and these organs coalesce to form things called humans, and these organisms ultimately live in environments, which are partly self-regulating and partly semi-autonomous.", "start_timestamp": "00:03:47", "end_timestamp": "00:04:32", "start_second": 227, "end_second": 272, "url": "https://www.youtube.com/watch?v=qG_YmIPFO68&t=227s", "title": "Soon We'll Cure Diseases With a Cell, Not a Pill | Siddhartha Mukherjee | TED Talks", "thumbnail": "https://i.ytimg.com/vi/qG_YmIPFO68/maxresdefault.jpg"} {"video_id": "qG_YmIPFO68", "text": "What's nice about this scheme, this hierarchical scheme building upwards rather than downwards, is that it allows us to think about illness as well in a somewhat different way. Take a disease like cancer. Since the 1950s, we've tried rather desperately to apply this lock and key model to cancer. We've tried to kill cells using a variety of chemotherapies or targeted therapies, and as most of us know, that's worked. 
It's worked for diseases like leukemia. It's worked for some forms of breast cancer, but eventually you run to the ceiling of that approach.", "start_timestamp": "00:04:32", "end_timestamp": "00:05:12", "start_second": 272, "end_second": 312, "url": "https://www.youtube.com/watch?v=qG_YmIPFO68&t=272s", "title": "Soon We'll Cure Diseases With a Cell, Not a Pill | Siddhartha Mukherjee | TED Talks", "thumbnail": "https://i.ytimg.com/vi/qG_YmIPFO68/maxresdefault.jpg"} {"video_id": "qG_YmIPFO68", "text": "And it's only in the last 10 years or so that we've begun to think about using the immune system, remembering that in fact the cancer cell doesn't grow in a vacuum. It actually grows in a human organism. And could you use the organismal capacity, the fact that human beings have an immune system, to attack cancer? In fact, it's led to the some of the most spectacular new medicines in cancer. And finally there's the level of the environment, isn't there? You know, we don't think of cancer as altering the environment.", "start_timestamp": "00:05:12", "end_timestamp": "00:05:41", "start_second": 312, "end_second": 341, "url": "https://www.youtube.com/watch?v=qG_YmIPFO68&t=312s", "title": "Soon We'll Cure Diseases With a Cell, Not a Pill | Siddhartha Mukherjee | TED Talks", "thumbnail": "https://i.ytimg.com/vi/qG_YmIPFO68/maxresdefault.jpg"} {"video_id": "qG_YmIPFO68", "text": "But let me give you an example of a profoundly carcinogenic environment. It's called a prison. You take loneliness, you take depression, you take confinement, and you add to that, rolled up in a little white sheet of paper, one of the most potent neurostimulants that we know, called nicotine, and you add to that one of the most potent addictive substances that you know, and you have a pro-carcinogenic environment. But you can have anti-carcinogenic environments too. 
There are attempts to create milieus, change the hormonal milieu for breast cancer, for instance.", "start_timestamp": "00:05:41", "end_timestamp": "00:06:20", "start_second": 341, "end_second": 380, "url": "https://www.youtube.com/watch?v=qG_YmIPFO68&t=341s", "title": "Soon We'll Cure Diseases With a Cell, Not a Pill | Siddhartha Mukherjee | TED Talks", "thumbnail": "https://i.ytimg.com/vi/qG_YmIPFO68/maxresdefault.jpg"} {"video_id": "qG_YmIPFO68", "text": "We're trying to change the metabolic milieu for other forms of cancer. Or take another disease, like depression. Again, working upwards, since the 1960s and 1970s, we've tried, again, desperately to turn off molecules that operate between nerve cells -- serotonin, dopamine -- and tried to cure depression that way, and that's worked, but then that reached the limit. And we now know that what you really probably need to do is to change the physiology of the organ, the brain, rewire it, remodel it, and that, of course, we know study upon study has shown", "start_timestamp": "00:06:20", "end_timestamp": "00:06:55", "start_second": 380, "end_second": 415, "url": "https://www.youtube.com/watch?v=qG_YmIPFO68&t=380s", "title": "Soon We'll Cure Diseases With a Cell, Not a Pill | Siddhartha Mukherjee | TED Talks", "thumbnail": "https://i.ytimg.com/vi/qG_YmIPFO68/maxresdefault.jpg"} {"video_id": "qG_YmIPFO68", "text": "that talk therapy does exactly that, and study upon study has shown that talk therapy combined with medicines, pills, really is much more effective than either one alone. Can we imagine a more immersive environment that will change depression? Can you lock out the signals that elicit depression? Again, moving upwards along this hierarchical chain of organization. What's really at stake perhaps here is not the medicine itself but a metaphor. 
Rather than killing something, in the case of the great chronic degenerative diseases --", "start_timestamp": "00:06:55", "end_timestamp": "00:07:31", "start_second": 415, "end_second": 451, "url": "https://www.youtube.com/watch?v=qG_YmIPFO68&t=415s", "title": "Soon We'll Cure Diseases With a Cell, Not a Pill | Siddhartha Mukherjee | TED Talks", "thumbnail": "https://i.ytimg.com/vi/qG_YmIPFO68/maxresdefault.jpg"} {"video_id": "qG_YmIPFO68", "text": "kidney failure, diabetes, hypertension, osteoarthritis -- maybe what we really need to do is change the metaphor to growing something. And that's the key, perhaps, to reframing our thinking about medicine. Now, this idea of changing, of creating a perceptual shift, as it were, came home to me to roost in a very personal manner about 10 years ago. About 10 years ago -- I've been a runner most of my life -- I went for a run, a Saturday morning run, I came back and woke up and I basically couldn't move. My right knee was swollen up,", "start_timestamp": "00:07:31", "end_timestamp": "00:08:01", "start_second": 451, "end_second": 481, "url": "https://www.youtube.com/watch?v=qG_YmIPFO68&t=451s", "title": "Soon We'll Cure Diseases With a Cell, Not a Pill | Siddhartha Mukherjee | TED Talks", "thumbnail": "https://i.ytimg.com/vi/qG_YmIPFO68/maxresdefault.jpg"} {"video_id": "qG_YmIPFO68", "text": "and you could hear that ominous crunch of bone against bone. And one of the perks of being a physician is that you get to order your own MRIs. And I had an MRI the next week, and it looked like that. Essentially, the meniscus of cartilage that is between bone had been completely torn and the bone itself had been shattered. Now, if you're looking at me and feeling sorry, let me tell you a few facts. 
If I was to take an MRI of every person in this audience, 60 percent of you would show signs of bone degeneration and cartilage degeneration like this.", "start_timestamp": "00:08:01", "end_timestamp": "00:08:36", "start_second": 481, "end_second": 516, "url": "https://www.youtube.com/watch?v=qG_YmIPFO68&t=481s", "title": "Soon We'll Cure Diseases With a Cell, Not a Pill | Siddhartha Mukherjee | TED Talks", "thumbnail": "https://i.ytimg.com/vi/qG_YmIPFO68/maxresdefault.jpg"} {"video_id": "qG_YmIPFO68", "text": "85 percent of all women by the age of 70 would show moderate to severe cartilage degeneration. 50 to 60 percent of the men in this audience would also have such signs. So this is a very common disease. Well, the second perk of being a physician is that you can get to experiment on your own ailments. So about 10 years ago we began, we brought this process into the laboratory, and we began to do simple experiments, mechanically trying to fix this degeneration. We tried to inject chemicals into the knee spaces of animals", "start_timestamp": "00:08:36", "end_timestamp": "00:09:08", "start_second": 516, "end_second": 548, "url": "https://www.youtube.com/watch?v=qG_YmIPFO68&t=516s", "title": "Soon We'll Cure Diseases With a Cell, Not a Pill | Siddhartha Mukherjee | TED Talks", "thumbnail": "https://i.ytimg.com/vi/qG_YmIPFO68/maxresdefault.jpg"} {"video_id": "qG_YmIPFO68", "text": "to try to reverse cartilage degeneration, and to put a short summary on a very long and painful process, essentially it came to naught. Nothing happened. And then about seven years ago, we had a research student from Australia. The nice thing about Australians is that they're habitually used to looking at the world upside down. (Laughter) And so Dan suggested to me, \"You know, maybe it isn't a mechanical problem. Maybe it isn't a chemical problem. 
Maybe it's a stem cell problem.\" In other words, he had two hypotheses.", "start_timestamp": "00:09:08", "end_timestamp": "00:09:41", "start_second": 548, "end_second": 581, "url": "https://www.youtube.com/watch?v=qG_YmIPFO68&t=548s", "title": "Soon We'll Cure Diseases With a Cell, Not a Pill | Siddhartha Mukherjee | TED Talks", "thumbnail": "https://i.ytimg.com/vi/qG_YmIPFO68/maxresdefault.jpg"} {"video_id": "qG_YmIPFO68", "text": "Number one, there is such a thing as a skeletal stem cell -- a skeletal stem cell that builds up the entire vertebrate skeleton, bone, cartilage and the fibrous elements of skeleton, just like there's a stem cell in blood, just like there's a stem cell in the nervous system. And two, that maybe that, the degeneration or dysfunction of this stem cell is what's causing osteochondral arthritis, a very common ailment. So really the question was, were we looking for a pill when we should have really been looking for a cell.", "start_timestamp": "00:09:41", "end_timestamp": "00:10:08", "start_second": 581, "end_second": 608, "url": "https://www.youtube.com/watch?v=qG_YmIPFO68&t=581s", "title": "Soon We'll Cure Diseases With a Cell, Not a Pill | Siddhartha Mukherjee | TED Talks", "thumbnail": "https://i.ytimg.com/vi/qG_YmIPFO68/maxresdefault.jpg"} {"video_id": "qG_YmIPFO68", "text": "So we switched our models, and now we began to look for skeletal stem cells. And to cut again a long story short, about five years ago, we found these cells. They live inside the skeleton. Here's a schematic and then a real photograph of one of them. The white stuff is bone, and these red columns that you see and the yellow cells are cells that have arisen from one single skeletal stem cell -- columns of cartilage, columns of bone coming out of a single cell. These cells are fascinating. 
They have four properties.", "start_timestamp": "00:10:08", "end_timestamp": "00:10:42", "start_second": 608, "end_second": 642, "url": "https://www.youtube.com/watch?v=qG_YmIPFO68&t=608s", "title": "Soon We'll Cure Diseases With a Cell, Not a Pill | Siddhartha Mukherjee | TED Talks", "thumbnail": "https://i.ytimg.com/vi/qG_YmIPFO68/maxresdefault.jpg"} {"video_id": "qG_YmIPFO68", "text": "Number one is that they live where they're expected to live. They live just underneath the surface of the bone, underneath cartilage. You know, in biology, it's location, location, location. And they move into the appropriate areas and form bone and cartilage. That's one. Here's an interesting property. You can take them out of the vertebrate skeleton, you can culture them in petri dishes in the laboratory, and they are dying to form cartilage. Remember how we couldn't form cartilage for love or money? These cells are dying to form cartilage.", "start_timestamp": "00:10:42", "end_timestamp": "00:11:11", "start_second": 642, "end_second": 671, "url": "https://www.youtube.com/watch?v=qG_YmIPFO68&t=642s", "title": "Soon We'll Cure Diseases With a Cell, Not a Pill | Siddhartha Mukherjee | TED Talks", "thumbnail": "https://i.ytimg.com/vi/qG_YmIPFO68/maxresdefault.jpg"} {"video_id": "qG_YmIPFO68", "text": "They form their own furls of cartilage around themselves. They're also, number three, the most efficient repairers of fractures that we've ever encountered. This is a little bone, a mouse bone that we fractured and then let it heal by itself. These stem cells have come in and repaired, in yellow, the bone, in white, the cartilage, almost completely. 
So much so that if you label them with a fluorescent dye you can see them like some kind of peculiar cellular glue coming into the area of a fracture, fixing it locally and then stopping their work.", "start_timestamp": "00:11:11", "end_timestamp": "00:11:43", "start_second": 671, "end_second": 703, "url": "https://www.youtube.com/watch?v=qG_YmIPFO68&t=671s", "title": "Soon We'll Cure Diseases With a Cell, Not a Pill | Siddhartha Mukherjee | TED Talks", "thumbnail": "https://i.ytimg.com/vi/qG_YmIPFO68/maxresdefault.jpg"} {"video_id": "qG_YmIPFO68", "text": "Now, the fourth one is the most ominous, and that is that their numbers decline precipitously, precipitously, tenfold, fiftyfold, as you age. And so what had happened, really, is that we found ourselves in a perceptual shift. We had gone hunting for pills but we ended up finding theories. And in some ways we had hooked ourselves back onto this idea: cells, organisms, environments, because we were now thinking about bone stem cells, we were thinking about arthritis in terms of a cellular disease. And then the next question was, are there organs?", "start_timestamp": "00:11:43", "end_timestamp": "00:12:20", "start_second": 703, "end_second": 740, "url": "https://www.youtube.com/watch?v=qG_YmIPFO68&t=703s", "title": "Soon We'll Cure Diseases With a Cell, Not a Pill | Siddhartha Mukherjee | TED Talks", "thumbnail": "https://i.ytimg.com/vi/qG_YmIPFO68/maxresdefault.jpg"} {"video_id": "qG_YmIPFO68", "text": "Can you build this as an organ outside the body? Can you implant cartilage into areas of trauma? And perhaps most interestingly, can you ascend right up and create environments? You know, we know that exercise remodels bone, but come on, none of us is going to exercise. So could you imagine ways of passively loading and unloading bone so that you can recreate or regenerate degenerating cartilage? 
And perhaps more interesting, and more importantly, the question is, can you apply this model more globally outside medicine?", "start_timestamp": "00:12:20", "end_timestamp": "00:12:52", "start_second": 740, "end_second": 772, "url": "https://www.youtube.com/watch?v=qG_YmIPFO68&t=740s", "title": "Soon We'll Cure Diseases With a Cell, Not a Pill | Siddhartha Mukherjee | TED Talks", "thumbnail": "https://i.ytimg.com/vi/qG_YmIPFO68/maxresdefault.jpg"} {"video_id": "qG_YmIPFO68", "text": "What's at stake, as I said before, is not killing something, but growing something. And it raises a series of, I think, some of the most interesting questions about how we think about medicine in the future. Could your medicine be a cell and not a pill? How would we grow these cells? What would we do to stop the malignant growth of these cells? We heard about the problems of unleashing growth. Could we implant suicide genes into these cells to stop them from growing? Could your medicine be an organ that's created outside the body", "start_timestamp": "00:12:52", "end_timestamp": "00:13:29", "start_second": 772, "end_second": 809, "url": "https://www.youtube.com/watch?v=qG_YmIPFO68&t=772s", "title": "Soon We'll Cure Diseases With a Cell, Not a Pill | Siddhartha Mukherjee | TED Talks", "thumbnail": "https://i.ytimg.com/vi/qG_YmIPFO68/maxresdefault.jpg"} {"video_id": "qG_YmIPFO68", "text": "and then implanted into the body? Could that stop some of the degeneration? What if the organ needed to have memory? In cases of diseases of the nervous system some of those organs had memory. How could we implant those memories back in? Could we store these organs? Would each organ have to be developed for an individual human being and put back? And perhaps most puzzlingly, could your medicine be an environment? Could you patent an environment?
You know, in every culture, shamans have been using environments as medicines.", "start_timestamp": "00:13:29", "end_timestamp": "00:14:04", "start_second": 809, "end_second": 844, "url": "https://www.youtube.com/watch?v=qG_YmIPFO68&t=809s", "title": "Soon We'll Cure Diseases With a Cell, Not a Pill | Siddhartha Mukherjee | TED Talks", "thumbnail": "https://i.ytimg.com/vi/qG_YmIPFO68/maxresdefault.jpg"} {"video_id": "qG_YmIPFO68", "text": "Could we imagine that for our future? I've talked a lot about models. I began this talk with models. So let me end with some thoughts about model building. That's what we do as scientists. You know, when an architect builds a model, he or she is trying to show you a world in miniature. But when a scientist is building a model, he or she is trying to show you the world in metaphor. He or she is trying to create a new way of seeing. The former is a scale shift. The latter is a perceptual shift. Now, antibiotics created such a perceptual shift", "start_timestamp": "00:14:04", "end_timestamp": "00:14:43", "start_second": 844, "end_second": 883, "url": "https://www.youtube.com/watch?v=qG_YmIPFO68&t=844s", "title": "Soon We'll Cure Diseases With a Cell, Not a Pill | Siddhartha Mukherjee | TED Talks", "thumbnail": "https://i.ytimg.com/vi/qG_YmIPFO68/maxresdefault.jpg"} {"video_id": "qG_YmIPFO68", "text": "in our way of thinking about medicine that it really colored, distorted, very successfully, the way we've thought about medicine for the last hundred years. But we need new models to think about medicine in the future. That's what's at stake. You know, there's a popular trope out there that the reason we haven't had the transformative impact on the treatment of illness is because we don't have powerful-enough drugs, and that's partly true. 
But perhaps the real reason is that we don't have powerful-enough ways of thinking about medicines.", "start_timestamp": "00:14:43", "end_timestamp": "00:15:20", "start_second": 883, "end_second": 920, "url": "https://www.youtube.com/watch?v=qG_YmIPFO68&t=883s", "title": "Soon We'll Cure Diseases With a Cell, Not a Pill | Siddhartha Mukherjee | TED Talks", "thumbnail": "https://i.ytimg.com/vi/qG_YmIPFO68/maxresdefault.jpg"} {"video_id": "qG_YmIPFO68", "text": "It's certainly true that it would be lovely to have new medicines. But perhaps what's really at stake are three more intangible M's: mechanisms, models, metaphors. Thank you. (Applause) Chris Anderson: I really like this metaphor. How does it link in? There's a lot of talk in technologyland about the personalization of medicine, that we have all this data and that medical treatments of the future will be for you specifically, your genome, your current context. Does that apply to this model you've got here? Siddhartha Mukherjee: It's a very interesting question.", "start_timestamp": "00:15:20", "end_timestamp": "00:16:10", "start_second": 920, "end_second": 970, "url": "https://www.youtube.com/watch?v=qG_YmIPFO68&t=920s", "title": "Soon We'll Cure Diseases With a Cell, Not a Pill | Siddhartha Mukherjee | TED Talks", "thumbnail": "https://i.ytimg.com/vi/qG_YmIPFO68/maxresdefault.jpg"} {"video_id": "qG_YmIPFO68", "text": "We've thought about personalization of medicine very much in terms of genomics. That's because the gene is such a dominant metaphor, again, to use that same word, in medicine today, that we think the genome will drive the personalization of medicine. But of course the genome is just the bottom of a long chain of being, as it were. That chain of being, really the first organized unit of that, is the cell. 
So, if we are really going to deliver in medicine in this way, we have to think of personalizing cellular therapies,", "start_timestamp": "00:16:10", "end_timestamp": "00:16:40", "start_second": 970, "end_second": 1000, "url": "https://www.youtube.com/watch?v=qG_YmIPFO68&t=970s", "title": "Soon We'll Cure Diseases With a Cell, Not a Pill | Siddhartha Mukherjee | TED Talks", "thumbnail": "https://i.ytimg.com/vi/qG_YmIPFO68/maxresdefault.jpg"} {"video_id": "qG_YmIPFO68", "text": "and then personalizing organ or organismal therapies, and ultimately personalizing immersion therapies for the environment. So I think at every stage, you know -- there's that metaphor, there's turtles all the way. Well, in this, there's personalization all the way. CA: So when you say medicine could be a cell and not a pill, you're talking about potentially your own cells. SM: Absolutely. CA: So converted to stem cells, perhaps tested against all kinds of drugs or something, and prepared. SM: And there's no perhaps. This is what we're doing.", "start_timestamp": "00:16:40", "end_timestamp": "00:17:11", "start_second": 1000, "end_second": 1031, "url": "https://www.youtube.com/watch?v=qG_YmIPFO68&t=1000s", "title": "Soon We'll Cure Diseases With a Cell, Not a Pill | Siddhartha Mukherjee | TED Talks", "thumbnail": "https://i.ytimg.com/vi/qG_YmIPFO68/maxresdefault.jpg"} {"video_id": "r7cYsgB4G1s", "text": "Let's imagine for a few moments what our life would be like if we could access let's say 20% of our brain's capacity If you want to have something show up in your life The kind of person you would like to become manifest something new into your life something powerful, whatever it might be You obviously must first be able to imagine it Your imagination is able to do all that you ask in proportion to the degree of your attention So what kind of attention do you place on your desires? Einstein's most famous quote one of his most famous observations. 
He said imagination is more", "start_timestamp": "00:00:00", "end_timestamp": "00:00:59", "start_second": 0, "end_second": 59, "url": "https://www.youtube.com/watch?v=r7cYsgB4G1s&t=0s", "title": "YOU ARE THE CREATOR | Warning: This might shake up your belief system! Morgan Freeman and Wayne Dyer", "thumbnail": "https://i.ytimg.com/vi/r7cYsgB4G1s/maxresdefault.jpg"} {"video_id": "r7cYsgB4G1s", "text": "important than knowledge Knowledge is limited Imagination encircles the world Logic will get you from A to B, but imagination will take you everywhere Make your future dream a present fact by assuming the feeling of the wish fulfilled That which you feel yourself to be you are and You are given that which you are. So assume the feeling that would be yours were you already in possession of your wish and your wish must be realized so live in the feeling of being the to be and that you shall be if this assumption about what you would like to become is", "start_timestamp": "00:00:59", "end_timestamp": "00:01:58", "start_second": 59, "end_second": 118, "url": "https://www.youtube.com/watch?v=r7cYsgB4G1s&t=59s", "title": "YOU ARE THE CREATOR | Warning: This might shake up your belief system! Morgan Freeman and Wayne Dyer", "thumbnail": "https://i.ytimg.com/vi/r7cYsgB4G1s/maxresdefault.jpg"} {"video_id": "r7cYsgB4G1s", "text": "persisted in until it becomes your dominant feeling the attainment of your ideal is Absolutely inevitable You must first assume the feeling of a wish fulfilled in all aspects of your life Don't allow anybody elses opinions Don't allow what it says on the internet. Don't allow the research. 
Don't allow what anybody out there tells you is possible or not possible for you if you advance confidently in the direction of your own dreams and Endeavor to live the life, which you have imagined You will meet with a success unexpected", "start_timestamp": "00:01:58", "end_timestamp": "00:02:46", "start_second": 118, "end_second": 166, "url": "https://www.youtube.com/watch?v=r7cYsgB4G1s&t=118s", "title": "YOU ARE THE CREATOR | Warning: This might shake up your belief system! Morgan Freeman and Wayne Dyer", "thumbnail": "https://i.ytimg.com/vi/r7cYsgB4G1s/maxresdefault.jpg"} {"video_id": "r7cYsgB4G1s", "text": "in common hours it will chase after you if you can place into your Imagination what it is that you would like to attract and begin to feel it Start retraining your subconscious mind and your subconscious mind it responds to what it is that you suggest to it The subconscious mind moves your life 96 to 97 percent of everything that you do is done as a result of your subconscious mind And when your subconscious mind gets programmed it goes ahead and Respond to whatever it is. Your conscious mind has placed into it", "start_timestamp": "00:02:46", "end_timestamp": "00:03:45", "start_second": 166, "end_second": 225, "url": "https://www.youtube.com/watch?v=r7cYsgB4G1s&t=166s", "title": "YOU ARE THE CREATOR | Warning: This might shake up your belief system! Morgan Freeman and Wayne Dyer", "thumbnail": "https://i.ytimg.com/vi/r7cYsgB4G1s/maxresdefault.jpg"} {"video_id": "r7cYsgB4G1s", "text": "You are the creator this is the mystery This is the great secret known by the seers and prophets and mystics throughout the ages. 
This is the truth that you can never know intellectually Many of you as I have been as I am are where you are in your life Based upon what you believe and it's not just what you think you believe on the surface It's also your shadow beliefs that are holding you back from moving into the life that you believe You deserve What I know is if you're not looking at the shadows if you're not looking at what is", "start_timestamp": "00:03:45", "end_timestamp": "00:04:28", "start_second": 225, "end_second": 268, "url": "https://www.youtube.com/watch?v=r7cYsgB4G1s&t=225s", "title": "YOU ARE THE CREATOR | Warning: This might shake up your belief system! Morgan Freeman and Wayne Dyer", "thumbnail": "https://i.ytimg.com/vi/r7cYsgB4G1s/maxresdefault.jpg"} {"video_id": "r7cYsgB4G1s", "text": "Subconsciously running through the tape in your mind telling yourself. You're not good enough. You're not worthy enough. You're not smart enough You're not enough which is a tape that's playing for a lot of people If you're not conscious of that then you end up acting out of that Belief system and not out of what you know to be the truest or want to be the choice for yourself You are where you are today in part because of what you've been saying about yourself Words are like seeds when you speak something out. You give life to what you're saying if you continue to say it", "start_timestamp": "00:04:28", "end_timestamp": "00:05:09", "start_second": 268, "end_second": 309, "url": "https://www.youtube.com/watch?v=r7cYsgB4G1s&t=268s", "title": "YOU ARE THE CREATOR | Warning: This might shake up your belief system! Morgan Freeman and Wayne Dyer", "thumbnail": "https://i.ytimg.com/vi/r7cYsgB4G1s/maxresdefault.jpg"} {"video_id": "r7cYsgB4G1s", "text": "Eventually that can become a reality you Are planting seeds when you talk at some point you're going to eat that fruit. 
My challenge is make sure You're planting the right kind of seeds if you want apples You have to sow apple seeds if you want oranges You can't plant cactus seeds poison ivy seeds mushroom seeds You're going to reap fruit from the exact seeds that you've been sowing in other words You can't talk negative and expect to live a positive life You can't talk defeat and expect to have victory you can't talk lack", "start_timestamp": "00:05:09", "end_timestamp": "00:05:50", "start_second": 309, "end_second": 350, "url": "https://www.youtube.com/watch?v=r7cYsgB4G1s&t=309s", "title": "YOU ARE THE CREATOR | Warning: This might shake up your belief system! Morgan Freeman and Wayne Dyer", "thumbnail": "https://i.ytimg.com/vi/r7cYsgB4G1s/maxresdefault.jpg"} {"video_id": "r7cYsgB4G1s", "text": "You're not enough can't afford it Never get ahead and expect to have abundance if you have a poor mouth. You're going to have a poor life And this is great when we're saying things like I'm blessed I'm strong I will accomplish my dreams I'm coming out of there That's not just being positive. You are prophesying victory Prophesy success prophesy new levels and your life will move in the direction of your words But too many people go around prophesying just the opposite. I never get any good breaks. I'll never get back in shape", "start_timestamp": "00:05:50", "end_timestamp": "00:06:31", "start_second": 350, "end_second": 391, "url": "https://www.youtube.com/watch?v=r7cYsgB4G1s&t=350s", "title": "YOU ARE THE CREATOR | Warning: This might shake up your belief system! Morgan Freeman and Wayne Dyer", "thumbnail": "https://i.ytimg.com/vi/r7cYsgB4G1s/maxresdefault.jpg"} {"video_id": "r7cYsgB4G1s", "text": "Business is slow. I'll probably get laid off Flu season is here. I always get it.
They don't realize they are prophesying defeat It's just like they're calling in bad breaks mediocrity lack You don't become what you want because so much of wanting is about Living in the space of what you don't have that's why Jim Carrey's story is so powerful Because he started to act as though he already had it. He would go up to Mulholland Drive He would drive away thinking. I already have those things I just haven't accessed them as yet", "start_timestamp": "00:06:31", "end_timestamp": "00:07:07", "start_second": 391, "end_second": 427, "url": "https://www.youtube.com/watch?v=r7cYsgB4G1s&t=391s", "title": "YOU ARE THE CREATOR | Warning: This might shake up your belief system! Morgan Freeman and Wayne Dyer", "thumbnail": "https://i.ytimg.com/vi/r7cYsgB4G1s/maxresdefault.jpg"} {"video_id": "r7cYsgB4G1s", "text": "I believe Those things are going to come to me and I'm going to act like they are so I'm gonna move forward in my life in order to Draw that to myself in such a way that my actions are in alignment with what I say, I believe So if you start to think about that really why are you where you are in your life the choices that you have made? Have been because of what you believe to be true for yourself The time is now the time is now to express and for people to believe in themselves The time is now for it to be okay to be great
Morgan Freeman and Wayne Dyer", "thumbnail": "https://i.ytimg.com/vi/r7cYsgB4G1s/maxresdefault.jpg"} {"video_id": "r7cYsgB4G1s", "text": "People in this world shun people for being great for being a bright color for standing out but the time is now to be okay to be the greatest you You can talk yourself out of your destiny Negative words can keep you from becoming who you were created to be don't fall into that trap Quit calling in defeat quit talking about how it's not going to happen You should write down what you want to see happen in life Any areas that you're struggling in where you need to improve write it down like it's already done and then every day", "start_timestamp": "00:07:48", "end_timestamp": "00:08:31", "start_second": 468, "end_second": 511, "url": "https://www.youtube.com/watch?v=r7cYsgB4G1s&t=468s", "title": "YOU ARE THE CREATOR | Warning: This might shake up your belief system! Morgan Freeman and Wayne Dyer", "thumbnail": "https://i.ytimg.com/vi/r7cYsgB4G1s/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": " hi everyone, let's get started. Good afternoon and welcome to MIT 6.S191! This is really incredible to see the turnout this year. This is the fourth year now we're teaching this course and every single year it just seems to be getting bigger and bigger. 6.S191 is a one-week intensive boot camp on everything deep learning. In the past, at this point I usually try to give you a synopsis about the course and tell you all of the amazing things that you're going to be learning.
You'll be gaining fundamentals into deep learning and", "start_timestamp": "00:00:00", "end_timestamp": "00:00:43", "start_second": 0, "end_second": 43, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=0s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "learning some practical knowledge about how you can implement some of the algorithms of deep learning in your own research and on some cool lab related software projects. But this year I figured we could do something a little bit different and instead of me telling you how great this class is I figured we could invite someone else from outside the class to do that instead. So let's check this out first. Hi everybody and welcome to MIT 6.S191, the official introductory course on deep learning taught here at MIT. Deep", "start_timestamp": "00:00:43", "end_timestamp": "00:01:22", "start_second": 43, "end_second": 82, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=43s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "learning is revolutionising so many fields from robotics to medicine and everything in between. You'll learn the fundamentals of this field and how you can build some of these incredible algorithms. In fact, this entire speech and video are not real and were created using deep learning and artificial intelligence. And in this class you'll learn how. It has been an honor to speak with you today and I hope you enjoy the course! Alright, so as you can tell deep learning is an incredibly powerful tool. This was
This was", "start_timestamp": "00:01:22", "end_timestamp": "00:02:16", "start_second": 82, "end_second": 136, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=82s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "just an example of how we use deep learning to perform voice synthesis and actually emulate someone else's voice, in this case Barack Obama, and also using video dialogue replacement to actually create that video with the help of Canny AI. And of course you might as you're watching this video you might raise some ethical concerns which we're also very concerned about and we'll actually talk about some of those later on in the class as well. But let's start by taking a step back and actually introducing some of these terms that", "start_timestamp": "00:02:16", "end_timestamp": "00:02:51", "start_second": 136, "end_second": 171, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=136s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "we've been we've talked about so far now. Let's start with the word intelligence. I like to define intelligence as the ability to process information to inform future decisions. Now the field of artificial intelligence is simply the the field which focuses on building algorithms, in this case artificial algorithms that can do this as well: process information to inform future decisions. 
Now machine learning is just a subset of artificial intelligence specifically that focuses on actually teaching an algorithm how to do this without being explicitly programmed to", "start_timestamp": "00:02:51", "end_timestamp": "00:03:29", "start_second": 171, "end_second": 209, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=171s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "do the task at hand. Now deep learning is just a subset of machine learning which takes this idea even a step further and says how can we automatically extract the useful pieces of information needed to inform those future predictions or make a decision And that's what this class is all about teaching algorithms how to learn a task directly from raw data. We want to provide you with a solid foundation of how you can understand or how to understand these algorithms under the hood but also provide you with the", "start_timestamp": "00:03:29", "end_timestamp": "00:04:03", "start_second": 209, "end_second": 243, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=209s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "practical knowledge and practical skills to implement state-of-the-art deep learning algorithms in Tensorflow which is a very popular deep learning toolbox. Now we have an amazing set of lectures lined up for you this year including Today which will cover neural networks and deep sequential modeling. 
Tomorrow we'll talk about computer vision and also a little bit about generative modeling which is how we can generate new data and finally I will talk about deep reinforcement learning and touch on some of the limitations and new", "start_timestamp": "00:04:03", "end_timestamp": "00:04:36", "start_second": 243, "end_second": 276, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=243s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "frontiers of where this field might be going and how research might be heading in the next couple of years. We'll spend the final two days hearing about some of the guest lectures from top industry researchers on some really cool and exciting projects. Every year these happen to be really really exciting talks so we really encourage you to come especially for those talks. The class will conclude with some final project presentations which we'll talk about in a little a little bit and also some awards and a quick award ceremony to", "start_timestamp": "00:04:36", "end_timestamp": "00:05:06", "start_second": 276, "end_second": 306, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=276s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "celebrate all of your hard work. Also I should mention that after each day of lectures so after today we have two lectures and after each day of lectures we'll have a software lab which tries to focus and build upon all of the things that you've learned in that day so you'll get the foundation's during the lectures and you'll get the practical knowledge during the software lab so the two are kind of jointly coupled in that sense. 
For those of you taking this class for credit you have a couple different options to fulfill your credit", "start_timestamp": "00:05:06", "end_timestamp": "00:05:40", "start_second": 306, "end_second": 340, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=306s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "requirement first is a project proposal I'm sorry first yeah first you can propose a project in optionally groups of two three or four people and in these groups you'll work to develop a cool new deep learning idea and we realized that one week which is the span of this course is an extremely short amount of time to really not only think of an idea but move that idea past the planning stage and try to implement something so we're not going to be judging you on your results towards this idea but rather just the novelty of the idea", "start_timestamp": "00:05:40", "end_timestamp": "00:06:13", "start_second": 340, "end_second": 373, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=340s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "itself on Friday each of these three teams will give a three-minute presentation on that idea and the awards will be announced for the top winners judged by a panel of judges the second option in my opinion is a bit more boring but we like to give this option for people that don't like to give presentations so in this option if you don't want to work in a group or you don't want to give a presentation you can write a one-page review of a recent deep learning paper or any paper of your choice and this will be due on the last day of
"https://www.youtube.com/watch?v=njKP3FqW3Sk&t=373s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "class as well also I should mention that for the project presentations we give out all of these cool prizes especially these three NVIDIA GPUs which are really crucial for doing any sort of deep learning on your own so we definitely encourage everyone to enter this competition and have a chance to win these GPUs and these other cool prizes like a Google Home and SSD cards as well also each of the three labs will have corresponding prizes so instructions to actually enter those respective competitions will be within", "start_timestamp": "00:06:48", "end_timestamp": "00:07:26", "start_second": 408, "end_second": 446, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=408s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "the labs themselves and you can enter to win these different prizes depending on the lab please post on Piazza if you have questions check out the course website for slides there was a bug in the website but we fixed it so today's slides are already up digital recordings of each of these lectures will be up a few days after each class this course has an incredible team of TAs that you can reach out to if you have any questions especially during the software labs they can help you answer", "start_timestamp": "00:07:26", "end_timestamp": "00:08:00", "start_second": 446, "end_second": 480, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=446s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "any questions that you might have and 
finally we really want to give a huge thanks to all of our sponsors without whose help and support this class would not have been possible ok so now with all of that administrative stuff out of the way let's start with the fun stuff that we're all here for let's start actually by asking ourselves a question why do we care about deep learning why did all of you come to this classroom today and why specifically do you care about deep learning", "start_timestamp": "00:08:00", "end_timestamp": "00:08:28", "start_second": 480, "end_second": 508, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=480s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "now well to answer that question we actually have to go back and understand traditional machine learning at its core first now traditional machine learning algorithms typically try to define a set of rules or features in the data and these are usually hand engineered and because they're hand engineered they often tend to be brittle in practice so let's take a concrete example if you want to perform facial detection how might you go about doing that well first you might say to classify a face the first thing I'm gonna do is I'm gonna try and", "start_timestamp": "00:08:28", "end_timestamp": "00:09:00", "start_second": 508, "end_second": 540, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=508s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "classify or recognize if I see a mouth in the image the eyes ears and nose if I see all of those things then maybe I can say that there's a face in that image but then the question is okay but how do I recognize each of those sub things like how do I recognize an eye how do I recognize a mouth and then 
you have to decompose that into okay to recognize a mouth maybe I have to recognize pairs of oriented lines in a certain direction or orientation and then it keeps getting more complicated and at each of these steps you", "start_timestamp": "00:09:00", "end_timestamp": "00:09:30", "start_second": 540, "end_second": 570, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=540s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "kind of have to define a set of features that you're looking for in the image now the key idea of deep learning is that these features can be learned just from raw data so what you're going to do is just take a bunch of images of faces and then the deep learning algorithm is going to develop some hierarchical representation of first detecting lines and edges in the image using these lines and edges to detect corners and mid-level features like eyes noses mouths ears then composing these together to detect", "start_timestamp": "00:09:30", "end_timestamp": "00:09:59", "start_second": 570, "end_second": 599, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=570s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "higher-level features like maybe jaw lines side of the face etc which then can be used to detect the final face structure and actually the fundamental building blocks of deep learning have existed for decades and the underlying algorithms for training these models have also existed for many years so why are we studying this now well for one data has become much more pervasive we're living in the age of big data and these algorithms are hungry for huge amounts of data to succeed secondly these algorithms are massively", 
"start_timestamp": "00:09:59", "end_timestamp": "00:10:36", "start_second": 599, "end_second": 636, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=599s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "parallelizable which means that they can benefit tremendously from modern GPU architectures and hardware acceleration that simply did not exist when these algorithms were developed and finally due to open-source toolboxes like TensorFlow which you'll get experience with in this class building and deploying these models has become extremely streamlined so much so that we can condense all this material down into one week so let's start with the fundamental building block of a neural network which is a single neuron", "start_timestamp": "00:10:36", "end_timestamp": "00:11:08", "start_second": 636, "end_second": 668, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=636s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "or what's also called a perceptron the idea of a perceptron or a single neuron is very basic and I'll try and keep it as simple as possible and then we'll work our way up from there let's start by talking about the forward propagation of information through a neuron we define a set of inputs to that neuron as x1 through xm and each of these inputs has a corresponding weight w1 through wm now what we can do is multiply each input by its corresponding weight and take a sum of all of them then we take this single", "start_timestamp": "00:11:08", "end_timestamp": "00:11:47", "start_second": 668, "end_second": 707, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=668s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", 
"thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "number that summation and we pass it through what's called a nonlinear activation function and that produces our final output Y now this is actually not entirely correct we also have what's called a bias term in this neuron which you can see here in green the purpose of the bias term is really to allow you to shift your activation function to the left and to the right regardless of your inputs right so you can notice that the bias term is not affected by the X's it's just a bias associated with that neuron", "start_timestamp": "00:11:47", "end_timestamp": "00:12:20", "start_second": 707, "end_second": 740, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=707s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "now on the right side you can see this diagram illustrated mathematically as a single equation and we can actually rewrite this using linear algebra in terms of vectors and dot products so instead of having a summation over all of the X's I'm going to collapse my X into a vector capital X which is now just a vector of inputs and you also have a vector of weights capital W to compute the output of a single perceptron all you have to do is take the dot product of X and W which", "start_timestamp": "00:12:20", "end_timestamp": "00:12:56", "start_second": 740, "end_second": 776, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=740s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "represents that element-wise multiplication and summation and then apply that non-linearity which here is denoted as G so now you might be wondering what is this 
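The three-step forward pass just described — dot product of inputs with weights, add a bias, apply a nonlinearity — can be sketched in a few lines of NumPy (an illustrative sketch, not the lecture's own code; the input and weight values here are made up):

```python
import numpy as np

def sigmoid(z):
    # squashes any real number into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def perceptron(x, w, b):
    # y = g(w . x + b): dot product, add bias, apply the nonlinearity g
    z = np.dot(w, x) + b
    return sigmoid(z)

x = np.array([1.0, 2.0])   # example inputs (made-up values)
w = np.array([0.5, -0.5])  # example weights (made-up values)
y = perceptron(x, w, b=0.0)
```

Here z = 0.5·1 − 0.5·2 = −0.5, so the output lands a bit below 0.5 after the sigmoid.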
nonlinear activation function I've mentioned it a couple times but I haven't really told you precisely what it is now one common example of this activation function is what's called a sigmoid function and you can see an example of a sigmoid function here on the bottom right one thing to note is that this function takes any real number as input on the x-axis and it transforms that real number into a scalar output", "start_timestamp": "00:12:56", "end_timestamp": "00:13:33", "start_second": 776, "end_second": 813, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=776s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "between 0 and 1 it's a bounded output between 0 and 1 so one very common use case of the sigmoid function is when you're dealing with probabilities because probabilities also have to be bounded between 0 and 1 so sigmoids are really useful when you want to output a single number and interpret that number as a probability in fact there are many common types of nonlinear activation functions not just the sigmoid but many others that you can use in neural networks and here are some common ones and throughout this presentation you'll find these TensorFlow icons like you can", "start_timestamp": "00:13:33", "end_timestamp": "00:14:07", "start_second": 813, "end_second": 847, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=813s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "see all across the bottom here and these are just to illustrate how one could use each of these topics in a practical setting you'll see these scattered throughout the slides no need to really take furious notes of these code blocks like I said all of the slides are published online so especially 
during your labs if you want to refer back to any of the slides you can always do that from the online lecture notes now why do we care about activation functions the point of", "start_timestamp": "00:14:07", "end_timestamp": "00:14:37", "start_second": 847, "end_second": 877, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=847s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "an activation function is to introduce nonlinearities into the data and this is actually really important in real life because almost all of our data is nonlinear and here's a concrete example if I told you to separate the green points from the red points using a linear function could you do that you could try but you wouldn't do a very good job at it and no matter how deep or how large your network is if you're using a linear activation function you're just", "start_timestamp": "00:14:37", "end_timestamp": "00:15:08", "start_second": 877, "end_second": 908, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=877s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "composing lines on top of lines and you're going to get another line right so this is the best you'll be able to do with a linear activation function on the other hand nonlinearities allow you to approximate arbitrarily complex functions by introducing these nonlinearities into your decision boundary and this is what makes neural networks extremely powerful let's understand this with a simple example and go back to this picture that we had before imagine I give you a trained network with weights W on the top right so W here is 3 and minus 2 and the", "start_timestamp": "00:15:08", "end_timestamp": 
"00:15:43", "start_second": 908, "end_second": 943, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=908s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "network only has 2 inputs x1 and x2 if we want to get the output it's simply the same story as we had before we multiply our inputs by those weights we take the sum and pass it through a non-linearity but let's take a look at what's inside of that non-linearity before we apply it what we get when we take this dot product of x1 times 3 and x2 times minus 2 plus the bias is simply a 2d line so we can plot that if we set it equal to 0 for example that's a 2d line and it looks like this so on the x axis is x1 on the y axis is x2 and", "start_timestamp": "00:15:43", "end_timestamp": "00:16:25", "start_second": 943, "end_second": 985, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=943s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "we're just illustrating when this line equals 0 so anywhere on this line is where x1 and x2 correspond to a value of 0 now if I feed in a new input either a test example a training example or whatever and that input has the coordinates minus 1 and 2 so it has a value of x1 of minus 1 and a value of x2 of 2 I can see visually where this lies with respect to that line and in fact this idea can be generalized a little bit more if we compute that expression for this point we get minus 6 right", "start_timestamp": "00:16:25", "end_timestamp": "00:17:07", "start_second": 985, "end_second": 1027, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=985s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", 
"text": "so inside that before we apply the non-linearity we get minus 6 when we apply a sigmoid non-linearity because sigmoid collapses everything between 0 and 1 anything greater than 0 is going to be above 0.5 and anything below zero is going to be less than 0.5 so because minus 6 is less than zero we're going to have a very low output at this point about 0.002 we can actually generalize this idea to the entire feature space for any point on this plot I can tell you if it lies on the left side of the line that means that before we apply the non-linearity the Z or the state of that", "start_timestamp": "00:17:07", "end_timestamp": "00:17:46", "start_second": 1027, "end_second": 1066, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1027s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "neuron will be negative less than zero and after applying that non-linearity the sigmoid will give it a probability of less than 0.5 and if it falls on the right side of the line it's the opposite story if it falls right on the line it means that Z equals zero exactly and the probability equals 0.5 now actually before I move on this is a great example of visualizing and understanding what's going on inside of a neural network the reason why it's hard to do this with deep neural networks is because you", "start_timestamp": "00:17:46", "end_timestamp": "00:18:19", "start_second": 1066, "end_second": 1099, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1066s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "usually don't have only two inputs and usually don't have only two weights as well so as you scale up your problem this is a simple two dimensional problem but as you scale up the size of your network 
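The worked example above can be checked numerically: with weights (3, −2), the point (−1, 2) gives a pre-activation of −6, and the sigmoid squashes that to roughly 0.002 (a quick NumPy check; the bias value of 1 is an assumption, inferred so that the pre-activation matches the −6 stated in the lecture):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([3.0, -2.0])   # the weights from the lecture's example
b = 1.0                     # bias (assumed, chosen so the point below gives z = -6)
x = np.array([-1.0, 2.0])   # the test point with coordinates (-1, 2)

z = np.dot(w, x) + b        # 3*(-1) + (-2)*2 + 1 = -6
p = sigmoid(z)              # far below 0.5: the point lies left of the line
```

Any point with z < 0 falls on the "low probability" side of the decision boundary, exactly as the transcript describes.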
you could be dealing with hundreds or thousands or millions of parameters and million dimensional spaces and then visualizing these types of plots becomes extremely difficult and it's not possible in practice so this is one of the challenges that we face when we're training neural networks and really understanding their", "start_timestamp": "00:18:19", "end_timestamp": "00:18:47", "start_second": 1099, "end_second": 1127, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1099s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "internals but we'll talk about how we can actually tackle some of those challenges in later lectures as well okay so now that we have that idea of a perceptron a single neuron let's start building up neural networks and see how we can use that perceptron to create full neural networks and how all of this story comes together let's revisit this previous diagram of the perceptron if there are only a few things you remember from this class try to take away this how a perceptron works just keep remembering this I'm", "start_timestamp": "00:18:47", "end_timestamp": "00:19:19", "start_second": 1127, "end_second": 1159, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1127s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "going to keep drilling it in you take your inputs you apply a dot product with your weights and you apply a non-linearity it's that simple oh sorry I missed a step you take the dot product with your weights add a bias and apply your non-linearity so three steps now let's simplify this type of diagram a little bit I'm gonna remove the bias just for simplicity I'm gonna remove all of the weight labels so now you can assume that every line has a weight associated with it and 
let's denote by Z the", "start_timestamp": "00:19:19", "end_timestamp": "00:19:52", "start_second": 1159, "end_second": 1192, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1159s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "output of that dot product so that's the element-wise multiplication of our inputs with our weights and that's what gets fed into our activation function so our final output Y is just our activation function applied on Z if we want to define a multi output neural network we can simply add another one of these perceptrons to this picture now we have two outputs y1 is a normal perceptron and y2 is just another normal perceptron the same idea as before they all connect to the previous layer with a different set of weights", "start_timestamp": "00:19:52", "end_timestamp": "00:20:26", "start_second": 1192, "end_second": 1226, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1192s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "and because all inputs are densely connected to all of the outputs these types of layers are often called dense layers and let's take an example of how one might actually go from this nice illustration which is very conceptual and nice and simple to how you could actually implement one of these dense layers from scratch by yourselves using TensorFlow so what we can do is start off by first defining our two weights so we have our actual weight vector which is W and we also have our bias vector both of these parameters", "start_timestamp": "00:20:26", "end_timestamp": "00:21:08", "start_second": 1226, "end_second": 1268, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1226s", "title": "MIT 6.S191 (2020): 
Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "are governed by the output space so depending on how many neurons you have in that output layer that will govern the size of each of those weight and bias vectors what we can do then is simply define that forward propagation of information so here I'm showing you the call function in TensorFlow don't get too caught up on the details of the code again you'll get a real walkthrough of this code inside of the labs today but I want to just show you some high-level understanding of how you could actually take what you're", "start_timestamp": "00:21:08", "end_timestamp": "00:21:39", "start_second": 1268, "end_second": 1299, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1268s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "learning and apply the TensorFlow implementations to it inside the call function it's the same idea again you can compute Z which is the state it's that multiplication of your inputs with the weights you add the bias right so that's right there and once you have Z you just pass it through your sigmoid and that's your output now TensorFlow is great because it's already implemented a lot of these layers for us so we don't have to do what I just showed you from scratch in fact to implement a layer like this with two outputs a multi output perceptron layer we can simply call tf.keras.layers.Dense with units equal to two to indicate that we have two outputs on this layer and there is a whole bunch of other parameters that you could input here such as the activation function as well as many other things to customize how this layer behaves in practice so now let's take a look at a single layered neural network so this is taking it one step beyond what we've just seen this is where we have now a single hidden layer that feeds into a", "start_timestamp": "00:22:14", "end_timestamp": "00:22:48", "start_second": 1334, "end_second": 1368, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1334s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "single output layer and I'm calling this a hidden layer because unlike our inputs and our outputs these states of the hidden layer are not directly enforced they're not directly observable we can probe inside the network and see them but we don't actually enforce what they are these are learned as opposed to the inputs which are provided by us now since we have a transformation between the inputs and the hidden layer and the hidden layer and the output layer each of those two transformations will have their own weight matrices which here I", "start_timestamp": "00:22:48", "end_timestamp": "00:23:23", "start_second": 1368, "end_second": 1403, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1368s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "call W 1 and W 2 which correspond to the first layer and the second layer if we look at a single unit inside of that hidden layer take for example Z 2 I'm showing here that's just a single perceptron like we talked about before it's taking a weighted sum of all of those inputs that feed into it and it applies the non-linearity 
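A dense layer like the one just described can be written from scratch: a weight matrix sized inputs × outputs, a bias per output, and a call that does the weighted sum, adds the bias, and applies the nonlinearity (a NumPy sketch of the idea only — the lecture itself uses TensorFlow's built-in tf.keras.layers.Dense; the sizes and seed below are made up):

```python
import numpy as np

class DenseLayer:
    """Minimal fully connected layer: output = sigmoid(x @ W + b)."""
    def __init__(self, n_inputs, n_outputs, seed=0):
        rng = np.random.default_rng(seed)
        # one weight per (input, output) pair, one bias per output neuron
        self.W = rng.normal(size=(n_inputs, n_outputs))
        self.b = np.zeros(n_outputs)

    def call(self, x):
        z = x @ self.W + self.b          # weighted sum plus bias
        return 1.0 / (1.0 + np.exp(-z))  # sigmoid nonlinearity

layer = DenseLayer(n_inputs=3, n_outputs=2)
out = layer.call(np.array([1.0, 2.0, 3.0]))  # two outputs, each in (0, 1)
```

The weight and bias shapes are governed by the output size, exactly as the transcript notes.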
and feeds it on to the next layer same story as before this picture actually looks a little bit messy so what I want to do is clean things up a little bit for you and I'm gonna replace all of those lines with just this symbolic representation", "start_timestamp": "00:23:23", "end_timestamp": "00:23:58", "start_second": 1403, "end_second": 1438, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1403s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "and we'll just use this from now on to denote dense layers or fully connected layers between an input and an output or between an input and a hidden layer and again if we wanted to implement this in TensorFlow the idea is pretty simple we can just define two of these dense layers the first one our hidden layer with n outputs and the second one our output layer with two outputs we can then join them together aggregate them together into this wrapper which is called a TF Sequential model and sequential models are just", "start_timestamp": "00:23:58", "end_timestamp": "00:24:33", "start_second": 1438, "end_second": 1473, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1438s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "this idea of composing neural networks using a sequence of layers so whenever you have a sequential message passing system or are sequentially processing information throughout the network you can use sequential models and just define your layers as a sequence and it's very nice to allow information to propagate through that model now if we want to create a deep neural network the idea is basically the same thing except you just keep stacking on more of these layers to create more of a hierarchical model 
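The Sequential idea just described — a list of layers where each layer's output feeds the next layer's input — can be sketched without TensorFlow as a simple loop over layer functions (an illustrative NumPy sketch; the layer sizes here are made up):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dense(n_in, n_out, rng):
    # build a layer function that closes over its own weights and bias
    W, b = rng.normal(size=(n_in, n_out)), np.zeros(n_out)
    return lambda x: sigmoid(x @ W + b)

rng = np.random.default_rng(0)
# a "deep" model: stacked layers, each output feeding the next input
model = [dense(2, 4, rng), dense(4, 4, rng), dense(4, 1, rng)]

h = np.array([0.5, -0.5])
for layer in model:
    h = layer(h)   # sequential forward pass through the stack
```

Stacking more entries in the list is all it takes to make the model deeper, which is exactly what a Sequential wrapper manages for you.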
one where the", "start_timestamp": "00:24:33", "end_timestamp": "00:25:06", "start_second": 1473, "end_second": 1506, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1473s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "final output is computed by going deeper and deeper into this representation and the code looks pretty similar again so again we have this TF Sequential model and inside that model we just have a list of all of the layers that we want to use and they're just stacked on top of each other okay so this is awesome so hopefully now you have an understanding of not only what a single neuron is but how you can compose neurons together and actually build complex hierarchical models with deep neural networks now let's take a look at how you can", "start_timestamp": "00:25:06", "end_timestamp": "00:25:41", "start_second": 1506, "end_second": 1541, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1506s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "apply these neural networks in a very real and applied setting to solve some problem and actually train them to accomplish some task here's a problem that I believe any AI system should be able to solve for all of you and probably one that you care a lot about will I pass this class to answer this let's start with a very simple two input model the first input we're gonna define is how many lectures you attend during this class and the second one is the number of hours that you spend on your final", "start_timestamp": "00:25:41", "end_timestamp": "00:26:15", "start_second": 1541, "end_second": 1575, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1541s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": 
"https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "project I should say that the minimum number of hours you can spend on your final project is 50 hours now I'm just joking okay so let's take all of the data from previous years and plot it on this feature space like we looked at before green points are students that have passed the class in the past and red points are people that have failed we can plot all of this data onto this two-dimensional grid like this and we can also plot you so here you are you have attended four lectures and you've only spent five hours", "start_timestamp": "00:26:15", "end_timestamp": "00:26:49", "start_second": 1575, "end_second": 1609, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1575s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "on your final project and the question is are you going to pass the class given everyone around you and how they've done in the past how are you going to do so let's do it we have two inputs we have a single hidden layer neural network with three hidden units in that hidden layer and we'll see that the final output probability when we feed in those two inputs of four and five is predicted to be 0.1 or 10% the probability of you passing this class is 10% that's not great news the actual answer was one", "start_timestamp": "00:26:49", "end_timestamp": "00:27:27", "start_second": 1609, "end_second": 1647, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1609s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "so you did pass the class now does anyone have an idea of why the network was so wrong in this case exactly we never told this network anything the 
weights are wrong we've just initialized the weights in fact it has no idea what it means to pass a class it has no idea what each of these inputs mean how many lectures you've attended and the hours you've spent on your final project it's just seeing some random numbers it has no concept of how other people in the class have done so far so what we have to do to this network first is", "start_timestamp": "00:27:27", "end_timestamp": "00:28:00", "start_second": 1647, "end_second": 1680, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1647s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "train it and we have to teach it how to perform this task until we teach it it's just like a baby that doesn't know anything it just entered the world it has no concepts or no idea of how to solve this task and we have to teach it that now how do we do that the idea here is that first we have to tell the network when it's wrong so we have to quantify what's called its loss or its error and to do that we actually just take our prediction or what the network predicts and we compare it to what the true answer was", "start_timestamp": "00:28:00", "end_timestamp": "00:28:35", "start_second": 1680, "end_second": 1715, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1680s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "if there's a big discrepancy between the prediction and the true answer we can tell the network hey you made a big mistake right so this is a big error it's a big loss and you should try and fix your answer to move closer towards the true answer okay now you can imagine if you don't have just one student but many students the total loss let's call it here the empirical risk or the 
objective function it has many different names it's just the average of all of those individual losses so the", "start_timestamp": "00:28:35", "end_timestamp": "00:29:10", "start_second": 1715, "end_second": 1750, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1715s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "individual loss is a loss that takes as input your prediction and the actual answer that's telling you how wrong that single example is and then the total loss is just the average of all of those individual student losses so if we look at the problem of binary classification which is the case that we're actually caring about in this example so we're asking a question will I pass the class yes or no binary classification we can use what is called the softmax cross-entropy loss and for those of you who aren't familiar with cross-entropy", "start_timestamp": "00:29:10", "end_timestamp": "00:29:47", "start_second": 1750, "end_second": 1787, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1750s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "this was actually a formulation introduced by Claude Shannon here at MIT during his master's thesis as well and this was over 70 years ago it's still being used very prevalently today and the idea is it just again compares how different these two distributions are so you have a distribution of how likely you think the student is going to pass and you have the true distribution of whether the student passed or not you can compare the difference between those two distributions and that tells you the loss that the network incurs on that", "start_timestamp": "00:29:47", "end_timestamp": "00:30:21", "start_second": 1787, "end_second": 1821, "url":
"https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1787s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "example now let's assume that instead of a classification problem we have a regression problem where instead of predicting if you're going to pass or fail to class you want to predict the final grade that you're going to get so now it's not a yes/no answer problem anymore but instead it's a what's the grade I'm going to get what's the number what so it's it's a full range of numbers that are possible now and now we might want to use a different type of loss for this different type of problem and in this case we can do what's called a mean squared error loss", "start_timestamp": "00:30:21", "end_timestamp": "00:30:54", "start_second": 1821, "end_second": 1854, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1821s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "so we take the actual prediction we take the the sorry excuse me we take the prediction of the network we take the actual true final grade that the student got we subtract them we take their squared error and we say that that's the mean squared error that's the loss that the network should should try to optimize and try to minimize so ok so now that we have all this information with the loss function and how to actually quantify the error of the neural network let's take this and understand how to train train our model", "start_timestamp": "00:30:54", "end_timestamp": "00:31:25", "start_second": 1854, "end_second": 1885, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1854s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "to actually find those 
weights that it needs to to use for its prediction so W is what we want to find out W is the set of weights and we want to find the optimal set of weights that tries to minimize this total loss over our entire test set so our test set is this example data set that we want to evaluate our model on so in the class example the test set is you so you want to understand how likely you are to pass this class you're the test set now what this means is that we want to find the W's that minimize that total loss", "start_timestamp": "00:31:25", "end_timestamp": "00:32:03", "start_second": 1885, "end_second": 1923, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1885s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "function which we call as the objective function J of W now remember that W is just a aggregation or a collection of all of the individual w's from all of your weights so here this is just a way for me to express this in a clean notation but W is a whole set of numbers it's not just a single number and you want to find this all of the W's you want to find the value of each of those weights such that you can minimize this entire loss function it's a very complicated problem and remember that our loss function is just a simple", "start_timestamp": "00:32:03", "end_timestamp": "00:32:45", "start_second": 1923, "end_second": 1965, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1923s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "function in terms of those weights so if we plot in the case again of a two-dimensional weight problem so one of the weights is on the x-axis one of the weights is on this axis and on the z axis we have the loss so for any value of w we can see what the loss would be at that point now what do 
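The two losses described above can be written out directly. The following is a minimal pure-Python sketch (the function names are illustrative, not from the lecture's code): binary cross-entropy compares the predicted pass probability with the true label, and mean squared error handles the regression case of predicting a grade.

```python
import math

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    # Compares the predicted probability with the true 0/1 label:
    # the loss is large when a confident prediction disagrees with the label.
    total = 0.0
    for t, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1 - eps)  # clamp to avoid log(0)
        total += -(t * math.log(p) + (1 - t) * math.log(1 - p))
    return total / len(y_true)

def mean_squared_error(y_true, y_pred):
    # For regression (e.g. predicting the final grade): average squared difference.
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)
```

For the student example above (true label 1, predicted probability 0.1), the cross-entropy loss is -log(0.1), a large penalty for a confident wrong prediction.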
we want to do we want to find the place on this landscape what are the values of W where we get the minimum loss okay so what we can do is we can just pick a random W pick a random place on this landscape to start with and from", "start_timestamp": "00:32:45", "end_timestamp": "00:33:23", "start_second": 1965, "end_second": 2003, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1965s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "this random place let's try to understand how the landscape is changing what's the slope of the landscape we can take the gradient of the loss with respect to each of these weights to understand the direction of maximum ascent okay that's what the gradient tells us now that we know which way is up we can take a step in the direction that's down so we know which way is up we reverse the sign so now we start heading downhill and we can move towards that lowest point now we just keep repeating this process over and over", "start_timestamp": "00:33:23", "end_timestamp": "00:33:57", "start_second": 2003, "end_second": 2037, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=2003s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "again until we've converged to a local minimum now we can summarize this algorithm which is known as gradient descent because you're taking a gradient and you're descending down that landscape we start by initializing our weights randomly we compute the gradient dJ/dW with respect to all of our weights then we update our weights by taking a small step in the opposite direction of that gradient scaled by eta and this is referred to as the learning rate and we'll talk a little bit more about that", "start_timestamp": "00:33:57", "end_timestamp": "00:34:34", "start_second": 2037, "end_second": 2074, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=2037s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "later but eta is just a scalar number that determines how much of a step you want to take at each iteration how strongly or aggressively do you want to step towards that gradient in code the picture looks very similar so to implement gradient descent takes just a few lines of code just like the pseudocode you can initialize your weights randomly in the first line you can compute the gradients of your loss with respect to the weights using your predictions and your data and given that gradient you just update your weights in the opposite", "start_timestamp": "00:34:34", "end_timestamp": "00:35:08", "start_second": 2074, "end_second": 2108, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=2074s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "direction of that gradient vector right now the magic line here is actually how do you compute that gradient and that's something I haven't told you and it's not easy at all so the question is given a loss and given all of our weights in our network how do we know which way is good which way is a good place to move given all of this information and I never told you about that but that's a process called back propagation and let's talk about a very simple example of how we can actually derive back propagation using elementary calculus so we'll start with a very", "start_timestamp": "00:35:08", "end_timestamp": "00:35:48", "start_second": 2108, "end_second": 2148, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=2108s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail":
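The three-step loop just summarized (initialize the weights randomly, compute the gradient, step in the opposite direction scaled by the learning rate eta) can be sketched in a few lines of plain Python. The toy one-weight loss J(w) = (w - 3)^2, whose gradient 2(w - 3) we can write in closed form, is an assumption for illustration:

```python
import random

def gradient_descent(grad, eta=0.1, steps=100, seed=0):
    # 1. initialize the weight randomly
    random.seed(seed)
    w = random.uniform(-10, 10)
    # 2./3. repeatedly step in the opposite direction of the gradient
    for _ in range(steps):
        w -= eta * grad(w)
    return w

# toy loss J(w) = (w - 3)^2, so dJ/dw = 2 * (w - 3); the minimum is at w = 3
w_star = gradient_descent(lambda w: 2 * (w - 3))
```

On this quadratic each update multiplies the distance to the minimum by (1 - 2 * eta), so the iterate converges geometrically toward w = 3.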
"https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "simple network with only one hidden neuron and one output this is probably the simplest neural network that you can create you can't really get smaller than this computing the gradient of our loss with respect to W to here which is that second way between the hidden state and our output can tell us how much a small change in W 2 will impact our loss so that's what the gradient tells us right if we change W 2 in the differential different like a very minor manner how does our loss change does it go up or down how does it change and by how much", "start_timestamp": "00:35:48", "end_timestamp": "00:36:22", "start_second": 2148, "end_second": 2182, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=2148s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "really so that's the gradient that we care about the gradient of our loss with respect to W 2 now to evaluate this we can just apply the chain rule in calculus so we can split this up into the gradient of our loss with respect to our output Y multiplied by the gradient of our walk or output Y with respect to W 2 now if we want to repeat this process for a different way in the neural network let's say now W 1 not W 2 now we replace W 1 on both sides we also apply the chain rule but now you're going to notice that the gradient of Y", "start_timestamp": "00:36:22", "end_timestamp": "00:37:04", "start_second": 2182, "end_second": 2224, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=2182s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "with respect to W 1 is also not directly computable we have to apply the chain rule again to evaluate this so let's apply the chain rule again we can break 
that second term up into with respect to now the the state Z ok and using that we can kind of back propagate all of these gradients from the output all the way back to the input that allows our error signal to really propagate from output to input and allows these gradients to be computed in practice now a lot of this is not really important or excuse me it's not as", "start_timestamp": "00:37:04", "end_timestamp": "00:37:37", "start_second": 2224, "end_second": 2257, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=2224s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "crucial that you understand the nitty-gritty math here because in a lot of popular deep learning frameworks we have what's called automatic differentiation which does all of this back propagation for you under the hood and you never even see it which is incredible it made training neural networks so much easier you don't have to implement back propagation anymore but it's still important to understand how these work at the foundation which is why we're going through it now ok obviously then you repeat this for every", "start_timestamp": "00:37:37", "end_timestamp": "00:38:09", "start_second": 2257, "end_second": 2289, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=2257s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "single way in the network here we showed it for just W 1 and W 2 which is every single way in this network but if you have more you can just repeat it again keep applying the chain rule from output to input to compute this ok and that's the back prop algorithm in theory very simple it's just an application of the chain rule in essence but now let's touch on some of the insights from training and how you can use the back prop algorithm to 
train these networks in practice optimization of neural networks is incredibly tough in practice", "start_timestamp": "00:38:09", "end_timestamp": "00:38:43", "start_second": 2289, "end_second": 2323, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=2289s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "so it's not as simple as the picture I showed you on the colorful one on the previous slide here's an illustration from a paper that came out about two or three years ago now where the authors tried to visualize the landscape of a of a neural network with millions of parameters but they collapsed that down onto just two-dimensional space so that we can visualize it and you can see that the landscape is incredibly complex it's not easy there are many local minima where the gradient descent algorithm could get stuck into and", "start_timestamp": "00:38:43", "end_timestamp": "00:39:16", "start_second": 2323, "end_second": 2356, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=2323s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "applying gradient descent in practice in these type of environments which is very standard in neural networks can be a huge challenge now we're called the update equation that we defined previously with gradient descent this is that same equation we're going to update our weights in the direction in the opposite direction of our gradient I didn't talk too much about this parameter ADA I pointed it out this is the learning rate it determines how much of a step we should take in the direction of that gradient and in practice setting this learning rate can", "start_timestamp": "00:39:16", "end_timestamp": "00:39:50", "start_second": 2356, "end_second": 2390, "url": 
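The single-hidden-neuron chain-rule derivation above can be written out by hand. A minimal sketch follows, assuming a sigmoid hidden activation and a squared-error loss (the lecture does not pin these down, so both are assumptions); the backward pass applies the chain rule step by step from the output Y back to W1:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward_backward(x, t, w1, w2):
    # forward pass: input -> one hidden neuron -> output
    z = w1 * x          # pre-activation of the hidden neuron
    h = sigmoid(z)      # hidden state
    y = w2 * h          # output
    loss = 0.5 * (y - t) ** 2
    # backward pass: chain rule from output to input
    dL_dy = y - t                # dL/dY
    dL_dw2 = dL_dy * h           # dL/dW2 = dL/dY * dY/dW2
    dL_dh = dL_dy * w2           # chain rule again, one step deeper
    dL_dz = dL_dh * h * (1 - h)  # sigmoid'(z) = h * (1 - h)
    dL_dw1 = dL_dz * x           # dL/dW1 = dL/dZ * dZ/dW1
    return loss, dL_dw1, dL_dw2
```

A finite-difference check (perturb each weight slightly and watch the loss change) confirms these chain-rule gradients, which is exactly what automatic differentiation frameworks verify and automate.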
"https://www.youtube.com/watch?v=njKP3FqW3Sk&t=2356s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "have a huge impact in performance so if you set that learning rate to small that means that you're not really trusting your gradient on each step so if ADA is super tiny that means on each time each step you're only going to move a little bit towards in the opposite direction of your gradient just in little small increments and what can happen then is you can get stuck in these local minima because you're not being as aggressive as you should be to escape them now if you set the learning rate to large you can", "start_timestamp": "00:39:50", "end_timestamp": "00:40:19", "start_second": 2390, "end_second": 2419, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=2390s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "actually overshoot completely and diverge which is even more undesirable so setting the learning rate can be very challenging in practice you want to pick a learning rate that's large enough such that you avoid the local minima but small offs such that you still converge in practice now the question that you're all probably asking is how do we set the learning rate then well one option is that you can just try a bunch of learning rates and see what works best another option is to do something a little bit more clever and see if we can", "start_timestamp": "00:40:19", "end_timestamp": "00:40:50", "start_second": 2419, "end_second": 2450, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=2419s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "try to have an adaptive learning rate that changes with 
respect to our lost landscape maybe it changes with respect to how fast the learning is happening or a range of other ideas within the network optimization scheme itself this means that the learning rate is no longer fixed but it can now increase or decrease throughout training so as training progressive your learning rate may speed up you may take more aggressive steps you may take smaller steps as you get closer to the local minima so that you really converge on", "start_timestamp": "00:40:50", "end_timestamp": "00:41:24", "start_second": 2450, "end_second": 2484, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=2450s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "that point and there are many options here of how you might want to design this adaptive algorithm and this has been a huge or a widely studied field in optimization theory for machine learning and deep learning and there have been many published papers and implementations within tensor flow on these different types of adaptive learning rate algorithms so SGD is just that vanilla gradient descent that I showed you before that's the first one all of the others are all adaptive learning rates which means that they change their learning rate during training itself so they can increase or", "start_timestamp": "00:41:24", "end_timestamp": "00:42:00", "start_second": 2484, "end_second": 2520, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=2484s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "decrease depending on how the optimization is going and during your labs we really encourage you again to try out some of these different optimization schemes see what works what doesn't work a lot of it is problem dependent there are some heuristics that you can you can 
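Adam is one of the adaptive-learning-rate methods shipped in TensorFlow alongside plain SGD. As a from-scratch sketch of the idea on a single scalar weight (standard Adam hyperparameters; this is an illustration of the update rule, not TensorFlow's implementation): the effective step size adapts through running averages of the gradient and of its square.

```python
import math

def adam_step(w, grad, m, v, t, eta=0.01, b1=0.9, b2=0.999, eps=1e-8):
    # exponentially decayed running averages of the gradient and its square
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)  # bias correction for the zero initialization
    v_hat = v / (1 - b2 ** t)
    # the effective step shrinks when recent gradients are large or noisy
    w -= eta * m_hat / (math.sqrt(v_hat) + eps)
    return w, m, v

# minimize the toy loss J(w) = (w - 3)^2, gradient 2 * (w - 3)
w, m, v = 10.0, 0.0, 0.0
for t in range(1, 2001):
    w, m, v = adam_step(w, 2 * (w - 3), m, v, t)
```

Far from the minimum the update magnitude is roughly eta regardless of the raw gradient scale, which is part of what makes adaptive methods robust to poorly scaled losses.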
get but we want you to really gain those heuristics yourselves through the course of the labs it's part of building character okay so let's put this all together from the beginning we can define our model which is defined as this sequential wrapper inside of this", "start_timestamp": "00:42:00", "end_timestamp": "00:42:38", "start_second": 2520, "end_second": 2558, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=2520s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "sequential wrapper we have all of our layers all of these layers are composed of perceptrons or single neurons which we saw earlier the second line defines our optimizer which we saw in the previous slide this can be SGD it can also be any of those adaptive learning rates that we saw before now what we want to do during our training loop it's the same story as before nothing's changing here we forward pass all of our inputs through that model we get our predictions and using those predictions we can evaluate them and compute our loss our loss tells us how", "start_timestamp": "00:42:38", "end_timestamp": "00:43:15", "start_second": 2558, "end_second": 2595, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=2558s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "wrong our network was on that iteration it also tells us how we can compute the gradients and how we can change all of the weights in the network to improve in the future and then the final line there takes those gradients and actually allows our optimizer to update the weights and the trainable variables such that on the next iteration they do a little bit better and over time if you keep looping this will converge and hopefully you should fit your data now I want to continue to talk about some tips for training these networks in", "start_timestamp": "00:43:15", "end_timestamp": "00:43:49", "start_second": 2595, "end_second": 2629, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=2595s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "practice and focus on a very powerful idea of batching your data into mini batches so to do this let's revisit the gradient descent algorithm this gradient is actually very computationally expensive to compute in practice so using the backprop algorithm over the whole data set is very expensive in practice so what we want to do is actually not compute this over all of the data points but compute it over just a single data point in the data set in most real-life applications it's not actually feasible to compute on your entire data", "start_timestamp": "00:43:49", "end_timestamp": "00:44:24", "start_second": 2629, "end_second": 2664, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=2629s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "set at every iteration it's just too much data so instead we pick a single point randomly we compute our gradient with respect to that point and then on the next iteration we pick a different point and we can get a rough estimate of our gradient at each step right so instead of using all of our data now we just pick a single point i and we compute our gradient with respect to that single point i now what's a middle ground here the downside of using a single point is that it's going to be very noisy the downside of using all of the points is", "start_timestamp": "00:44:24", "end_timestamp": "00:44:57", "start_second": 2664, "end_second": 2697, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=2664s", "title": "MIT 6.S191 (2020):
Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "that it's too computationally expensive if there's some middle ground that we can have in between so that middle ground is actually just very simple you instead of taking one point and instead taking all of the points let take a mini batch of points so maybe something on the order of 10 20 30 100 maybe depending on how rough or accurate you want that approximation of your gradient to be and how much you want to trade off speed and computational efficiency now the true gradient is just obtained by averaging the gradient from each of", "start_timestamp": "00:44:57", "end_timestamp": "00:45:30", "start_second": 2697, "end_second": 2730, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=2697s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "those B points so B is the size of your batch in this case now since B is normally not that large like I said maybe on the order of tens to a hundreds this is much faster to compute than full gradient descent and much more accurate than stochastic gradient descent because it's using more than one point more than one estimate now this increase in gradient accuracy estimation actually allows us to converge to our target much quicker because it means that our gradients are more accurate in practice it also means that we can increase our", "start_timestamp": "00:45:30", "end_timestamp": "00:46:03", "start_second": 2730, "end_second": 2763, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=2730s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "learning rate and trust each update more so if we're very noisy in our gradient estimation we probably want to lower 
our learning rate a little more so we don't fully step in the wrong direction if we're not totally confident with that gradient if we have a larger batch of gradient of data to they are gradients with we can trust that learning great a little more increase it so that it steps it more aggressively in that direction what this means also is that we can now massively paralyze this computation because we can", "start_timestamp": "00:46:03", "end_timestamp": "00:46:37", "start_second": 2763, "end_second": 2797, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=2763s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "split up batches on multiple GPUs or multiple computers even to achieve even more significant speed ups with this training process now the last topic I want to address is that of overfitting and this is also known as the problem of generalization in machine learning and it's actually not unique to just deep learning but it's a fundamental problem of all of machine learning now ideally in machine learning we want a model that will approximate or estimate our data or accurately describes our data let's say like that said differently we want to", "start_timestamp": "00:46:37", "end_timestamp": "00:47:14", "start_second": 2797, "end_second": 2834, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=2797s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "build models that can learn representations from our training data that's still generalize to unseen test data now assume that you want to build a line that best describes these points you can see on the on the screen under fitting describes if we if our model does not describe the state of complexity of this problem or if we can't really capture the true complexity of this 
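The mini-batch estimate just described, averaging the per-point gradients over B sampled points, can be sketched on a toy one-weight regression problem (the dataset, batch size B = 20, and learning rate below are assumptions for illustration):

```python
import random

random.seed(0)
# toy dataset: y = 2 * x plus a little noise, 100 points
data = [(i / 100, 2 * (i / 100) + random.gauss(0, 0.1)) for i in range(100)]

def point_grad(w, x, y):
    # gradient of the single-point squared error (w * x - y)^2 with respect to w
    return 2 * (w * x - y) * x

def minibatch_grad(w, batch):
    # the mini-batch gradient estimate is the average over the B points
    return sum(point_grad(w, x, y) for x, y in batch) / len(batch)

# mini-batch gradient descent with B = 20
w = 0.0
for _ in range(500):
    batch = random.sample(data, 20)
    w -= 0.05 * minibatch_grad(w, batch)
```

Averaging over 20 points gives a far less noisy estimate than a single point while staying much cheaper than the full 100-point gradient, and each batch could also be split across GPUs.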
problem while overfitting on the right starts to memorize certain aspects of our training data and this is also not desirable we want the middle ground", "start_timestamp": "00:47:14", "end_timestamp": "00:47:47", "start_second": 2834, "end_second": 2867, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=2834s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "ideally we end up with a model in the middle that is not so complex that it memorizes all of our training data but also one that will continue to generalize when it sees new data so to address this problem in neural networks specifically let's talk about a technique called regularization which is another way that we can deal with this and what it is doing is trying to discourage complex information from being learned so we want to prevent the model from actually learning to memorize the training data", "start_timestamp": "00:47:47", "end_timestamp": "00:48:18", "start_second": 2867, "end_second": 2898, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=2867s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "we don't want to learn very specific details of the training data that don't generalize well to test data now as we've seen before this is actually crucial for our models to be able to generalize to our test data so this is very important the most popular regularization technique in deep learning is this very basic idea of dropout now for the idea of dropout let's start by revisiting this picture of a neural network that we had introduced previously with dropout during training we randomly set some of these activations of the hidden neurons to", "start_timestamp": "00:48:18", "end_timestamp": "00:48:54", "start_second": 2898, "end_second": 2934, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=2898s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "zero with some probability so let's say our probability is 0.5 with probability 0.5 we're randomly going to set the activations of some of our hidden neurons to 0 the idea is extremely powerful because it allows the network to lower its capacity it also makes it such that the network can't build these memorization channels through the network where it tries to just remember the data because on every iteration 50% of that memorization or memory is going to be wiped out so it's going to be forced to not only generalize", "start_timestamp": "00:48:54", "end_timestamp": "00:49:32", "start_second": 2934, "end_second": 2972, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=2934s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "better but it's going to be forced to have multiple channels through the network and build a more robust representation of its prediction now we just repeat this on every iteration so on the first iteration we drop out one randomly sampled 50% of the nodes on the next iteration we can drop out a different randomly sampled 50% which may include some of the previously sampled nodes as well and this will allow the network to generalize better to new test data the second regularization technique that we'll talk about is the notion of early", "start_timestamp": "00:49:32", "end_timestamp": "00:50:02", "start_second": 2972, "end_second": 3002, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=2972s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id":
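A minimal sketch of the dropout step just described, applied to one layer's activations. The 1/(1 - p) rescaling of the surviving neurons ("inverted dropout") is the common convention so that the expected activation is unchanged; the lecture only describes the zeroing, so the rescaling is an added assumption:

```python
import random

def dropout(activations, p=0.5, training=True, rng=random):
    # During training, zero each activation with probability p and scale the
    # survivors by 1 / (1 - p) so the layer's expected output is unchanged.
    if not training:
        return list(activations)
    return [0.0 if rng.random() < p else a / (1 - p) for a in activations]
```

Call this again on every iteration so a different random 50% of neurons is dropped each time; at test time pass training=False to keep every neuron active.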
"njKP3FqW3Sk", "text": "stopping so what I want to do here is just talk about two lines so during training which is the x-axis here we have two lines the y-axis is our loss curve the first line is our training loss so that's the green line the green line tells us how our training data how well our model is fitting to our training data we expect this to be lower than the second line which is our testing data so usually we expect to be doing better on our training data than our testing data as we train and as this line moves forward into the future both of these lines should kind of decrease go down", "start_timestamp": "00:50:02", "end_timestamp": "00:50:36", "start_second": 3002, "end_second": 3036, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=3002s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "because we're optimizing the network we're improving its performance eventually though there becomes a point where the training data starts to diverge from the testing data now what happens is that the training day should always continue to fit or the model should always continue to fit the training data because it's still seeing all of the training data it's not being penalized from that except for maybe if you drop out or other means but the testing data it's not seeing so at some point the network is going to start to", "start_timestamp": "00:50:36", "end_timestamp": "00:51:05", "start_second": 3036, "end_second": 3065, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=3036s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "do better on its training data than its testing data and what this means is basically that the network is starting to memorize some of the training data and that's what you don't want so what we can do 
is well we can perform early stopping or we can identify this point this inflection point where the test loss starts to increase and diverge from the training loss so we can stop the network early and make sure that our test loss is as low as possible and of course if we actually look at the sides of this line if we look at on", "start_timestamp": "00:51:05", "end_timestamp": "00:51:39", "start_second": 3065, "end_second": 3099, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=3065s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "the left side that's where a model is underfit so we haven't reached the true capacity of our model yet so we'd want to keep training if we hadn't stopped yet and on the right side is where we've overfit where we've passed that early stopping point and basically we've started to memorize some of our training data and that's when we've gone too far I'll conclude this lecture by just summarizing three main points that we've covered so far first we've learned about the fundamentals of neural networks", "start_timestamp": "00:51:39", "end_timestamp": "00:52:07", "start_second": 3099, "end_second": 3127, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=3099s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "njKP3FqW3Sk", "text": "which is a single neuron or a perceptron we've learned about stacking and composing these perceptrons together to form complex hierarchical representations and how we can mathematically optimize these networks using a technique called backpropagation on their loss and finally we addressed the practical side of training these models such as mini-batching regularization and adaptive learning rates as well with that I'll finish up I can take a couple
questions and then we'll move on to the next lecture on deep sequential modeling I'll take any like maybe a couple questions if", "start_timestamp": "00:52:07", "end_timestamp": "00:52:43", "start_second": 3127, "end_second": 3163, "url": "https://www.youtube.com/watch?v=njKP3FqW3Sk&t=3127s", "title": "MIT 6.S191 (2020): Introduction to Deep Learning", "thumbnail": "https://i.ytimg.com/vi/njKP3FqW3Sk/maxresdefault.jpg"} {"video_id": "H5vpBCLo74U", "text": "hi there today we're looking at XLNet generalized autoregressive pretraining for language understanding by Zhilin Yang and other people from Carnegie Mellon University as well as Google Brain so this is kind of the elephant in the room currently as XLNet is the first model to beat BERT which was the previous state of the art on a lot of NLP tasks it beat BERT at a lot of these same NLP tasks so they achieve state-of-the-art results on 18 of 20 tasks I believe they outperform BERT on 20 and achieve state of", "start_timestamp": "00:00:00", "end_timestamp": "00:00:41", "start_second": 0, "end_second": 41, "url": "https://www.youtube.com/watch?v=H5vpBCLo74U&t=0s", "title": "XLNet: Generalized Autoregressive Pretraining for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/H5vpBCLo74U/hqdefault.jpg"} {"video_id": "H5vpBCLo74U", "text": "the art on 18 including tasks such as question answering natural language inference sentiment analysis and so on so those are kind of remarkable results and even more remarkable is that the architecture of the network is actually very very similar to BERT the kind of new introduction is a different pre-training procedure and we'll look into that so let's actually jump into their main points straight away what they go into is there are two kinds of currently used pre-training methods for these NLP tasks", "start_timestamp": "00:00:41", "end_timestamp": "00:01:19", "start_second": 41, "end_second": 79, "url": 
"https://www.youtube.com/watch?v=H5vpBCLo74U&t=41s", "title": "XLNet: Generalized Autoregressive Pretraining for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/H5vpBCLo74U/hqdefault.jpg"} {"video_id": "H5vpBCLo74U", "text": "and both can be understood as kinds of language modeling so language modeling for those who don't know is predict the next word in a sequence so if I give you the sequence here unsupervised representation learning has been and then ask you what's next then you're supposed to say highly right that's language modeling in a nutshell so what they differentiate are two kinds of language modeling the first one they say is autoregressive language modeling now what autoregressive language modeling does is exactly what we've", "start_timestamp": "00:01:19", "end_timestamp": "00:01:58", "start_second": 79, "end_second": 118, "url": "https://www.youtube.com/watch?v=H5vpBCLo74U&t=79s", "title": "XLNet: Generalized Autoregressive Pretraining for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/H5vpBCLo74U/hqdefault.jpg"} {"video_id": "H5vpBCLo74U", "text": "looked at I give you unsupervised representation learning has been you're supposed to predict highly and then in the next step I give you unsupervised representation learning has been highly and you're supposed to predict successful and so on so in the next step I'm gonna give you the entire sentence up until here and you're supposed to predict the next word it's autoregressive because each token can look at the kind of previous ones in the sequence so when you predict sorry you can't see that when you predict you can always kind of auto", "start_timestamp": "00:01:58", "end_timestamp": "00:02:37", "start_second": 118, "end_second": 157, "url": "https://www.youtube.com/watch?v=H5vpBCLo74U&t=118s", "title": "XLNet: Generalized Autoregressive Pretraining for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/H5vpBCLo74U/hqdefault.jpg"} {"video_id": 
"H5vpBCLo74U", "text": "regressively look at what the previous ones were including what you've previously predicted of course during training this is teacher forcing as I said so you put the actual words there this is autoregressive modeling in contrast to what they call autoencoding and autoencoding is what BERT does and this is the following so in contrast to that let's say I have the same sequence unsupervised representation learning has been highly successful in the domain of yeah something and then I say okay I give you the sequence but I am going to", "start_timestamp": "00:02:37", "end_timestamp": "00:03:21", "start_second": 157, "end_second": 201, "url": "https://www.youtube.com/watch?v=H5vpBCLo74U&t=157s", "title": "XLNet: Generalized Autoregressive Pretraining for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/H5vpBCLo74U/hqdefault.jpg"} {"video_id": "H5vpBCLo74U", "text": "delete this and this right and now I ask you to predict these two right so you can see the task is slightly different as you now have access to all of the sequence basically except the ones that you are asked to predict but you're asked to predict them not in any order but at the same time basically so at the same time you're asked to predict this word and this word and so the first kind this autoregressive language modeling has been used by transformer models until", "start_timestamp": "00:03:21", "end_timestamp": "00:04:05", "start_second": 201, "end_second": 245, "url": "https://www.youtube.com/watch?v=H5vpBCLo74U&t=201s", "title": "XLNet: Generalized Autoregressive Pretraining for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/H5vpBCLo74U/hqdefault.jpg"} {"video_id": "H5vpBCLo74U", "text": "BERT and then basically BERT really pushed this autoencoding language model pre-training which made it so successful and now this paper XLNet wants to combine the best of both of them
and in order to understand what's the best of both of them so what's good at BERT we've already seen it can actually draw information from all of the context of the words it's trying to predict but what is the kind of pitfall of BERT and they actually put this really nicely in an example they give way further down where they say comparison", "start_timestamp": "00:04:05", "end_timestamp": "00:04:49", "start_second": 245, "end_second": 289, "url": "https://www.youtube.com/watch?v=H5vpBCLo74U&t=245s", "title": "XLNet: Generalized Autoregressive Pretraining for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/H5vpBCLo74U/hqdefault.jpg"} {"video_id": "H5vpBCLo74U", "text": "to BERT I don't know why that is not also in the introduction but here they have the sentence New York is a city right New York is a city this one and you're asked to predict these two words and if you now compare BERT to what XLNet does so the context is is a city and you're asked to predict New York what BERT does is it simply masks out the two words and says here please fill in these two words now this translates to the objective being separated over the two words such that the prediction of York here is completely independent", "start_timestamp": "00:04:49", "end_timestamp": "00:05:34", "start_second": 289, "end_second": 334, "url": "https://www.youtube.com/watch?v=H5vpBCLo74U&t=289s", "title": "XLNet: Generalized Autoregressive Pretraining for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/H5vpBCLo74U/hqdefault.jpg"} {"video_id": "H5vpBCLo74U", "text": "of the prediction of New so if you know of any other city that is made of two words for example San Francisco or Los Angeles then these would be as valid and any mixture would be as valid so you might end up with Los York is a city and that would be perfectly fine for BERT because well Los is a perfectly fine prediction for the first word of a two-word city and York
is a perfectly fine prediction for the last word of a two-word city right so these are the kind of mistakes that BERT can get into", "start_timestamp": "00:05:34", "end_timestamp": "00:06:14", "start_second": 334, "end_second": 374, "url": "https://www.youtube.com/watch?v=H5vpBCLo74U&t=334s", "title": "XLNet: Generalized Autoregressive Pretraining for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/H5vpBCLo74U/hqdefault.jpg"} {"video_id": "H5vpBCLo74U", "text": "by not being autoregressive by basically predicting all of these tokens at the same time independently of each other whereas XLNet what they will do is they specify an order let's say okay first I will predict the word New for the first word New something is a city and then when I predict York I will actually take into account that I previously have predicted the word New so that's the main advantage that autoregressive training has over autoencoding now what are the pitfalls the pitfalls are if you have this", "start_timestamp": "00:06:14", "end_timestamp": "00:06:52", "start_second": 374, "end_second": 412, "url": "https://www.youtube.com/watch?v=H5vpBCLo74U&t=374s", "title": "XLNet: Generalized Autoregressive Pretraining for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/H5vpBCLo74U/hqdefault.jpg"} {"video_id": "H5vpBCLo74U", "text": "sentence let's look at it I'll write it down New York is a city right if you have this sentence and let's say actually you're not asked to predict New and York you're asked to predict the word a here a right you're asked to predict that in autoregressive style or a city it's a better example the two words I said in autoregressive style if you predict the word a you can only ever look at what comes beforehand whereas if BERT were to predict just the word a it would be able to look at all of it", "start_timestamp": "00:06:52", "end_timestamp": "00:07:38", "start_second": 412, "end_second": 458, "url": 
"https://www.youtube.com/watch?v=H5vpBCLo74U&t=412s", "title": "XLNet: Generalized Autoregressive Pretraining for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/H5vpBCLo74U/hqdefault.jpg"} {"video_id": "H5vpBCLo74U", "text": "that's not the word to predict city so you see the kind of autoregressive model is bound to the order of the factorization of the sentence that's right it's bound to the order in which it has to predict the tokens so here if it's predicting a it can only look at stuff that comes before it because it needs to do it in order right once it gets to city it can actually look at the entire sentence here but before that it only ever has partial information about the context so actually it wouldn't be much", "start_timestamp": "00:07:38", "end_timestamp": "00:08:16", "start_second": 458, "end_second": 496, "url": "https://www.youtube.com/watch?v=H5vpBCLo74U&t=458s", "title": "XLNet: Generalized Autoregressive Pretraining for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/H5vpBCLo74U/hqdefault.jpg"} {"video_id": "H5vpBCLo74U", "text": "better if I had said we're trying to predict these two words is and a right and once I predict so BERT would actually have access to the word city here whereas the autoregressive models only have access to the ones before it I hope that makes it clearer so the main idea in XLNet is where does this order dependence come in the autoregressive model the order dependence actually comes from the factorization of the sentence of the language model so in a language model we're actually trying to assess", "start_timestamp": "00:08:16", "end_timestamp": "00:08:59", "start_second": 496, "end_second": 539, "url": "https://www.youtube.com/watch?v=H5vpBCLo74U&t=496s", "title": "XLNet: Generalized Autoregressive Pretraining for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/H5vpBCLo74U/hqdefault.jpg"} {"video_id": "H5vpBCLo74U", "text": "the probability
distribution of sentences here X is a sentence right and this can be naturally factorized into a product over the words where the probability of each word is only dependent on the words before it so this is an equality it's not an approximation this is an equality the probability of a sequence can be decomposed into a product of probabilities like this exactly so this here is exactly what these autoregressive models implement each word is predicted from the words before it right there are other kinds of autoregressive", "start_timestamp": "00:08:59", "end_timestamp": "00:09:45", "start_second": 539, "end_second": 585, "url": "https://www.youtube.com/watch?v=H5vpBCLo74U&t=539s", "title": "XLNet: Generalized Autoregressive Pretraining for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/H5vpBCLo74U/hqdefault.jpg"} {"video_id": "H5vpBCLo74U", "text": "models that also do the other direction where they say okay the probability of a sentence is a product and each word is predicted from the words after it but it kind of is the same problem you only ever have access in the one direction basically however you define the order of decoding you only ever have access from a given word to what was before it in the order so the main idea of XLNet is they say hey why don't we consider all possible orderings right I mean that's kind of an idea so let's go back to our thing", "start_timestamp": "00:09:45", "end_timestamp": "00:10:31", "start_second": 585, "end_second": 631, "url": "https://www.youtube.com/watch?v=H5vpBCLo74U&t=585s", "title": "XLNet: Generalized Autoregressive Pretraining for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/H5vpBCLo74U/hqdefault.jpg"} {"video_id": "H5vpBCLo74U", "text": "here they say why don't we consider all possible orderings so basically what we will do is if this sample comes up New York is a city all right what I can do is I can define an ordering let's say I always want to
predict two words so BERT typically masks out about 15% of its input to be predicted and here let's say we'll mask out 20% which is two words so of this sequence we'll mask two words and ask the model to predict them that will be our pre-training objective the first time this sample comes up from the data set", "start_timestamp": "00:10:31", "end_timestamp": "00:11:10", "start_second": 631, "end_second": 670, "url": "https://www.youtube.com/watch?v=H5vpBCLo74U&t=631s", "title": "XLNet: Generalized Autoregressive Pretraining for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/H5vpBCLo74U/hqdefault.jpg"} {"video_id": "H5vpBCLo74U", "text": "I might specify the order just classically right just one two three four five all right I'll predict the last two words I'll kind of mask them out right I give the model New York is and then I let it predict a and then in the next step I'll give it New York is a and let it predict city cool so now the pitfall is the word a here only has access to things before it and not to city itself city has access to everything all right so but then I continue training and the next time this sample right it's in my data set", "start_timestamp": "00:11:10", "end_timestamp": "00:11:52", "start_second": 670, "end_second": 712, "url": "https://www.youtube.com/watch?v=H5vpBCLo74U&t=670s", "title": "XLNet: Generalized Autoregressive Pretraining for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/H5vpBCLo74U/hqdefault.jpg"} {"video_id": "H5vpBCLo74U", "text": "New York is a city the next time it comes up I simply go for a different order let's say one two three four five right so now again I'm asking it to predict the last two tokens which here are city and York so in the first step I would give it is a and New and I will ask it what's here and I'll ask it to predict city and then in the second step I'll also give it that and I'll ask it okay now what's here given all of that right so New is a
city all right you're asked to predict the missing word so that's pretty", "start_timestamp": "00:11:52", "end_timestamp": "00:12:39", "start_second": 712, "end_second": 759, "url": "https://www.youtube.com/watch?v=H5vpBCLo74U&t=712s", "title": "XLNet: Generalized Autoregressive Pretraining for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/H5vpBCLo74U/hqdefault.jpg"} {"video_id": "H5vpBCLo74U", "text": "so the first step is New is a hmm and you're asked to predict the second and then the second step is New is a city and you're asked to predict York so now as you can see while predicting city here all of a sudden in this ordering we no longer have access to the word York so we'll have to learn to predict city from the rest of the context now even more if we now decide on a different ordering again one three four five so now actually the first step is to ask New York City please predict this thing here all right", "start_timestamp": "00:12:39", "end_timestamp": "00:13:35", "start_second": 759, "end_second": 815, "url": "https://www.youtube.com/watch?v=H5vpBCLo74U&t=759s", "title": "XLNet: Generalized Autoregressive Pretraining for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/H5vpBCLo74U/hqdefault.jpg"} {"video_id": "H5vpBCLo74U", "text": "yeah you might train the model to predict is and then the second step you say New York is city please predict it now we see before when we were asked to predict the word a it only had access to things to the left of it in the very first example but now it actually has access to the entire context so the idea is as we sample this data point multiple times and each time we decide on a different ordering to decode the prediction of each token sorry will actually have seen many many
"https://www.youtube.com/watch?v=H5vpBCLo74U&t=815s", "title": "XLNet: Generalized Autoregressive Pretraining for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/H5vpBCLo74U/hqdefault.jpg"} {"video_id": "H5vpBCLo74U", "text": "parts many different variants of the context and in expectation will actually have seen all of the context just like BERT but we will always have done it in an autoregressive way so basically you get all the advantages of being autoregressive namely that you are able to decode step by step while always referring to everything in front of you in the ordering so the predictions are not independent but you also get the benefit of BERT that it's able to basically look at all of the rest of the context in expectation in order to make", "start_timestamp": "00:14:16", "end_timestamp": "00:14:56", "start_second": 856, "end_second": 896, "url": "https://www.youtube.com/watch?v=H5vpBCLo74U&t=856s", "title": "XLNet: Generalized Autoregressive Pretraining for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/H5vpBCLo74U/hqdefault.jpg"} {"video_id": "H5vpBCLo74U", "text": "this prediction so this is the main idea of XLNet they formalize this jump up again they formalize it in saying okay what BERT does here is it actually factorizes the log probability of a sentence into this sum so the product in the log becomes a sum the sum of log probabilities of no sorry this is the autoregressive one I confused them of the words conditioned on everything in front of them what BERT does is it actually approximately factorizes the log probability into each word and then everything in the", "start_timestamp": "00:14:56", "end_timestamp": "00:15:45", "start_second": 896, "end_second": 945, "url": "https://www.youtube.com/watch?v=H5vpBCLo74U&t=896s", "title": "XLNet: Generalized Autoregressive Pretraining for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/H5vpBCLo74U/hqdefault.jpg"} {"video_id": 
"H5vpBCLo74U", "text": "context everything that's not masked in the context and this is only an approximate factorization because you're basically dropping away all these masked tokens and what they do now is they do the same as their autoregressive models here they decompose the log probability into a sum of log probabilities over each of the words given all the words before it but now not before it in the sequence but before it in a chosen permutation Z and Z is sampled uniformly from the set of all possible permutations so in expectation", "start_timestamp": "00:15:45", "end_timestamp": "00:16:33", "start_second": 945, "end_second": 993, "url": "https://www.youtube.com/watch?v=H5vpBCLo74U&t=945s", "title": "XLNet: Generalized Autoregressive Pretraining for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/H5vpBCLo74U/hqdefault.jpg"} {"video_id": "H5vpBCLo74U", "text": "they'll see all of the context so this is the main thing they show this here in a kind of a picture so here is the neural network this is the input layer then these are the hidden layers as the attention layers go up and up here you're asked to predict the token so here you're always asked to predict X3 so there is never going to be any weight here since if you knew X3 you would trivially be able to predict X3 all right so in the first example the factorization order chosen at random is 3 2 4 1 now you're", "start_timestamp": "00:16:33", "end_timestamp": "00:17:20", "start_second": 993, "end_second": 1040, "url": "https://www.youtube.com/watch?v=H5vpBCLo74U&t=993s", "title": "XLNet: Generalized Autoregressive Pretraining for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/H5vpBCLo74U/hqdefault.jpg"} {"video_id": "H5vpBCLo74U", "text": "asked to predict X3 and we know okay we should only do this with things that are before it in the permutation order well here since X3 is the first in the permutation
order we actually don't have anything to go on we're basically asked to predict X3 from scratch as if it were the start of a sentence so we'll basically tell the model I have a sentence that goes please predict the third word right it's a hard task yeah by the way you're always able to look at this memory thing here don't worry about this for now this is", "start_timestamp": "00:17:20", "end_timestamp": "00:18:01", "start_second": 1040, "end_second": 1081, "url": "https://www.youtube.com/watch?v=H5vpBCLo74U&t=1040s", "title": "XLNet: Generalized Autoregressive Pretraining for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/H5vpBCLo74U/hqdefault.jpg"} {"video_id": "H5vpBCLo74U", "text": "just an augmentation they do on top of their idea this is not the core idea so okay but now the second time this sample comes up from the training set we decide on a different order so the order here is 2 4 3 1 now again we're asked to predict X3 and we're allowed to look at everything before it so 2 and 4 as you see here there are weights from X2 and X4 into this column that is then finally asked to predict X3 so this is now an easier task right you're allowed to look at the word to the left and to the", "start_timestamp": "00:18:01", "end_timestamp": "00:18:40", "start_second": 1081, "end_second": 1120, "url": "https://www.youtube.com/watch?v=H5vpBCLo74U&t=1081s", "title": "XLNet: Generalized Autoregressive Pretraining for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/H5vpBCLo74U/hqdefault.jpg"} {"video_id": "H5vpBCLo74U", "text": "right if you have the following permutation order 1 4 2 3 you're actually allowed to look at all of the other words because X3 is at the end of the permutation order in order to produce X3 so all of these four and the fourth thing is similar so all of these four things will appear during training and you will learn from them so in expectation you will basically have seen all variants of
different versions of the context which helps a lot apparently right so in order to achieve this they had to", "start_timestamp": "00:18:40", "end_timestamp": "00:19:24", "start_second": 1120, "end_second": 1164, "url": "https://www.youtube.com/watch?v=H5vpBCLo74U&t=1120s", "title": "XLNet: Generalized Autoregressive Pretraining for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/H5vpBCLo74U/hqdefault.jpg"} {"video_id": "H5vpBCLo74U", "text": "make some architectural changes to the model namely what you want to do is in a single pass through the model you not only want to predict one token but you want to do many predictions this helps training a lot so BERT naturally always does this it masks out 15% of the tokens or so what was that like 40 50 tokens so it masks them and it predicts them all at the same time now you would like to do this here as well you would like to predict all at the same time the ones that you're asked to predict but of course the problem is", "start_timestamp": "00:19:24", "end_timestamp": "00:20:01", "start_second": 1164, "end_second": 1201, "url": "https://www.youtube.com/watch?v=H5vpBCLo74U&t=1164s", "title": "XLNet: Generalized Autoregressive Pretraining for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/H5vpBCLo74U/hqdefault.jpg"} {"video_id": "H5vpBCLo74U", "text": "here if in this factorization order 2 4 3 1 you're asked to predict X3 you're allowed to look at X2 and X4 if you're asked to predict X1 you're allowed to look at X2 X4 and X3 so if you only have a single pass through the model the question is do you now input X3 or do you not because the prediction of X3 is not allowed to look at X3 while the prediction of X1 is allowed to look at X3 so they do an architectural change in order to achieve both things so that you can have a", "start_timestamp": "00:20:01", "end_timestamp": "00:20:41", "start_second": 1201,
"end_second": 1241, "url": "https://www.youtube.com/watch?v=H5vpBCLo74U&t=1201s", "title": "XLNet: Generalized Autoregressive Pretraining for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/H5vpBCLo74U/hqdefault.jpg"} {"video_id": "H5vpBCLo74U", "text": "single pass through the model but the prediction of each token only depends on the things in front of it in the permutation order and they do this by having this kind of masked two-stream attention where they basically have not one hidden representation like in classic transformers but they have at each step two hidden representations one they call H and one they call G so the H's are initialized with the embeddings of the tokens and the G's are just initialized randomly and then they get transformed", "start_timestamp": "00:20:41", "end_timestamp": "00:21:22", "start_second": 1241, "end_second": 1282, "url": "https://www.youtube.com/watch?v=H5vpBCLo74U&t=1241s", "title": "XLNet: Generalized Autoregressive Pretraining for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/H5vpBCLo74U/hqdefault.jpg"} {"video_id": "H5vpBCLo74U", "text": "and the point is the H of the next layer is always able to look at everything in front of it including its own H basically its own position one layer down while the G is only allowed to look at the H's from before right so all the G's here are only ever able to look at the H's from before the current position whereas the H is always allowed to look at the same but also at the H at the current position and now at the last layer you simply ask", "start_timestamp": "00:21:22", "end_timestamp": "00:22:06", "start_second": 1282, "end_second": 1326, "url": "https://www.youtube.com/watch?v=H5vpBCLo74U&t=1282s", "title": "XLNet: Generalized Autoregressive Pretraining for Language Understanding", "thumbnail": 
"https://i.ytimg.com/vi/H5vpBCLo74U/hqdefault.jpg"} {"video_id": "H5vpBCLo74U", "text": "the model to predict the token from just the G and you can easily see that this results in this model only attending to things before it okay the G by the way can also look at the G of the current layer so that's also fine but it cannot look at the H so there's never any information flowing from the current word embedding of the token you're trying to predict to the prediction layer so basically that means the model can't just look it up you're not telling the model the answer", "start_timestamp": "00:22:06", "end_timestamp": "00:22:53", "start_second": 1326, "end_second": 1373, "url": "https://www.youtube.com/watch?v=H5vpBCLo74U&t=1326s", "title": "XLNet: Generalized Autoregressive Pretraining for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/H5vpBCLo74U/hqdefault.jpg"} {"video_id": "H5vpBCLo74U", "text": "yet you're still able to predict multiple things in a single pass through the model formally this is described here in the attention layer so they divide how they produce the queries and how they produce the keys and values usually the queries and the keys and values are produced from the same hidden representation but here they produce the keys and values from the H's in both cases but to update the G's they produce the queries from the last layer's G and to produce the H's they produce the queries from the last layer's H's and most", "start_timestamp": "00:22:53", "end_timestamp": "00:23:34", "start_second": 1373, "end_second": 1414, "url": "https://www.youtube.com/watch?v=H5vpBCLo74U&t=1373s", "title": "XLNet: Generalized Autoregressive Pretraining for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/H5vpBCLo74U/hqdefault.jpg"} {"video_id": "H5vpBCLo74U", "text": "importantly when they produce the keys and values the H's they'll look at here to update the G you're only allowed to look
at H's before you in the permutation order but to update the H you're allowed to look at everything before including the position you're currently at so that's kind of an engineering solution to the problem introduced by their augmentation I think it's a pretty neat solution pretty cool so the rest of the paper here is incorporating ideas from Transformer-XL so Transformer-XL is one of", "start_timestamp": "00:23:34", "end_timestamp": "00:24:18", "start_second": 1414, "end_second": 1458, "url": "https://www.youtube.com/watch?v=H5vpBCLo74U&t=1414s", "title": "XLNet: Generalized Autoregressive Pretraining for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/H5vpBCLo74U/hqdefault.jpg"} {"video_id": "H5vpBCLo74U", "text": "these classic transformers that is like this AR so this autoregressive style of transformer but that has a few improvements over the classic vanilla transformer and they incorporate a number of things here namely first of all they incorporate this memory thing so the memory thing allows you to input longer sequences let's say our transformer input length is a maximum of five tokens what Transformer-XL allows you to do is you input five tokens and then you do your transformer thing you encode it and they
representations here of this sequence they will actually be stored in the memory block for the next sequence this is kind of a trick to carry over information it's not learned the updating the memory part isn't learned with the", "start_timestamp": "00:25:00", "end_timestamp": "00:25:43", "start_second": 1500, "end_second": 1543, "url": "https://www.youtube.com/watch?v=H5vpBCLo74U&t=1500s", "title": "XLNet: Generalized Autoregressive Pretraining for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/H5vpBCLo74U/hqdefault.jpg"} {"video_id": "H5vpBCLo74U", "text": "objective to make the next prediction better but it's just some information it's a kind of gradient-free information to provide to the next step and it apparently helps you can incorporate longer sequences into this Transformer-XL so they take this over and implement this into XLNet they also do relative position encodings relative segment encodings I won't go into this too much more here because it's not the main idea basically so they do experiments and they compare to a BERT architecture with the", "start_timestamp": "00:25:43", "end_timestamp": "00:26:23", "start_second": 1543, "end_second": 1583, "url": "https://www.youtube.com/watch?v=H5vpBCLo74U&t=1543s", "title": "XLNet: Generalized Autoregressive Pretraining for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/H5vpBCLo74U/hqdefault.jpg"} {"video_id": "H5vpBCLo74U", "text": "same basically the same architecture the same number of parameters and layers and they beat BERT in all of these kind of NLP tasks or most of them I think they said in 20 tasks they reach new state of the art in 18 NLP tasks so apparently their method works very well so what they do here as a last thing I find important is an ablation study of the effects of their improvements because kind of my problem is I never know like they have this new idea okay we do these random permutations but then they also say oh and also we", "start_timestamp":
"00:26:23", "end_timestamp": "00:27:09", "start_second": 1583, "end_second": 1629, "url": "https://www.youtube.com/watch?v=H5vpBCLo74U&t=1583s", "title": "XLNet: Generalized Autoregressive Pretraining for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/H5vpBCLo74U/hqdefault.jpg"} {"video_id": "H5vpBCLo74U", "text": "include memory from Transformer-XL and we do relative positional encodings and so on to me these kind of papers of course you reach better numbers you get a new state of the art so it's kind of a landmark paper but to me a paper should more be like a single thing so whatever your idea is this your idea is these orderings and whatever you need to do to make that work okay fine but then why the additional Transformer-XL things it's really then hard to estimate how much of the improvement comes from your idea and how much of the", "start_timestamp": "00:27:09", "end_timestamp": "00:27:48", "start_second": 1629, "end_second": 1668, "url": "https://www.youtube.com/watch?v=H5vpBCLo74U&t=1629s", "title": "XLNet: Generalized Autoregressive Pretraining for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/H5vpBCLo74U/hqdefault.jpg"} {"video_id": "H5vpBCLo74U", "text": "improvement simply comes from the fact that you also put in these other things that actually have nothing to do with it so I appreciate these kind of analyses called ablation studies where they kind of try to take away the memory and these things and kind of look at what it's doing to the model and you see here it kind of degrades down here as for example this one degrades as you take stuff away while still being kind of more successful than BERT so that I would say also yeah here it is more unclear but also kind of seems to degrade a bit", "start_timestamp": "00:27:48", "end_timestamp": "00:28:33", "start_second": 1668, "end_second": 1713, "url": "https://www.youtube.com/watch?v=H5vpBCLo74U&t=1668s", "title": "XLNet: Generalized Autoregressive Pretraining for
Language Understanding", "thumbnail": "https://i.ytimg.com/vi/H5vpBCLo74U/hqdefault.jpg"} {"video_id": "H5vpBCLo74U", "text": "and while being more successful than BERT I appreciate this there's some kind of really trying to show that your gains really come from your new idea and not from some other stuff all right so the last thing I want to mention actually is this thing so someone claiming or calculating that it costs two hundred and forty five thousand dollars to train the XLNet model the way they describe it in the paper I'm sure that's gonna be brought down because the training time was brought down with BERT as well", "start_timestamp": "00:28:33", "end_timestamp": "00:29:13", "start_second": 1713, "end_second": 1753, "url": "https://www.youtube.com/watch?v=H5vpBCLo74U&t=1713s", "title": "XLNet: Generalized Autoregressive Pretraining for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/H5vpBCLo74U/hqdefault.jpg"} {"video_id": "H5vpBCLo74U", "text": "but this is just I mean this is crazy this is just training it it kind of raises large questions about the state of research and the ability for kind of let's say more academic players to participate in research on the one hand of course these companies should be able to do this and on the other hand it seems like currently in some fields just putting more money on the table will get you a better result now this paper is actually a cool idea but it's still kind of primitively", "start_timestamp": "00:29:13", "end_timestamp": "00:29:53", "start_second": 1753, "end_second": 1793, "url": "https://www.youtube.com/watch?v=H5vpBCLo74U&t=1753s", "title": "XLNet: Generalized Autoregressive Pretraining for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/H5vpBCLo74U/hqdefault.jpg"} {"video_id": "fpDaQxG5w4o", "text": "[Music] because the shift from one zodiacal house to another takes more than two
millennia scholars wondered how and where Hipparchus could have learned of the precession in the second century BC it is now clear that his source was Sumerian professor Langdon's findings revealed that the Nippurian calendar established circa 4400 BC in the Age of Taurus reflects knowledge of the precession and the shift of zodiacal houses that took place 2160 years earlier than that professor Jeremias who correlated Mesopotamian astronomical texts with", "start_timestamp": "00:00:00", "end_timestamp": "00:01:11", "start_second": 0, "end_second": 71, "url": "https://www.youtube.com/watch?v=fpDaQxG5w4o&t=0s", "title": "", "thumbnail": ""} {"video_id": "fpDaQxG5w4o", "text": "Hittite astronomical texts was also of the opinion that older astronomical tablets recorded the change from Taurus to Aries and he concluded that the Mesopotamian astronomers predicted and anticipated the shift from Aries to Pisces subscribing to these conclusions professor Willy Hartner suggested that the Sumerians left behind plentiful pictorial evidence to that effect when the spring equinox was in the zodiac of Taurus the summer solstice occurred in the zodiac of Leo Hartner drew attention to the recurrent motif of a bull lion", "start_timestamp": "00:01:11", "end_timestamp": "00:01:49", "start_second": 71, "end_second": 109, "url": "https://www.youtube.com/watch?v=fpDaQxG5w4o&t=71s", "title": "", "thumbnail": ""} {"video_id": "fpDaQxG5w4o", "text": "combat appearing in Sumerian depictions from earliest times and suggested that these motifs represented the key positions of the constellations of Taurus the bull and Leo the lion to an observer at 30 degrees north such as at Ur circa 4000 BC most scholars consider the Sumerian stress on Taurus as their first constellation as evidence not only of the antiquity of the zodiac dating to circa 4000 BC but also as testifying to the time when Sumerian civilization so suddenly began professor Jeremias found evidence showing that the Sumerian",
"start_timestamp": "00:01:49", "end_timestamp": "00:02:29", "start_second": 109, "end_second": 149, "url": "https://www.youtube.com/watch?v=fpDaQxG5w4o&t=109s", "title": "", "thumbnail": ""} {"video_id": "fpDaQxG5w4o", "text": "zodiacal chronological point 0 stood precisely between the bull and the twins from this and other data he concluded that the zodiac was devised in the age of Gemini the twins that is even before Sumerian civilization began a Sumerian tablet in the Berlin Museum begins the list of zodiacal constellations with that of Leo taking us back to circa 11,000 BC when man had just begun to till the land professor H.V. Hilprecht went even farther studying thousands of tablets bearing mathematical tabulations he concluded that all the multiplication", "start_timestamp": "00:02:29", "end_timestamp": "00:03:13", "start_second": 149, "end_second": 193, "url": "https://www.youtube.com/watch?v=fpDaQxG5w4o&t=149s", "title": "", "thumbnail": ""} {"video_id": "fpDaQxG5w4o", "text": "and division tables from the temple libraries of Nippur and Sippar and from the library of Ashurbanipal in Nineveh are based upon the number 12,960,000 analyzing this number and its significance he concluded that it could be related only to the phenomenon of the precession and that the Sumerians knew of the great year of twenty five thousand nine hundred and twenty years this is indeed fantastic astronomical sophistication at an impossible time just as it is evident that the Sumerian astronomers possessed knowledge that", "start_timestamp": "00:03:13", "end_timestamp": "00:03:50", "start_second": 193, "end_second": 230, "url": "https://www.youtube.com/watch?v=fpDaQxG5w4o&t=193s", "title": "", "thumbnail": ""} {"video_id": "fpDaQxG5w4o", "text": "they could not possibly have acquired on their own so is there evidence to show that a good deal of their knowledge was of no practical use to them this pertains not only to the very sophisticated astronomical methods that were used
who in ancient Sumer really needed to establish a celestial equator for example but also to a variety of elaborate texts that dealt with the measurement of distances between stars one of these texts known as AO.6478 lists the 26 major stars visible along the line we now call", "start_timestamp": "00:03:50", "end_timestamp": "00:04:26", "start_second": 230, "end_second": 266, "url": "https://www.youtube.com/watch?v=fpDaQxG5w4o&t=230s", "title": "", "thumbnail": ""} {"video_id": "fpDaQxG5w4o", "text": "the Tropic of Cancer and gives distances between them as measured in three different ways the text first gives the distances between these stars by a unit called mana shukultu measured and weighed it is believed that this was an ingenious device that related the weight of escaping water to the passage of time it made possible the determination of distances between two stars in terms of time the second column of distances was in terms of degrees of the arc of the skies the full day daylight and nighttime was", "start_timestamp": "00:04:26", "end_timestamp": "00:05:02", "start_second": 266, "end_second": 302, "url": "https://www.youtube.com/watch?v=fpDaQxG5w4o&t=266s", "title": "", "thumbnail": ""} {"video_id": "fpDaQxG5w4o", "text": "divided into twelve double hours the arc of the heavens comprised a full circle of 360 degrees hence one beru or double hour represented 30 degrees of the arc of the heavens by this method passage of time on earth provided a measure of the distances in degrees between the named celestial bodies the third method of measurement was beru ina shame length in the skies F. Thureau-Dangin pointed out that while the first two methods were relative to other phenomena this third method provided absolute measurements a celestial beru", "start_timestamp": "00:05:02", "end_timestamp": "00:05:40", "start_second": 302, "end_second": 340, "url": "https://www.youtube.com/watch?v=fpDaQxG5w4o&t=302s", "title": "", "thumbnail": ""}
{"video_id": "fpDaQxG5w4o", "text": "he and others believed was equivalent to ten thousand six hundred ninety-two of our present-day meters or eleven thousand six hundred ninety-three yards the distance in the skies between the twenty-six stars was calculated in the text as adding up to six hundred and fifty-five thousand two hundred beru drawn in the skies the availability of three different methods of measuring distances between stars conveys the great importance attached to the matter yet who among the men and women of Sumer needed such knowledge and who among them", "start_timestamp": "00:05:40", "end_timestamp": "00:06:16", "start_second": 340, "end_second": 376, "url": "https://www.youtube.com/watch?v=fpDaQxG5w4o&t=340s", "title": "", "thumbnail": ""} {"video_id": "fpDaQxG5w4o", "text": "could devise the methods and accurately use them the only possible answer is the Nephilim had the knowledge and the need for such accurate measurements capable of space travel arriving on earth from another planet roaming Earth's skies they were the only ones who could and did possess at the dawn of mankind's civilization the astronomical knowledge that required millennia to develop the sophisticated methods and mathematics and concepts for an advanced astronomy and the need to teach human scribes to copy and record", "start_timestamp": "00:06:16", "end_timestamp": "00:06:55", "start_second": 376, "end_second": 415, "url": "https://www.youtube.com/watch?v=fpDaQxG5w4o&t=376s", "title": "", "thumbnail": ""} {"video_id": "fpDaQxG5w4o", "text": "meticulously table upon table of distances in the heavens order of stars and groups of stars heliacal risings and settings a complex Sun-Moon-Earth calendar and the rest of the remarkable knowledge of both heaven and earth against this background can it still be assumed that the Mesopotamian astronomers guided by the Nephilim were not aware of the planets beyond Saturn that they did not know of Uranus Neptune and Pluto was their knowledge of
Earth's own family the solar system less complete than that of distant stars their order and their", "start_timestamp": "00:06:55", "end_timestamp": "00:07:38", "start_second": 415, "end_second": 458, "url": "https://www.youtube.com/watch?v=fpDaQxG5w4o&t=415s", "title": "", "thumbnail": ""} {"video_id": "fpDaQxG5w4o", "text": "distances astronomical information from ancient times contained in hundreds of detailed texts lists celestial bodies neatly arranged by their celestial order or by the gods or the months or the lands or the constellations with which they were associated one such text analyzed by Ernst F. Weidner has come to be called the great star list it listed in five columns tens of celestial bodies as related to one another to months countries and deities another text listed correctly the main stars and the zodiacal constellations a text indexed", "start_timestamp": "00:07:38", "end_timestamp": "00:08:21", "start_second": 458, "end_second": 501, "url": "https://www.youtube.com/watch?v=fpDaQxG5w4o&t=458s", "title": "", "thumbnail": ""} {"video_id": "fpDaQxG5w4o", "text": "as BM 86378 arranged in its unbroken part 71 celestial bodies by their location in the heavens and so on and on and on in efforts to make sense of this legion of texts and in particular to identify correctly the planets of our solar system a succession of scholars came up with confusing results as we now know their efforts were doomed to failure because they incorrectly assumed that the Sumerians and their successors were unaware that the solar system was heliocentric that earth was but another planet and that", "start_timestamp": "00:08:21", "end_timestamp": "00:09:02", "start_second": 501, "end_second": 542, "url": "https://www.youtube.com/watch?v=fpDaQxG5w4o&t=501s", "title": "", "thumbnail": ""} {"video_id": "fpDaQxG5w4o", "text": "there were more planets beyond Saturn ignoring the possibility that some names in the star lists may have applied to earth itself and seeking to
apply the great number of other names and epithets only to the five planets they believed were known to the Sumerians scholars reached conflicting conclusions some scholars even suggested that the confusion was not theirs but a Chaldean mix-up for some unknown reason they said the Chaldeans had switched around the names of the five known planets the Sumerians referred to all", "start_timestamp": "00:09:02", "end_timestamp": "00:09:39", "start_second": 542, "end_second": 579, "url": "https://www.youtube.com/watch?v=fpDaQxG5w4o&t=542s", "title": "", "thumbnail": ""} {"video_id": "fpDaQxG5w4o", "text": "celestial bodies planets stars or constellations as MUL who shine in the heights the Akkadian term kakkab was likewise applied by the Babylonians and Assyrians as a general term for any celestial body this practice further frustrated the scholars seeking to unravel the ancient astronomical texts but some MULs that were termed LU.BAD clearly designated planets of our solar system knowing that the Greek name for the planets was Wanderers the scholars have read LU.BAD as wandering sheep deriving from LU those which are shepherded and BAD high", "start_timestamp": "00:09:39", "end_timestamp": "00:10:23", "start_second": 579, "end_second": 623, "url": "https://www.youtube.com/watch?v=fpDaQxG5w4o&t=579s", "title": "", "thumbnail": ""} {"video_id": "fpDaQxG5w4o", "text": "and far but now that we have shown that the Sumerians were fully aware of the true nature of the solar system the other meanings of the term BAD the olden the foundation the one where death is assume direct significance these are appropriate epithets for the Sun and it follows that by LU.BAD the Sumerians meant not mere wandering sheep but sheep shepherded by the Sun the planets of our Sun the location and relation of the LU.BAD to each other and to the Sun were described in many Mesopotamian astronomical texts there were references", "start_timestamp": "00:10:23", "end_timestamp": "00:11:02", "start_second":
623, "end_second": 662, "url": "https://www.youtube.com/watch?v=fpDaQxG5w4o&t=623s", "title": "", "thumbnail": ""} {"video_id": "fpDaQxG5w4o", "text": "to those planets that are above and those that are below and Kugler correctly guessed that the reference point was earth itself but mostly the planets were spoken of in the framework of astronomical texts dealing with MUL.MUL a term that kept the scholars guessing in the absence of a better solution most scholars have agreed that the term MUL.MUL stood for the Pleiades a cluster of stars in the zodiacal constellation of Taurus and the one through which the axis of the spring equinox passed as viewed from Babylon", "start_timestamp": "00:11:02", "end_timestamp": "00:11:38", "start_second": 662, "end_second": 698, "url": "https://www.youtube.com/watch?v=fpDaQxG5w4o&t=662s", "title": "", "thumbnail": ""} {"video_id": "fpDaQxG5w4o", "text": "circa 2200 BC Mesopotamian texts often indicated that the MUL.MUL included seven LUMASH seven Wanderers that are familiar and the scholars assumed that these were the brightest members of the Pleiades which can be seen with the naked eye the fact that depending on classification the group has either six or nine such bright stars and not seven posed a problem but it was brushed aside for lack of any better ideas as to the meaning of MUL.MUL Franz Kugler reluctantly accepted the Pleiades as the solution but expressed his astonishment", "start_timestamp": "00:11:38", "end_timestamp": "00:12:20", "start_second": 698, "end_second": 740, "url": "https://www.youtube.com/watch?v=fpDaQxG5w4o&t=698s", "title": "", "thumbnail": ""} {"video_id": "fpDaQxG5w4o", "text": "when he found it stated unambiguously in Mesopotamian texts that MUL.MUL included not only Wanderers planets but also the Sun and the moon making it impossible to retain the Pleiades idea he also came upon texts that clearly stated that MUL.MUL is a band of 12 of which 10 formed a distinct
group we suggest that the term MUL.MUL referred to the solar system using the repetitive MUL.MUL to indicate the group as a whole as the celestial body comprising all celestial bodies Charles Virolleaud transliterated a", "start_timestamp": "00:12:20", "end_timestamp": "00:13:02", "start_second": 740, "end_second": 782, "url": "https://www.youtube.com/watch?v=fpDaQxG5w4o&t=740s", "title": "", "thumbnail": ""} {"video_id": "fpDaQxG5w4o", "text": "Mesopotamian text that describes the members of the MUL.MUL or kakkabu kakkabu group the text's last line is explicit kakkabu kakkabu the number of its celestial bodies is 12 the station of its celestial bodies 12 the complete months of the Moon is 12 the texts leave no doubt MUL.MUL our solar system was made up of twelve members perhaps this should not come as a surprise for the Greek scholar Diodorus explaining the three ways of the Chaldeans and the subsequent listing of 36 celestial bodies stated that of those", "start_timestamp": "00:13:02", "end_timestamp": "00:13:45", "start_second": 782, "end_second": 825, "url": "https://www.youtube.com/watch?v=fpDaQxG5w4o&t=782s", "title": "", "thumbnail": ""} {"video_id": "fpDaQxG5w4o", "text": "celestial gods 12 hold chief authority to each of these the Chaldeans assign a month and a sign of the zodiac Ernst Weidner reported that in addition to the way of Anu and its 12 zodiacal constellations some texts also referred to the way of the Sun which was also made up of 12 celestial bodies the Sun the moon and 10 others line 20 of the so-called TE tablet stated naphar 12 sheremesh ha.la sha kakkab.lu sha Sin u Shamash ina libbi ittiqu which means all in all 12 members where the Moon and Sun belong where the planets orbit we can", "start_timestamp": "00:13:45", "end_timestamp": "00:14:33", "start_second": 825, "end_second": 873, "url": "https://www.youtube.com/watch?v=fpDaQxG5w4o&t=825s", "title": "", "thumbnail": ""} {"video_id": "fpDaQxG5w4o", "text": "now
grasp the significance of the number 12 in the ancient world the great circle of Sumerian gods and of all Olympian gods thereafter comprised exactly twelve younger gods could join this circle only if older gods retired likewise a vacancy had to be filled to retain the divine number twelve the principal celestial circle the way of the Sun with its twelve members set the pattern according to which each other celestial band was divided into twelve segments or was allocated twelve principal celestial bodies accordingly there were twelve", "start_timestamp": "00:14:33", "end_timestamp": "00:15:12", "start_second": 873, "end_second": 912, "url": "https://www.youtube.com/watch?v=fpDaQxG5w4o&t=873s", "title": "", "thumbnail": ""} {"video_id": "fpDaQxG5w4o", "text": "months in a year twelve double hours in a day each division of Sumer was assigned 12 celestial bodies as a measure of good luck many studies such as the one by S. Langdon show that the division of the year into 12 months was from its very beginnings related to the twelve great gods Fritz Hommel and others after him have shown that the 12 months were closely connected with the 12 signs of the zodiac and that both derived from 12 principal celestial bodies Charles-F. Jean reproduced a Sumerian list of 24 celestial bodies that paired 12", "start_timestamp": "00:15:12", "end_timestamp": "00:15:54", "start_second": 912, "end_second": 954, "url": "https://www.youtube.com/watch?v=fpDaQxG5w4o&t=912s", "title": "", "thumbnail": ""} {"video_id": "fpDaQxG5w4o", "text": "zodiacal constellations with 12 members of our solar system in a long text identified by F. Thureau-Dangin as a temple program for the New Year Festival in Babylon the evidence for the consecration of 12 as the central celestial phenomenon is persuasive the Great Temple the Esagila had 12 gates the powers of all the celestial gods were vested in Marduk by reciting 12 times the pronouncement my lord is he not my lord the mercy of the god was then invoked 12 times and
that of his spouse 12 times the total of 24 was then", "start_timestamp": "00:15:54", "end_timestamp": "00:16:36", "start_second": 954, "end_second": 996, "url": "https://www.youtube.com/watch?v=fpDaQxG5w4o&t=954s", "title": "", "thumbnail": ""} {"video_id": "fpDaQxG5w4o", "text": "matched with the 12 zodiac constellations and twelve members of the solar system the boundary stone carved with the symbols of the celestial bodies by a king of Susa depicts these twenty-four signs the familiar 12 signs of the zodiac and symbols that stand for the twelve members of the solar system these were the twelve astral gods of Mesopotamia as well as of the Hurrian Hittite Greek and all other ancient pantheons although our natural counting base is the number 10 the number 12 permeated all matters celestial and", "start_timestamp": "00:16:36", "end_timestamp": "00:17:16", "start_second": 996, "end_second": 1036, "url": "https://www.youtube.com/watch?v=fpDaQxG5w4o&t=996s", "title": "", "thumbnail": ""} {"video_id": "fpDaQxG5w4o", "text": "divine long after the Sumerians were gone there were 12 Greek titans 12 tribes of Israel 12 parts to the magical breastplate of the Israelite high priests the power of this celestial 12 carried over to the Twelve Apostles of Jesus and even in our decimal system we count from 1 to 12 and only after 12 do we return to ten and three 13 ten and four 14 and so on where did this powerful decisive number 12 stem from from the heavens for the solar system the MUL.MUL included in addition to all the planets known to us also the", "start_timestamp": "00:17:16", "end_timestamp": "00:18:00", "start_second": 1036, "end_second": 1080, "url": "https://www.youtube.com/watch?v=fpDaQxG5w4o&t=1036s", "title": "", "thumbnail": ""} {"video_id": "fpDaQxG5w4o", "text": "planet of Anu the one whose symbol a radiant celestial body stood in the Sumerian writing for the god Anu and for divine the kakkab of the supreme scepter is one of the sheep in MUL.MUL explained an
astronomical text and when Marduk usurped the supremacy and replaced Anu as the god associated with this planet the Babylonians said the planet of Marduk within MUL.MUL appears teaching humanity the true nature of earth and of the heavens the Nephilim informed the ancient astronomer priests not only of the planets beyond", "start_timestamp": "00:18:00", "end_timestamp": "00:18:38", "start_second": 1080, "end_second": 1118, "url": "https://www.youtube.com/watch?v=fpDaQxG5w4o&t=1080s", "title": "", "thumbnail": ""} {"video_id": "fpDaQxG5w4o", "text": "Saturn but also of the existence of the most important planet the one from which they came the 12th planet on most of the ancient cylinder seals that have been found symbols that stand for certain celestial bodies members of our solar system appear above the figures of gods or humans an Akkadian seal from the 3rd millennium BC now at the Vorderasiatische Abteilung of the State Museum in East Berlin departs from the usual manner of depicting the celestial bodies it does not show them individually but rather as a group of 11 globes", "start_timestamp": "00:18:38", "end_timestamp": "00:19:20", "start_second": 1118, "end_second": 1160, "url": "https://www.youtube.com/watch?v=fpDaQxG5w4o&t=1118s", "title": "", "thumbnail": ""} {"video_id": "fpDaQxG5w4o", "text": "encircling a large rayed star it is clearly a depiction of the solar system as it was known to the Sumerians a system consisting of twelve celestial bodies the ancient depiction shows a planet unknown to us considerably larger than Earth yet smaller than Jupiter and Saturn which clearly follow it farther on another pair perfectly matches our Uranus and Neptune finally the smallish Pluto is also there but not where we now place it after Neptune instead it appears between Saturn and Uranus treating the moon as a", "start_timestamp": "00:19:20", "end_timestamp": "00:20:01", "start_second": 1160, "end_second": 1201, "url":
"https://www.youtube.com/watch?v=fpDaQxG5w4o&t=1160s", "title": "", "thumbnail": ""} {"video_id": "fpDaQxG5w4o", "text": "proper celestial body the Sumerian depiction fully accounts for all of our known planets places them in the correct order with the exception of Pluto and shows them by size the forty-five-hundred-year-old depiction however also insists that there was or has been another major planet between Mars and Jupiter it is as we shall show the 12th planet the planet of the Nephilim if this Sumerian celestial map had been discovered and studied two centuries ago astronomers would have deemed the Sumerians totally uninformed foolishly", "start_timestamp": "00:20:01", "end_timestamp": "00:20:40", "start_second": 1201, "end_second": 1240, "url": "https://www.youtube.com/watch?v=fpDaQxG5w4o&t=1201s", "title": "", "thumbnail": ""} {"video_id": "fpDaQxG5w4o", "text": "imagining more planets beyond Saturn now however we know that Uranus and Neptune and Pluto are really there did the Sumerians imagine the other discrepancies or were they properly informed by the Nephilim that the moon was a member of the solar system in its own right Pluto was located near Saturn and there was a twelfth planet between Mars and Jupiter the long-held theory that the moon was nothing more than a frozen golf ball was not discarded until the successful conclusion of several US Apollo moon missions the best guesses", "start_timestamp": "00:20:40", "end_timestamp": "00:21:18", "start_second": 1240, "end_second": 1278, "url": "https://www.youtube.com/watch?v=fpDaQxG5w4o&t=1240s", "title": "", "thumbnail": ""} {"video_id": "fpDaQxG5w4o", "text": "were that the moon was a chunk of matter that had separated from earth when earth was still molten and plastic were it not for the impact of millions of meteorites which left craters on the face of the moon it would have been a faceless lifeless history-less piece of matter that solidified and forever follows earth observations made by unmanned
satellites however began to bring such long-held beliefs into question it was determined that the chemical and mineral makeup of the moon was sufficiently different from that of Earth to", "start_timestamp": "00:21:18", "end_timestamp": "00:21:55", "start_second": 1278, "end_second": 1315, "url": "https://www.youtube.com/watch?v=fpDaQxG5w4o&t=1278s", "title": "", "thumbnail": ""} {"video_id": "fpDaQxG5w4o", "text": "challenge the breakaway theory the experiments conducted on the moon by the American astronauts and the study and analysis of the soil and rock samples they brought back have established beyond doubt that the moon though presently barren was once a living planet like Earth it is layered which means that it solidified from its own original molten stage like Earth it generated heat but whereas Earth's heat comes from its radioactive materials cooked inside earth under tremendous pressure the moon's heat comes apparently from", "start_timestamp": "00:21:55", "end_timestamp": "00:22:32", "start_second": 1315, "end_second": 1352, "url": "https://www.youtube.com/watch?v=fpDaQxG5w4o&t=1315s", "title": "", "thumbnail": ""} {"video_id": "fpDaQxG5w4o", "text": "layers of radioactive materials lying very near the surface these materials however are too heavy to have floated up what then deposited them near the moon's surface the moon's gravity field appears to be erratic as though huge chunks of heavy matter such as iron had not evenly sunk to its core but were scattered about by what process or force we might ask there is evidence that the ancient rocks of the moon were magnetized but there is also evidence that the magnetic fields were changed or reversed was it by some", "start_timestamp": "00:22:32", "end_timestamp": "00:23:11", "start_second": 1352, "end_second": 1391, "url": "https://www.youtube.com/watch?v=fpDaQxG5w4o&t=1352s", "title": "", "thumbnail": ""} {"video_id": "fpDaQxG5w4o", "text": "unknown internal process or by an undetermined outside
influence the Apollo 16 astronauts found on the moon rocks called breccias that result from the shattering of solid rock and its rewelding by extreme and sudden heat when and how were these rocks shattered then re-fused other surface materials on the moon are rich in rare radioactive potassium and phosphorus materials that on earth are deep down inside putting such findings together scientists are now certain that the moon and earth formed of roughly the same elements at", "start_timestamp": "00:23:11", "end_timestamp": "00:23:52", "start_second": 1391, "end_second": 1432, "url": "https://www.youtube.com/watch?v=fpDaQxG5w4o&t=1391s", "title": "", "thumbnail": ""} {"video_id": "fpDaQxG5w4o", "text": "about the same time evolved as separate celestial bodies in the opinion of the scientists of the US National Aeronautics and Space Administration NASA the Moon evolved normally for its first 500 million years then they said the most cataclysmic period came four billion years ago when celestial bodies the size of large cities and small countries came crashing into the moon and formed its huge basins and towering mountains the huge amounts of radioactive materials left by the collisions began heating rock beneath the surface melting massive", "start_timestamp": "00:23:52", "end_timestamp": "00:24:35", "start_second": 1432, "end_second": 1475, "url": "https://www.youtube.com/watch?v=fpDaQxG5w4o&t=1432s", "title": "", "thumbnail": ""} {"video_id": "fpDaQxG5w4o", "text": "amounts of it and forcing seas of lava through cracks in the surface Apollo 15 found a rockslide in the crater Tsiolkovsky six times greater than any rockslide on earth Apollo 16 discovered that the collision that created the Sea of Nectar deposited debris as much as 1,000 miles away Apollo 17 landed near a scarp eight times higher than any on earth meaning it was formed by a moonquake eight times more violent than any earthquake in history the convulsions following that cosmic event continued for some 800
million years so that the", "start_timestamp": "00:24:35", "end_timestamp": "00:25:17", "start_second": 1475, "end_second": 1517, "url": "https://www.youtube.com/watch?v=fpDaQxG5w4o&t=1475s", "title": "", "thumbnail": ""} {"video_id": "fpDaQxG5w4o", "text": "moon's makeup and surface finally took on their frozen shape some 3.2 billion years ago the Sumerians then were right to depict the moon as a celestial body in its own right and as we shall soon see they also left us a text that explains and describes the cosmic catastrophe to which the NASA experts refer the planet Pluto has been called the enigma while the orbits around the Sun of the other planets deviate only somewhat from a perfect circle the deviation (eccentricity) of Pluto is such that it has the most extended and", "start_timestamp": "00:25:17", "end_timestamp": "00:25:57", "start_second": 1517, "end_second": 1557, "url": "https://www.youtube.com/watch?v=fpDaQxG5w4o&t=1517s", "title": "", "thumbnail": ""} {"video_id": "fpDaQxG5w4o", "text": "elliptical orbit around the Sun while the other planets orbit the Sun more or less within the same plane Pluto is out of kilter by a whopping 17 degrees because of these two unusual features of its orbit Pluto is the only planet that cuts across the orbit of another planet Neptune in size Pluto is indeed in the satellite class its diameter 3600 miles is not much greater than that of Triton a satellite of Neptune or Titan one of the 10 satellites of Saturn because of its unusual characteristics it has been suggested that this misfit might have", "start_timestamp": "00:25:57", "end_timestamp": "00:26:41", "start_second": 1557, "end_second": 1601, "url": "https://www.youtube.com/watch?v=fpDaQxG5w4o&t=1557s", "title": "", "thumbnail": ""} {"video_id": "fpDaQxG5w4o", "text": "started its celestial life as a satellite that somehow escaped its master and went into orbit around the Sun on its own this as we shall soon see is indeed what happened according to the Sumerian
texts and now we reach the climax of our search for answers to primeval celestial events the existence of the 12th planet astonishing as it may sound our astronomers have been looking for evidence that indeed such a planet once existed between Mars and Jupiter toward the end of the 18th century even before Neptune had been discovered", "start_timestamp": "00:26:41", "end_timestamp": "00:27:20", "start_second": 1601, "end_second": 1640, "url": "https://www.youtube.com/watch?v=fpDaQxG5w4o&t=1601s", "title": "", "thumbnail": ""} {"video_id": "fpDaQxG5w4o", "text": "several astronomers demonstrated that the planets were placed at certain distances from the Sun according to some definite law the suggestion which came to be known as Bode's law convinced astronomers that a planet ought to revolve in a place where hitherto no planet had been known to exist that is between the orbits of Mars and Jupiter spurred by these mathematical calculations astronomers began to scan the skies in the indicated zone for the missing planet on the first day of the 19th century the Italian astronomer", "start_timestamp": "00:27:20", "end_timestamp": "00:27:56", "start_second": 1640, "end_second": 1676, "url": "https://www.youtube.com/watch?v=fpDaQxG5w4o&t=1640s", "title": "", "thumbnail": ""} {"video_id": "fpDaQxG5w4o", "text": "Giuseppe Piazzi discovered at the exact indicated distance a very small planet 485 miles across which he named Ceres by 1804 the number of asteroids small planets found there rose to four today nearly three thousand asteroids have been counted orbiting the Sun in what is now called the asteroid belt beyond any doubt this is the debris of a planet that had shattered to pieces Russian astronomers have named it Phaeton (chariot) while astronomers are certain that such a planet existed they are unable to explain its disappearance", "start_timestamp": "00:27:56", "end_timestamp": "00:28:39", "start_second": 1676, "end_second": 1719, "url":
"https://www.youtube.com/watch?v=fpDaQxG5w4o&t=1676s", "title": "", "thumbnail": ""} {"video_id": "fpDaQxG5w4o", "text": "did the planet self explode but then its pieces would have flown off in all directions and not stayed in a single belt if a collision shattered the missing planet where is the celestial body responsible for the collision did it also shatter but the debris circling the Sun when added up is insufficient to account for even one whole planet to say nothing of two also if the asteroids comprised the debris of two planets they should have retained the axial revolution of two planets but all the asteroids have a single axial rotation indicating they", "start_timestamp": "00:28:39", "end_timestamp": "00:29:18", "start_second": 1719, "end_second": 1758, "url": "https://www.youtube.com/watch?v=fpDaQxG5w4o&t=1719s", "title": "", "thumbnail": ""} {"video_id": "lbKg3OSTsgA", "text": "[Music] this video will present a recent paper from Google AI revisiting self-supervised visual representation learning so the headline idea of this paper is that the standard architecture designs and convolutional neural network advances that have been working in supervised learning tasks like ImageNet classification don't necessarily translate to these self-supervised tasks such as predicting the rotation the permutation of jigsaw puzzles or the exemplar augmentation task so the idea is that the neural network", "start_timestamp": "00:00:00", "end_timestamp": "00:00:34", "start_second": 0, "end_second": 34, "url": "https://www.youtube.com/watch?v=lbKg3OSTsgA&t=0s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/lbKg3OSTsgA/maxresdefault.jpg"} {"video_id": "lbKg3OSTsgA", "text": "architecture designs are overfitted to supervised learning tasks such as recognition and detection and maybe these neural architecture searches should be deployed into self-supervised learning or into like a pipeline of self-supervised learning and then taking the
representations into the classification models but then using the hyperparameter search and neural architecture search and all these heuristic tricks to design this jointly on self-supervised tasks for the downstream representation learning so self-supervised learning is", "start_timestamp": "00:00:34", "end_timestamp": "00:01:05", "start_second": 34, "end_second": 65, "url": "https://www.youtube.com/watch?v=lbKg3OSTsgA&t=34s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/lbKg3OSTsgA/maxresdefault.jpg"} {"video_id": "lbKg3OSTsgA", "text": "inspired by NLP success and it's just recently it's been tested since like the dawn of AlexNet but it is definitely gaining traction and so in NLP this is about predicting words from their context so in a sentence you predict words from their context the words that appear in the context would be labelled as positive words and predict one and then tiger and ocean don't appear in the context so they've been labeled as zero or negative so self-supervised learning has these pretext tasks and so there's other techniques other than context like NLP", "start_timestamp": "00:01:05", "end_timestamp": "00:01:36", "start_second": 65, "end_second": 96, "url": "https://www.youtube.com/watch?v=lbKg3OSTsgA&t=65s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/lbKg3OSTsgA/maxresdefault.jpg"} {"video_id": "lbKg3OSTsgA", "text": "there's more ideas in visual representation learning and images and computer vision so here are some of the common self-supervised visual learning tasks rotation prediction exemplar classes relative patch location and jigsaw puzzle permutations and many of the studies that have already come out on this use the AlexNet CNN architecture this paper from Google AI is going to use the state-of-the-art ResNet designs like the wide ResNet and then a reversible ResNet which is a more efficient implementation so rotation", "start_timestamp": "00:01:36", "end_timestamp": "00:02:07",
"start_second": 96, "end_second": 127, "url": "https://www.youtube.com/watch?v=lbKg3OSTsgA&t=96s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/lbKg3OSTsgA/maxresdefault.jpg"} {"video_id": "lbKg3OSTsgA", "text": "task is like this you would take images and then you rotate them either zero degrees 90 degrees 180 or 270 and then the network is basically doing a four-class classification task for these different rotations the exemplar task uses data augmentations to take an image and then change it in like a ton of different ways and then this image corresponds to its own class so it would be like a massive like maybe a thousand-class classification problem so jigsaw puzzle this is an interesting one it's where", "start_timestamp": "00:02:07", "end_timestamp": "00:02:40", "start_second": 127, "end_second": 160, "url": "https://www.youtube.com/watch?v=lbKg3OSTsgA&t=127s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/lbKg3OSTsgA/maxresdefault.jpg"} {"video_id": "lbKg3OSTsgA", "text": "you take like a patch of the image and then you crop the image and then you further refine the crops and then scramble them in this way and then you pass each of the crops through the network so each of these square tiles goes into a convolutional network and then it predicts where it thinks it might lie in the permutation so I personally don't I'm not really a huge fan of this task I don't really see how it makes a whole lot of sense because how does it know the context really and then", "start_timestamp": "00:02:40", "end_timestamp": "00:03:11", "start_second": 160, "end_second": 191, "url": "https://www.youtube.com/watch?v=lbKg3OSTsgA&t=160s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/lbKg3OSTsgA/maxresdefault.jpg"} {"video_id": "lbKg3OSTsgA", "text": "relative patch location is the same idea I like this idea more where you just have two patches
and you predict how they might relate to one another and then another really interesting study that I'm gonna be making a video on tomorrow so please subscribe if you're interested in this video is a multi-task self-supervised visual learning where you combine these self-supervised learning tasks together and then see what kind of representations are derived from that and they test this in their paper using the ResNet-101 model so", "start_timestamp": "00:03:11", "end_timestamp": "00:03:41", "start_second": 191, "end_second": 221, "url": "https://www.youtube.com/watch?v=lbKg3OSTsgA&t=191s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/lbKg3OSTsgA/maxresdefault.jpg"} {"video_id": "lbKg3OSTsgA", "text": "representations for image classification what you do is you freeze most of the network and then you take the pre-logits layer like an intermediate vector representation and then you input that to a logistic regression model and use that as a classifier trained with SGD and data augmentation so the feature vector that they extract from the self-supervised learning task they vary this from size 2048 4096 6144 and 8192 so this is the size of the representation vector extracted from the self-supervised learning task", "start_timestamp": "00:03:41", "end_timestamp": "00:04:15", "start_second": 221, "end_second": 255, "url": "https://www.youtube.com/watch?v=lbKg3OSTsgA&t=221s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/lbKg3OSTsgA/maxresdefault.jpg"} {"video_id": "lbKg3OSTsgA", "text": "so another thing they find is you might ask like how about instead of a logistic regression we do a more complex multi-layer perceptron model so this plot just shows the logistic regression and the MLP and they basically perform the same so the logistic regression has sufficient capacity for this and then another interesting thing is where in the network do you get the features from so this one thing
that's interesting is in the VGG19 network they take the intermediate features from the third block rather than the very end and get", "start_timestamp": "00:04:15", "end_timestamp": "00:04:47", "start_second": 255, "end_second": 287, "url": "https://www.youtube.com/watch?v=lbKg3OSTsgA&t=255s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/lbKg3OSTsgA/maxresdefault.jpg"} {"video_id": "lbKg3OSTsgA", "text": "better results but with the ResNet architectures they always get better results taking it as close to the output as you can get so again this would mean like should you take it from like here or should you take it from down here like towards the output or from the intermediate features so the data sets they test are ImageNet which is 1.3 million images in a thousand classes and then Places205 which is 2.5 million images in 205 classes and these are pretty different data sets and it's used just for a", "start_timestamp": "00:04:47", "end_timestamp": "00:05:20", "start_second": 287, "end_second": 320, "url": "https://www.youtube.com/watch?v=lbKg3OSTsgA&t=287s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/lbKg3OSTsgA/maxresdefault.jpg"} {"video_id": "lbKg3OSTsgA", "text": "measure of generalization with respect to the data sets so these are the results basically across the RevNet ResNet-v2 and ResNet-v1 just showing how amazingly different the results can be for the different architectures so and not only that but inconsistent so even though RevNet kills on rotation it doesn't perform as well on relative patch location so these are the results in the table so this refers to increasing the widening factor of a ResNet so increasing the number of feature maps in each intermediate layer so very", "start_timestamp": "00:05:20", "end_timestamp": "00:05:57", "start_second": 320, "end_second": 357, "url": "https://www.youtube.com/watch?v=lbKg3OSTsgA&t=320s", "title": "Self-Supervised Learning",
"thumbnail": "https://i.ytimg.com/vi/lbKg3OSTsgA/maxresdefault.jpg"} {"video_id": "lbKg3OSTsgA", "text": "interesting trend you see is as they continue to increase the capacity of the model they get better results so the widening factor is highly correlated with success on this task and yeah so as they increase the representation capacity of the model they get better results so then also this shows how applying their new ResNet with the widening factor how this compares to the previous papers that have been published on self-supervised learning so most interestingly the rotation paper that first came out using an AlexNet-style", "start_timestamp": "00:05:57", "end_timestamp": "00:06:28", "start_second": 357, "end_second": 388, "url": "https://www.youtube.com/watch?v=lbKg3OSTsgA&t=357s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/lbKg3OSTsgA/maxresdefault.jpg"} {"video_id": "lbKg3OSTsgA", "text": "architecture achieves 38.7% on ImageNet but their model achieves like 20% better and is getting much closer to the fully supervised benchmarks so this is another interesting thing that they present is that the success on the self-supervised task isn't always correlated with ImageNet accuracy like this point right here has like 95 percent on rotation but then only like 20 percent on the ImageNet accuracy so then another very interesting takeaway from the study is that it seems like larger models like increasing the width", "start_timestamp": "00:06:28", "end_timestamp": "00:07:03", "start_second": 388, "end_second": 423, "url": "https://www.youtube.com/watch?v=lbKg3OSTsgA&t=388s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/lbKg3OSTsgA/maxresdefault.jpg"} {"video_id": "P_6vDLq64gE", "text": "Okay, now I don't want to alarm anybody in this room, but it's just come to my attention that the person to your right is a liar. (Laughter) Also, the person to your left is a liar.
Also the person sitting in your very seats is a liar. We're all liars. What I'm going to do today is I'm going to show you what the research says about why we're all liars, how you can become a liespotter and why you might want to go the extra mile and go from liespotting to truth seeking, and ultimately to trust building. Now, speaking of trust,", "start_timestamp": "00:00:00", "end_timestamp": "00:00:52", "start_second": 0, "end_second": 52, "url": "https://www.youtube.com/watch?v=P_6vDLq64gE&t=0s", "title": "How to spot a liar | Pamela Meyer", "thumbnail": "https://i.ytimg.com/vi/P_6vDLq64gE/hqdefault.jpg"} {"video_id": "P_6vDLq64gE", "text": "ever since I wrote this book, \"Liespotting,\" no one wants to meet me in person anymore, no, no, no, no, no. They say, \"It's okay, we'll email you.\" (Laughter) I can't even get a coffee date at Starbucks. My husband's like, \"Honey, deception? Maybe you could have focused on cooking. How about French cooking?\" So before I get started, what I'm going to do is I'm going to clarify my goal for you, which is not to teach a game of Gotcha. Liespotters aren't those nitpicky kids, those kids in the back of the room that are shouting, \"Gotcha! Gotcha!", "start_timestamp": "00:00:52", "end_timestamp": "00:01:24", "start_second": 52, "end_second": 84, "url": "https://www.youtube.com/watch?v=P_6vDLq64gE&t=52s", "title": "How to spot a liar | Pamela Meyer", "thumbnail": "https://i.ytimg.com/vi/P_6vDLq64gE/hqdefault.jpg"} {"video_id": "P_6vDLq64gE", "text": "Your eyebrow twitched. You flared your nostril. I watch that TV show 'Lie To Me.' I know you're lying.\" No, liespotters are armed with scientific knowledge of how to spot deception. They use it to get to the truth, and they do what mature leaders do everyday; they have difficult conversations with difficult people, sometimes during very difficult times. 
And they start up that path by accepting a core proposition, and that proposition is the following: Lying is a cooperative act. Think about it, a lie has no power whatsoever by its mere utterance.", "start_timestamp": "00:01:24", "end_timestamp": "00:01:57", "start_second": 84, "end_second": 117, "url": "https://www.youtube.com/watch?v=P_6vDLq64gE&t=84s", "title": "How to spot a liar | Pamela Meyer", "thumbnail": "https://i.ytimg.com/vi/P_6vDLq64gE/hqdefault.jpg"} {"video_id": "P_6vDLq64gE", "text": "Its power emerges when someone else agrees to believe the lie. So I know it may sound like tough love, but look, if at some point you got lied to, it's because you agreed to get lied to. Truth number one about lying: Lying's a cooperative act. Now not all lies are harmful. Sometimes we're willing participants in deception for the sake of social dignity, maybe to keep a secret that should be kept secret, secret. We say, \"Nice song.\" \"Honey, you don't look fat in that, no.\" Or we say, favorite of the digiratti,", "start_timestamp": "00:01:57", "end_timestamp": "00:02:30", "start_second": 117, "end_second": 150, "url": "https://www.youtube.com/watch?v=P_6vDLq64gE&t=117s", "title": "How to spot a liar | Pamela Meyer", "thumbnail": "https://i.ytimg.com/vi/P_6vDLq64gE/hqdefault.jpg"} {"video_id": "P_6vDLq64gE", "text": "\"You know, I just fished that email out of my Spam folder. So sorry.\" But there are times when we are unwilling participants in deception. And that can have dramatic costs for us. Last year saw 997 billion dollars in corporate fraud alone in the United States. That's an eyelash under a trillion dollars. That's seven percent of revenues. Deception can cost billions. Think Enron, Madoff, the mortgage crisis. 
Or in the case of double agents and traitors, like Robert Hanssen or Aldrich Ames, lies can betray our country,", "start_timestamp": "00:02:30", "end_timestamp": "00:03:05", "start_second": 150, "end_second": 185, "url": "https://www.youtube.com/watch?v=P_6vDLq64gE&t=150s", "title": "How to spot a liar | Pamela Meyer", "thumbnail": "https://i.ytimg.com/vi/P_6vDLq64gE/hqdefault.jpg"} {"video_id": "P_6vDLq64gE", "text": "they can compromise our security, they can undermine democracy, they can cause the deaths of those that defend us. Deception is actually serious business. This con man, Henry Oberlander, he was such an effective con man, British authorities say he could have undermined the entire banking system of the Western world. And you can't find this guy on Google; you can't find him anywhere. He was interviewed once, and he said the following. He said, \"Look, I've got one rule.\" And this was Henry's rule, he said, \"Look, everyone is willing to give you something.", "start_timestamp": "00:03:05", "end_timestamp": "00:03:35", "start_second": 185, "end_second": 215, "url": "https://www.youtube.com/watch?v=P_6vDLq64gE&t=185s", "title": "How to spot a liar | Pamela Meyer", "thumbnail": "https://i.ytimg.com/vi/P_6vDLq64gE/hqdefault.jpg"} {"video_id": "P_6vDLq64gE", "text": "They're ready to give you something for whatever it is they're hungry for.\" And that's the crux of it. If you don't want to be deceived, you have to know, what is it that you're hungry for? And we all kind of hate to admit it. We wish we were better husbands, better wives, smarter, more powerful, taller, richer -- the list goes on. Lying is an attempt to bridge that gap, to connect our wishes and our fantasies about who we wish we were, how we wish we could be, with what we're really like. 
And boy are we willing to fill in those gaps in our lives with lies.", "start_timestamp": "00:03:35", "end_timestamp": "00:04:09", "start_second": 215, "end_second": 249, "url": "https://www.youtube.com/watch?v=P_6vDLq64gE&t=215s", "title": "How to spot a liar | Pamela Meyer", "thumbnail": "https://i.ytimg.com/vi/P_6vDLq64gE/hqdefault.jpg"} {"video_id": "P_6vDLq64gE", "text": "On a given day, studies show that you may be lied to anywhere from 10 to 200 times. Now granted, many of those are white lies. But in another study, it showed that strangers lied three times within the first 10 minutes of meeting each other. (Laughter) Now when we first hear this data, we recoil. We can't believe how prevalent lying is. We're essentially against lying. But if you look more closely, the plot actually thickens. We lie more to strangers than we lie to coworkers. Extroverts lie more than introverts.", "start_timestamp": "00:04:09", "end_timestamp": "00:04:43", "start_second": 249, "end_second": 283, "url": "https://www.youtube.com/watch?v=P_6vDLq64gE&t=249s", "title": "How to spot a liar | Pamela Meyer", "thumbnail": "https://i.ytimg.com/vi/P_6vDLq64gE/hqdefault.jpg"} {"video_id": "P_6vDLq64gE", "text": "Men lie eight times more about themselves than they do other people. Women lie more to protect other people. If you're an average married couple, you're going to lie to your spouse in one out of every 10 interactions. Now, you may think that's bad. If you're unmarried, that number drops to three. Lying's complex. It's woven into the fabric of our daily and our business lives. We're deeply ambivalent about the truth. 
We parse it out on an as-needed basis, sometimes for very good reasons, other times just because we don't understand the gaps in our lives.", "start_timestamp": "00:04:43", "end_timestamp": "00:05:16", "start_second": 283, "end_second": 316, "url": "https://www.youtube.com/watch?v=P_6vDLq64gE&t=283s", "title": "How to spot a liar | Pamela Meyer", "thumbnail": "https://i.ytimg.com/vi/P_6vDLq64gE/hqdefault.jpg"} {"video_id": "P_6vDLq64gE", "text": "That's truth number two about lying. We're against lying, but we're covertly for it in ways that our society has sanctioned for centuries and centuries and centuries. It's as old as breathing. It's part of our culture, it's part of our history. Think Dante, Shakespeare, the Bible, News of the World. (Laughter) Lying has evolutionary value to us as a species. Researchers have long known that the more intelligent the species, the larger the neocortex, the more likely it is to be deceptive. Now you might remember Koko.", "start_timestamp": "00:05:16", "end_timestamp": "00:05:50", "start_second": 316, "end_second": 350, "url": "https://www.youtube.com/watch?v=P_6vDLq64gE&t=316s", "title": "How to spot a liar | Pamela Meyer", "thumbnail": "https://i.ytimg.com/vi/P_6vDLq64gE/hqdefault.jpg"} {"video_id": "P_6vDLq64gE", "text": "Does anybody remember Koko the gorilla who was taught sign language? Koko was taught to communicate via sign language. Here's Koko with her kitten. It's her cute little, fluffy pet kitten. Koko once blamed her pet kitten for ripping a sink out of the wall. (Laughter) We're hardwired to become leaders of the pack. It starts really, really early. How early? Well babies will fake a cry, pause, wait to see who's coming and then go right back to crying. One-year-olds learn concealment.
(Laughter) Two-year-olds bluff.", "start_timestamp": "00:05:50", "end_timestamp": "00:06:25", "start_second": 350, "end_second": 385, "url": "https://www.youtube.com/watch?v=P_6vDLq64gE&t=350s", "title": "How to spot a liar | Pamela Meyer", "thumbnail": "https://i.ytimg.com/vi/P_6vDLq64gE/hqdefault.jpg"} {"video_id": "P_6vDLq64gE", "text": "Five-year-olds lie outright. They manipulate via flattery. Nine-year-olds, masters of the cover-up. By the time you enter college, you're going to lie to your mom in one out of every five interactions. By the time we enter this work world and we're breadwinners, we enter a world that is just cluttered with Spam, fake digital friends, partisan media, ingenious identity thieves, world-class Ponzi schemers, a deception epidemic -- in short, what one author calls a post-truth society. It's been very confusing for a long time now.", "start_timestamp": "00:06:25", "end_timestamp": "00:07:03", "start_second": 385, "end_second": 423, "url": "https://www.youtube.com/watch?v=P_6vDLq64gE&t=385s", "title": "How to spot a liar | Pamela Meyer", "thumbnail": "https://i.ytimg.com/vi/P_6vDLq64gE/hqdefault.jpg"} {"video_id": "P_6vDLq64gE", "text": "What do you do? Well, there are steps we can take to navigate our way through the morass. Trained liespotters get to the truth 90 percent of the time. The rest of us, we're only 54 percent accurate. Why is it so easy to learn? There are good liars and bad liars. There are no real original liars. We all make the same mistakes. We all use the same techniques. So what I'm going to do is I'm going to show you two patterns of deception. 
And then we're going to look at the hot spots and see if we can find them ourselves.", "start_timestamp": "00:07:03", "end_timestamp": "00:07:31", "start_second": 423, "end_second": 451, "url": "https://www.youtube.com/watch?v=P_6vDLq64gE&t=423s", "title": "How to spot a liar | Pamela Meyer", "thumbnail": "https://i.ytimg.com/vi/P_6vDLq64gE/hqdefault.jpg"} {"video_id": "P_6vDLq64gE", "text": "We're going to start with speech. (Video) Bill Clinton: I want you to listen to me. I'm going to say this again. I did not have sexual relations with that woman, Miss Lewinsky. I never told anybody to lie, not a single time, never. And these allegations are false. And I need to go back to work for the American people. Thank you. (Applause) Pamela Meyer: Okay, what were the telltale signs? Well first we heard what's known as a non-contracted denial. Studies show that people who are overdetermined in their denial", "start_timestamp": "00:07:31", "end_timestamp": "00:08:08", "start_second": 451, "end_second": 488, "url": "https://www.youtube.com/watch?v=P_6vDLq64gE&t=451s", "title": "How to spot a liar | Pamela Meyer", "thumbnail": "https://i.ytimg.com/vi/P_6vDLq64gE/hqdefault.jpg"} {"video_id": "P_6vDLq64gE", "text": "will resort to formal rather than informal language. We also heard distancing language: \"that woman.\" We know that liars will unconsciously distance themselves from their subject, using language as their tool. Now if Bill Clinton had said, \"Well, to tell you the truth ...\" or Richard Nixon's favorite, \"In all candor ...\" he would have been a dead giveaway for any liespotter that knows that qualifying language, as it's called, qualifying language like that, further discredits the subject. 
Now if he had repeated the question in its entirety,", "start_timestamp": "00:08:08", "end_timestamp": "00:08:38", "start_second": 488, "end_second": 518, "url": "https://www.youtube.com/watch?v=P_6vDLq64gE&t=488s", "title": "How to spot a liar | Pamela Meyer", "thumbnail": "https://i.ytimg.com/vi/P_6vDLq64gE/hqdefault.jpg"} {"video_id": "P_6vDLq64gE", "text": "or if he had peppered his account with a little too much detail -- and we're all really glad he didn't do that -- he would have further discredited himself. Freud had it right. Freud said, look, there's much more to it than speech: \"No mortal can keep a secret. If his lips are silent, he chatters with his fingertips.\" And we all do it no matter how powerful you are. We all chatter with our fingertips. I'm going to show you Dominique Strauss-Kahn with Obama who's chattering with his fingertips. (Laughter) Now this brings us to our next pattern, which is body language.", "start_timestamp": "00:08:38", "end_timestamp": "00:09:17", "start_second": 518, "end_second": 557, "url": "https://www.youtube.com/watch?v=P_6vDLq64gE&t=518s", "title": "How to spot a liar | Pamela Meyer", "thumbnail": "https://i.ytimg.com/vi/P_6vDLq64gE/hqdefault.jpg"} {"video_id": "P_6vDLq64gE", "text": "With body language, here's what you've got to do. You've really got to just throw your assumptions out the door. Let the science temper your knowledge a little bit. Because we think liars fidget all the time. Well guess what, they're known to freeze their upper bodies when they're lying. We think liars won't look you in the eyes. Well guess what, they look you in the eyes a little too much just to compensate for that myth. We think warmth and smiles convey honesty, sincerity. 
But a trained liespotter can spot a fake smile a mile away.", "start_timestamp": "00:09:17", "end_timestamp": "00:09:46", "start_second": 557, "end_second": 586, "url": "https://www.youtube.com/watch?v=P_6vDLq64gE&t=557s", "title": "How to spot a liar | Pamela Meyer", "thumbnail": "https://i.ytimg.com/vi/P_6vDLq64gE/hqdefault.jpg"} {"video_id": "P_6vDLq64gE", "text": "Can you all spot the fake smile here? You can consciously contract the muscles in your cheeks. But the real smile's in the eyes, the crow's feet of the eyes. They cannot be consciously contracted, especially if you overdid the Botox. Don't overdo the Botox; nobody will think you're honest. Now we're going to look at the hot spots. Can you tell what's happening in a conversation? Can you start to find the hot spots to see the discrepancies between someone's words and someone's actions? Now, I know it seems really obvious,", "start_timestamp": "00:09:46", "end_timestamp": "00:10:18", "start_second": 586, "end_second": 618, "url": "https://www.youtube.com/watch?v=P_6vDLq64gE&t=586s", "title": "How to spot a liar | Pamela Meyer", "thumbnail": "https://i.ytimg.com/vi/P_6vDLq64gE/hqdefault.jpg"} {"video_id": "P_6vDLq64gE", "text": "but when you're having a conversation with someone you suspect of deception, attitude is by far the most overlooked but telling of indicators. An honest person is going to be cooperative. They're going to show they're on your side. They're going to be enthusiastic. They're going to be willing and helpful to getting you to the truth. They're going to be willing to brainstorm, name suspects, provide details. 
They're going to say, \"Hey, maybe it was those guys in payroll that forged those checks.\" They're going to be infuriated if they sense they're wrongly accused", "start_timestamp": "00:10:18", "end_timestamp": "00:10:47", "start_second": 618, "end_second": 647, "url": "https://www.youtube.com/watch?v=P_6vDLq64gE&t=618s", "title": "How to spot a liar | Pamela Meyer", "thumbnail": "https://i.ytimg.com/vi/P_6vDLq64gE/hqdefault.jpg"} {"video_id": "P_6vDLq64gE", "text": "throughout the entire course of the interview, not just in flashes; they'll be infuriated throughout the entire course of the interview. And if you ask someone honest what should happen to whomever did forge those checks, an honest person is much more likely to recommend strict rather than lenient punishment. Now let's say you're having that exact same conversation with someone deceptive. That person may be withdrawn, look down, lower their voice, pause, be kind of herky-jerky. Ask a deceptive person to tell their story,", "start_timestamp": "00:10:47", "end_timestamp": "00:11:15", "start_second": 647, "end_second": 675, "url": "https://www.youtube.com/watch?v=P_6vDLq64gE&t=647s", "title": "How to spot a liar | Pamela Meyer", "thumbnail": "https://i.ytimg.com/vi/P_6vDLq64gE/hqdefault.jpg"} {"video_id": "P_6vDLq64gE", "text": "they're going to pepper it with way too much detail in all kinds of irrelevant places. And then they're going to tell their story in strict chronological order. And what a trained interrogator does is they come in and in very subtle ways over the course of several hours, they will ask that person to tell that story backwards, and then they'll watch them squirm, and track which questions produce the highest volume of deceptive tells. Why do they do that? Well, we all do the same thing. 
We rehearse our words, but we rarely rehearse our gestures.", "start_timestamp": "00:11:15", "end_timestamp": "00:11:45", "start_second": 675, "end_second": 705, "url": "https://www.youtube.com/watch?v=P_6vDLq64gE&t=675s", "title": "How to spot a liar | Pamela Meyer", "thumbnail": "https://i.ytimg.com/vi/P_6vDLq64gE/hqdefault.jpg"} {"video_id": "P_6vDLq64gE", "text": "We say \"yes,\" we shake our heads \"no.\" We tell very convincing stories, we slightly shrug our shoulders. We commit terrible crimes, and we smile at the delight in getting away with it. Now, that smile is known in the trade as \"duping delight.\" And we're going to see that in several videos moving forward, but we're going to start -- for those of you who don't know him, this is presidential candidate John Edwards who shocked America by fathering a child out of wedlock. We're going to see him talk about getting a paternity test.", "start_timestamp": "00:11:45", "end_timestamp": "00:12:12", "start_second": 705, "end_second": 732, "url": "https://www.youtube.com/watch?v=P_6vDLq64gE&t=705s", "title": "How to spot a liar | Pamela Meyer", "thumbnail": "https://i.ytimg.com/vi/P_6vDLq64gE/hqdefault.jpg"} {"video_id": "P_6vDLq64gE", "text": "See now if you can spot him saying, \"yes\" while shaking his head \"no,\" slightly shrugging his shoulders. (Video) John Edwards: I'd be happy to participate in one. I know that it's not possible that this child could be mine, because of the timing of events. So I know it's not possible. Happy to take a paternity test, and would love to see it happen. Interviewer: Are you going to do that soon? Is there somebody -- JE: Well, I'm only one side. I'm only one side of the test. But I'm happy to participate in one. 
PM: Okay, those head shakes are much easier to spot", "start_timestamp": "00:12:12", "end_timestamp": "00:12:43", "start_second": 732, "end_second": 763, "url": "https://www.youtube.com/watch?v=P_6vDLq64gE&t=732s", "title": "How to spot a liar | Pamela Meyer", "thumbnail": "https://i.ytimg.com/vi/P_6vDLq64gE/hqdefault.jpg"} {"video_id": "P_6vDLq64gE", "text": "once you know to look for them. There are going to be times when someone makes one expression while masking another that just kind of leaks through in a flash. Murderers are known to leak sadness. Your new joint venture partner might shake your hand, celebrate, go out to dinner with you and then leak an expression of anger. And we're not all going to become facial expression experts overnight here, but there's one I can teach you that's very dangerous and it's easy to learn, and that's the expression of contempt.", "start_timestamp": "00:12:43", "end_timestamp": "00:13:10", "start_second": 763, "end_second": 790, "url": "https://www.youtube.com/watch?v=P_6vDLq64gE&t=763s", "title": "How to spot a liar | Pamela Meyer", "thumbnail": "https://i.ytimg.com/vi/P_6vDLq64gE/hqdefault.jpg"} {"video_id": "P_6vDLq64gE", "text": "Now with anger, you've got two people on an even playing field. It's still somewhat of a healthy relationship. But when anger turns to contempt, you've been dismissed. It's associated with moral superiority. And for that reason, it's very, very hard to recover from. Here's what it looks like. It's marked by one lip corner pulled up and in. It's the only asymmetrical expression. 
And in the presence of contempt, whether or not deception follows -- and it doesn't always follow -- look the other way, go the other direction,", "start_timestamp": "00:13:10", "end_timestamp": "00:13:41", "start_second": 790, "end_second": 821, "url": "https://www.youtube.com/watch?v=P_6vDLq64gE&t=790s", "title": "How to spot a liar | Pamela Meyer", "thumbnail": "https://i.ytimg.com/vi/P_6vDLq64gE/hqdefault.jpg"} {"video_id": "P_6vDLq64gE", "text": "reconsider the deal, say, \"No thank you. I'm not coming up for just one more nightcap. Thank you.\" Science has surfaced many, many more indicators. We know, for example, we know liars will shift their blink rate, point their feet towards an exit. They will take barrier objects and put them between themselves and the person that is interviewing them. They'll alter their vocal tone, often making their vocal tone much lower. Now here's the deal. These behaviors are just behaviors. They're not proof of deception.", "start_timestamp": "00:13:41", "end_timestamp": "00:14:14", "start_second": 821, "end_second": 854, "url": "https://www.youtube.com/watch?v=P_6vDLq64gE&t=821s", "title": "How to spot a liar | Pamela Meyer", "thumbnail": "https://i.ytimg.com/vi/P_6vDLq64gE/hqdefault.jpg"} {"video_id": "P_6vDLq64gE", "text": "They're red flags. We're human beings. We make deceptive flailing gestures all over the place all day long. They don't mean anything in and of themselves. But when you see clusters of them, that's your signal. Look, listen, probe, ask some hard questions, get out of that very comfortable mode of knowing, walk into curiosity mode, ask more questions, have a little dignity, treat the person you're talking to with rapport. 
Don't try to be like those folks on \"Law & Order\" and those other TV shows that pummel their subjects into submission.", "start_timestamp": "00:14:14", "end_timestamp": "00:14:44", "start_second": 854, "end_second": 884, "url": "https://www.youtube.com/watch?v=P_6vDLq64gE&t=854s", "title": "How to spot a liar | Pamela Meyer", "thumbnail": "https://i.ytimg.com/vi/P_6vDLq64gE/hqdefault.jpg"} {"video_id": "P_6vDLq64gE", "text": "Don't be too aggressive, it doesn't work. Now, we've talked a little bit about how to talk to someone who's lying and how to spot a lie. And as I promised, we're now going to look at what the truth looks like. But I'm going to show you two videos, two mothers -- one is lying, one is telling the truth. And these were surfaced by researcher David Matsumoto in California. And I think they're an excellent example of what the truth looks like. This mother, Diane Downs, shot her kids at close range, drove them to the hospital while they bled all over the car,", "start_timestamp": "00:14:44", "end_timestamp": "00:15:16", "start_second": 884, "end_second": 916, "url": "https://www.youtube.com/watch?v=P_6vDLq64gE&t=884s", "title": "How to spot a liar | Pamela Meyer", "thumbnail": "https://i.ytimg.com/vi/P_6vDLq64gE/hqdefault.jpg"} {"video_id": "P_6vDLq64gE", "text": "claimed a scraggy-haired stranger did it. And you'll see when you see the video, she can't even pretend to be an agonizing mother. What you want to look for here is an incredible discrepancy between horrific events that she describes and her very, very cool demeanor. And if you look closely, you'll see duping delight throughout this video. (Video) Diane Downs: At night when I close my eyes, I can see Christie reaching her hand out to me while I'm driving, and the blood just kept coming out of her mouth. 
And that -- maybe it'll fade too with time --", "start_timestamp": "00:15:16", "end_timestamp": "00:15:43", "start_second": 916, "end_second": 943, "url": "https://www.youtube.com/watch?v=P_6vDLq64gE&t=916s", "title": "How to spot a liar | Pamela Meyer", "thumbnail": "https://i.ytimg.com/vi/P_6vDLq64gE/hqdefault.jpg"} {"video_id": "P_6vDLq64gE", "text": "but I don't think so. That bothers me the most. PM: Now I'm going to show you a video of an actual grieving mother, Erin Runnion, confronting her daughter's murderer and torturer in court. Here you're going to see no false emotion, just the authentic expression of a mother's agony. (Video) Erin Runnion: I wrote this statement on the third anniversary of the night you took my baby, and you hurt her, and you crushed her, you terrified her until her heart stopped. And she fought, and I know she fought you. But I know she looked at you with those amazing brown eyes,", "start_timestamp": "00:15:43", "end_timestamp": "00:16:27", "start_second": 943, "end_second": 987, "url": "https://www.youtube.com/watch?v=P_6vDLq64gE&t=943s", "title": "How to spot a liar | Pamela Meyer", "thumbnail": "https://i.ytimg.com/vi/P_6vDLq64gE/hqdefault.jpg"} {"video_id": "P_6vDLq64gE", "text": "and you still wanted to kill her. And I don't understand it, and I never will. PM: Okay, there's no doubting the veracity of those emotions. Now the technology around what the truth looks like is progressing on, the science of it. We know, for example, that we now have specialized eye trackers and infrared brain scans, MRI's that can decode the signals that our bodies send out when we're trying to be deceptive. 
And these technologies are going to be marketed to all of us as panaceas for deceit, and they will prove incredibly useful some day.", "start_timestamp": "00:16:27", "end_timestamp": "00:17:03", "start_second": 987, "end_second": 1023, "url": "https://www.youtube.com/watch?v=P_6vDLq64gE&t=987s", "title": "How to spot a liar | Pamela Meyer", "thumbnail": "https://i.ytimg.com/vi/P_6vDLq64gE/hqdefault.jpg"} {"video_id": "P_6vDLq64gE", "text": "But you've got to ask yourself in the meantime: Who do you want on your side of the meeting, someone who's trained in getting to the truth or some guy who's going to drag a 400-pound electroencephalogram through the door? Liespotters rely on human tools. They know, as someone once said, \"Character's who you are in the dark.\" And what's kind of interesting is that today, we have so little darkness. Our world is lit up 24 hours a day. It's transparent with blogs and social networks broadcasting the buzz of a whole new generation of people", "start_timestamp": "00:17:03", "end_timestamp": "00:17:35", "start_second": 1023, "end_second": 1055, "url": "https://www.youtube.com/watch?v=P_6vDLq64gE&t=1023s", "title": "How to spot a liar | Pamela Meyer", "thumbnail": "https://i.ytimg.com/vi/P_6vDLq64gE/hqdefault.jpg"} {"video_id": "P_6vDLq64gE", "text": "that have made a choice to live their lives in public. It's a much more noisy world. So one challenge we have is to remember, oversharing, that's not honesty. Our manic tweeting and texting can blind us to the fact that the subtleties of human decency -- character integrity -- that's still what matters, that's always what's going to matter. So in this much noisier world, it might make sense for us to be just a little bit more explicit about our moral code. 
When you combine the science of recognizing deception", "start_timestamp": "00:17:35", "end_timestamp": "00:18:10", "start_second": 1055, "end_second": 1090, "url": "https://www.youtube.com/watch?v=P_6vDLq64gE&t=1055s", "title": "How to spot a liar | Pamela Meyer", "thumbnail": "https://i.ytimg.com/vi/P_6vDLq64gE/hqdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "[Music] So Wolfgang helpfully laid out the dichotomy between industry people and academics, and experimentalists and computational people, and if you're wondering which one I am, the answer is yes. I'm going to mostly describe work that happened in my lab at Harvard, and work by Bill Lotter, who is actually in industry now, doing a start-up, because that's what everyone does these days. But I recently also took a gig as the director of a new MIT-IBM collaboration, a quarter-billion-dollar AI institute, so if you're", "start_timestamp": "00:00:00", "end_timestamp": "00:00:44", "start_second": 0, "end_second": 44, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=0s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "interested in that, come and find me. And then I'm going to tell you about some deep learning work we've done, but I'm also going to tell you at the end, if I have time, about some experimental work that it's inspired in my lab, just to kind of reinforce this idea that there's a loop we can be driving between models and experiments. So we all know that deep learning has kind of a connection to biology: we have units, and they have synapses and connections between them. Okay, that's part of", "start_timestamp": "00:00:44", "end_timestamp": "00:01:13", "start_second": 44, "end_second": 73, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=44s", "title": "Predictive Coding Models of Perception", "thumbnail":
"https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "it; that's where the artificial neural network part comes from. And then they're deep, and we know that perceptual hierarchies, for instance in the primate, are sort of deep hierarchical systems, and deep CNNs sort of capture that. But really, when you get right down to it, that's been most of the interplay between these two, and I think a lot of us here are trying to think, well, how can we go back to the brain and get more inspiration, but then also, how can we use the deep learning to actually help", "start_timestamp": "00:01:13", "end_timestamp": "00:01:39", "start_second": 73, "end_second": 99, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=73s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "us understand the brain a little bit better. So again, this virtuous loop; you're gonna hear this, I think, again and again and again, and that's where we're coming from. So I'm interested in perception, a different part of the stack than the previous two talks: how we perceive objects and make sense of this terribly complicated flow of information coming in through our senses. And it turns out that, by accident, deep learning systems, convolutional neural networks, turned out", "start_timestamp": "00:01:39", "end_timestamp": "00:02:05", "start_second": 99, "end_second": 125, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=99s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "to be the best model for doing this. So people have built all kinds of computational models of how visual processing works in the ventral visual pathway in primates, but it turns out that
when you just started training deep nets and took the internal representations and compared them to actual neuronal population responses, that was the best model. So here, this is a paper from my former PhD advisor Jim DiCarlo's lab, and basically you see those are sort of model-fit qualities for different kinds of models that came", "start_timestamp": "00:02:05", "end_timestamp": "00:02:31", "start_second": 125, "end_second": 151, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=125s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "before, including some of my own, and then basically, once we had deep nets, those fit the data way better than anything else. And this has been going on, and basically the bottom line is: the better the models get on ImageNet, the better the fit seems to be between what the representational space looks like in the deep net and what the representational space seems to look like in the population. So this led to this idea that, well, maybe vision's just tapped out and this is a solved", "start_timestamp": "00:02:31", "end_timestamp": "00:02:56", "start_second": 151, "end_second": 176, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=151s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "problem, and maybe we don't need to worry about this anymore; the answer is deep nets. There's a problem, because all the work we've done so far is looking at static representations, and really our visual systems are built for dynamic situations. And it's even worse than that, because even if you show a static stimulus, neuronal populations will not produce static outputs, and this is one of the first things you notice when you're a graduate student sticking electrodes into the brain
of the monkey, which I did for five years. If you show a", "start_timestamp": "00:02:56", "end_timestamp": "00:03:21", "start_second": 176, "end_second": 201, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=176s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "static picture of a monkey face, you get these sort of transient responses, and in fact the only way to drive a sustained response in an IT neuron at the end of the ventral visual pathway is to show a dynamic stimulus. So this is very much at odds with how CNNs work, because they have no intrinsic notion of time: you put in an input, you get an output, it's a static thing. So something is clearly going on here, and there's all kinds of interesting rich dynamics here, where sometimes it's sustained, sometimes it's not, sometimes", "start_timestamp": "00:03:21", "end_timestamp": "00:03:43", "start_second": 201, "end_second": 223, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=201s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "it looks like it might be oscillating, so we don't really have a good picture yet of what that's all about, and that part, which seems really salient, isn't actually captured by simple CNNs. And it gets even weirder than that, because there's this great experiment from Carl Olson's lab, where he showed successive presentations of image, image, image, image, and in some cases the second image, the B image, was predicted by the first image. So B4 always happened after A4, and you saw this a couple hundred times, and B5 was", "start_timestamp": "00:03:43", "end_timestamp": "00:04:12", "start_second": 223, "end_second": 252, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=223s", "title": "Predictive Coding Models of Perception", "thumbnail":
"https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "always seen after A5, shown a couple hundred times. And then basically what you see is that you don't get a response to B4 if it's preceded by A4. Does that mean the cell just doesn't like B4? Well, no; it will actually respond to B4, but only when it's preceded by a different image. So there's some kind of higher-order temporal structure that's going on, but again, CNNs just don't capture it, because they don't have any intrinsic notion of time. The other thing, which I think is actually a salient problem for deep learning, is", "start_timestamp": "00:04:12", "end_timestamp": "00:04:39", "start_second": 252, "end_second": 279, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=252s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "that for most of the datasets we have to train these things, you need tons and tons of data. If you want to train a dog detector, you need thousands and thousands of dogs and thousands and thousands of things that aren't dogs, and that's just not the way we learn. I don't sit down with my daughter and show her dog, dog, dog, dog, dog, cat, cat, cat, cat; that's just not the way it works. In fact, we can just do that experiment right now. Does anyone know what this is? Raise your", "start_timestamp": "00:04:39", "end_timestamp": "00:05:02", "start_second": 279, "end_second": 302, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=279s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "hands if you do. Okay, we have a few electrical engineers in the audience. Okay, even though you've only seen this for the first time: is it present in this image? Yes. How many in this image?
How about that one? Yeah, it's a little weird, right? So even though you only saw one example, you were immediately able to determine what was there; you're immediately an expert on this kind of object. That's called one-shot learning. And if you need more evidence that deep nets don't quite work yet: can anyone say what", "start_timestamp": "00:05:02", "end_timestamp": "00:05:30", "start_second": 302, "end_second": 330, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=302s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "this is? It's actually by Meret Oppenheim; it's called Luncheon in Fur, but everyone agrees it's a cup, a saucer, and a spoon. A state-of-the-art deep net says it's a teddy bear. And then it gets worse than that, even when it gets the right object. So a state-of-the-art R-CNN detector correctly detects this as a bird, but you just put in a few objects that don't belong, and it starts, you know, calling this a cat, saying this is a television; so it's calling that a cat. So there's", "start_timestamp": "00:05:30", "end_timestamp": "00:06:02", "start_second": 330, "end_second": 362, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=330s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "something brittle, and adversarial examples; we could go on and on about ways in which deep nets really aren't quite solving the problem yet. And I and other people think that unsupervised learning is an important piece of this: how can we, without labels, gather up a lot of structure about the world and build representations that are really good, which can then feed into things like reinforcement learning and all that kind of stuff? So this is just
a different part of the stack. I think we all agree", "start_timestamp": "00:06:02", "end_timestamp": "00:06:27", "start_second": 362, "end_second": 387, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=362s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "on that. There are multiple kinds of learning happening here, but the kind that I'm really interested in, and have been for a long time, is this idea of temporal learning. If you just look at the environment and let it play out, the environment is almost always showing you its structure. You look at a person doing a tennis serve: you can see how they're articulated, you can see how they're put together, you can see how the shadows move around on an object. That's just all played out in time; you don't need supervision, you can just", "start_timestamp": "00:06:27", "end_timestamp": "00:06:49", "start_second": 387, "end_second": 409, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=387s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "observe and learn a lot about the world. So that's what we're interested in doing, and brains, from the neuroscience literature, seem to be exquisitely tuned for these kinds of temporal statistics. So this is one of my favorite studies: basically, they took faces, rotated them, and had subjects passively watch these. But as a little trick, sometimes they would morph the face as it moved, and people generally didn't notice this. But if you ask them same/different later, the ones that had", "start_timestamp": "00:06:49", "end_timestamp": "00:07:15", "start_second": 409, "end_second": 435, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=409s", "title": "Predictive Coding Models of Perception", "thumbnail":
"https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "morphed, and they had seen them morph a few times, they would incorrectly associate them as being the same object. And then you can even go further; this is something I did over my PhD. You can even present a peripheral object and then, while the subject is saccading, flip it out for a different object before their eyes land, so their eyes land on a different object, and then you ask them to do same/different on peripheral versus foveal, and they'll make incorrect associations. So it seems like the brain is constantly", "start_timestamp": "00:07:15", "end_timestamp": "00:07:39", "start_second": 435, "end_second": 459, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=435s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "collecting up these sorts of temporal associations to make sense of the world. So we have experimental evidence this might be happening, but we don't really have models for it. Now, there have been prior attempts; slow feature analysis is an attempt to extract those signals that are moving slowly, and this has been influential, but it hasn't unleashed a revolution in how we do things. It's an interesting and important set of ideas, but what I want to do is look specifically at prediction as an", "start_timestamp": "00:07:39", "end_timestamp": "00:08:08", "start_second": 459, "end_second": 488, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=459s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "unsupervised learning rule. So basically the idea is: can we just use the idea of predicting what a future frame is going to look like to help us build better representations? And I'm gonna
make the argument that brains are particularly adept at prediction. I'm gonna use this person. Who's this person? Serena. Yes, not Venus, Serena; you guys are our experts. So anyway, she can do a tennis serve at 207 km/h; that's 57 meters per second, which means the ball traverses the court in about 400 milliseconds. Now, the latency from the retina to", "start_timestamp": "00:08:08", "end_timestamp": "00:08:36", "start_second": 488, "end_second": 516, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=488s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "the primary visual cortex is about 60 milliseconds, so that means that V1 is operating 3 meters in the past. And if we go all the way through the ventral visual hierarchy, then we're talking about a hundred-and-seventy-millisecond latency in a human; that's about nine meters. So there's a very deep sense in which, if you think you saw Serena Williams's tennis serve, you couldn't have, because the latency of your brain put it way in the past. So one suggestion that's been made is, if you're returning that", "start_timestamp": "00:08:36", "end_timestamp": "00:09:03", "start_second": 516, "end_second": 543, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=516s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "serve, a big part of what you're doing is looking at the wind-up and predicting where the ball is going to be, and that's where you're putting your racket. And this idea can be shown (there's a little bit of nuance here); this is called the flash-lag illusion. So if you look at this dot right here and then watch this sort of clock hand going around, you see how there's like a little flashing straight
line. Now, is that line behind, in front of, or lined up with the clock hand?", "start_timestamp": "00:09:03", "end_timestamp": "00:09:31", "start_second": 543, "end_second": 571, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=543s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "Behind, of course. And the rule with all optical illusions is: if it looks like it's not lined up, it is lined up, and if it looks like it is lined up, it's not lined up. So this is perfectly collinear, and the classical interpretation (and there's some nuance here) is that basically this line has to go through the full pipeline latency of the visual system, but this tracking line is predictable, so your perception just sort of puts it where it really is, in real time. And then there's another weird thing:", "start_timestamp": "00:09:31", "end_timestamp": "00:09:58", "start_second": 571, "end_second": 598, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=571s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "everyone kind of sees that it looks like it's tilted as well; just hold that thought. All right, so we wanted to get into this. We were working in machine learning and computer vision, and right around the time we were working on this, we got interested in this idea of future-frame video prediction. So basically the idea is that if we see a sequence of images transforming, the goal is to predict what the next image looks like. Now, there was a little bit of work", "start_timestamp": "00:09:58", "end_timestamp": "00:10:26", "start_second": 598, "end_second": 626, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=598s", "title": "Predictive Coding Models of Perception",
"thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "around the time we were doing this, and it's a much bigger field now; just to acknowledge that there are other people working in this area. And we did some very simple things that you would do in deep learning, and again, this fits in the same spirit of what Matt was talking about: you just want the simplest possible models to start with, and just see how much your problem statement makes the problem happen. So basically what we did is we took an autoencoder-type architecture and we", "start_timestamp": "00:10:26", "end_timestamp": "00:10:49", "start_second": 626, "end_second": 649, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=626s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "just wedged a recurrent neural network, in this case an LSTM, in the middle. And we also had a fancier version with GANs; back then there were only four GAN papers, so it seemed exotic, and now there's like 4,000 GAN papers. But anyway, basically what we discovered is that we can build these sorts of generative networks that can actually render faces, and this doesn't seem so surprising anymore, but at the time I was shocked how well this worked. And there's some details about whether you", "start_timestamp": "00:10:49", "end_timestamp": "00:11:14", "start_second": 649, "end_second": 674, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=649s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "use the GAN or not and whether the ear appears or not, but the basic bottom line is this: the future-frame video prediction problem, where we see a sequence of frames and then predict
what the next frame might look like is actually a tractable thing we can do with neural networks now and of course it works on all kinds of different faces including faces you haven't seen before but the interesting thing here was we were really trying to look at what kind of representations we implicitly induce", "start_timestamp": "00:11:14", "end_timestamp": "00:11:38", "start_second": 674, "end_second": 698, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=674s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "when we train a network to do this kind of future-frame prediction so we're training the network with backprop but the loss is coming entirely from how well it reconstructs a future frame that it hasn't seen before and the interesting thing that we find is if we just take that internal representation and pipe it off and ask how well we can decode other things like the identity of the face what we find is if we do a simple you know 50-way face", "start_timestamp": "00:11:38", "end_timestamp": "00:12:05", "start_second": 698, "end_second": 725, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=698s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "recognition task where we need to tell which of 50 people it is and we only get to see 1, 2, 3, 4 and so on views of the person what we see is if we start using these predictive networks they're able to do a little bit better with a little bit less data and this is what we're trying to get towards can we get representations where we can do more with less training data because we're extracting some sort of deep structure of the image now 
this was back in 2015 and then you know we also", "start_timestamp": "00:12:05", "end_timestamp": "00:12:35", "start_second": 725, "end_second": 755, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=725s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "discovered that you know for instance within the representations that we learned you could also find directions that you can move around in that would do things like make the face more male or more female these are less surprising now that GANs are around and we can do all kinds of fancy things with them but what we were really driving towards just to get more to the punchline was not this sort of very simple autoencoder with an RNN in the middle but really getting towards this idea of predictive coding which", "start_timestamp": "00:12:35", "end_timestamp": "00:13:00", "start_second": 755, "end_second": 780, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=755s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "came from neuroscience so the basic idea here this is a paper from 1999 that popularized predictive coding but the idea actually goes back about a decade earlier and the basic idea is this we start with an input we have a feed-forward signal we try and predict away the input and we subtract that off and we only send forward the differences now the original idea here was an efficient coding idea let's try and reduce the number of spikes we have to send because if we subtract away the things we already know then all we need to send forward is the", "start_timestamp": "00:13:00", "end_timestamp": "00:13:30", "start_second": 780, "end_second": 810, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=780s", "title": "Predictive Coding Models of Perception", 
"thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "is the difference from that so what we did was basically to take, yeah, it depends on what we think the signal is if I'm looking at the outside world and I'm trying to estimate the position of a ball then predicting where the ball is going to be is a different story and if I'm trying to estimate something else all right and then in how many examples do we actually know what it is that we're trying to predict I mean really for sure right this is only doing that thing right yeah that's a great question and I think we can discuss", "start_timestamp": "00:13:30", "end_timestamp": "00:14:02", "start_second": 810, "end_second": 842, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=810s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "that like what are we estimating in this case for the purposes of this we just took the simplest path I mean again the modus operandi is take the simplest possible thing and then see where you get with it so here we're actually gonna generate whole frames so we actually want to confabulate and sort of imagine what the future frame's gonna look like so you could imagine in that flash-lag illusion example your percept in that case is actually an imagined thing what you're perceiving at a", "start_timestamp": "00:14:02", "end_timestamp": "00:14:28", "start_second": 842, "end_second": 868, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=842s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "given moment is a fabricated future state so we're gonna take it at the pixel level but I agree because if you were interested in different things you might use 
different sort of targets for your prediction but what we did basically was to take the classic idea of predictive coding and instantiate it in the simplest possible way we could with deep networks so basically what we do is inputs come in here we have a running prediction of what the input should look like and then we subtract them off to", "start_timestamp": "00:14:28", "end_timestamp": "00:14:54", "start_second": 868, "end_second": 894, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=868s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "get an error map and then we send that forward to the next layer and then we have a recurrent layer at each stage of the hierarchy that's trying to build up this prediction and it can get feedback just the same as in the brain and it can also get local recurrence and then you basically just subtract off and there's different ways you can do the subtraction it doesn't much matter and then you get these errors you only send the errors forward through the network so one nice thing about this is at time one", "start_timestamp": "00:14:54", "end_timestamp": "00:15:17", "start_second": 894, "end_second": 917, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=894s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "this is a CNN as nothing's come through so to the extent that CNNs are a good model for the ventral visual pathway this is the CNN on the first time step and then the other thing that's nice about it is it's also a classic generative model so if we put something in on the top it'll render down into an actual predicted image here so we can see what the network is seeing or perceiving at any given moment and we started off by calling these predictive networks and then 
we thought maybe DeepPrediction that's", "start_timestamp": "00:15:17", "end_timestamp": "00:15:44", "start_second": 917, "end_second": 944, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=917s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "not a good name PrettyDeep no BillNet no Can't Believe It's Not AlexNet we ended up with PredNet because there are rules to how you do this and you have to get them right so we're calling these PredNets for better or for worse and what we found is that these networks that had the recurrence at every layer were able to you know our previous ones were only able to get sort of one degree of freedom in a rotating object now we could actually get as many degrees of freedom basically as we wanted in these", "start_timestamp": "00:15:44", "end_timestamp": "00:16:09", "start_second": 944, "end_second": 969, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=944s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "synthetic objects so here are examples and basically for all of these I'm gonna show the actual sequence of incoming images and below it the time-shifted prediction so you can compare the prediction to what actually came and you can see on the first frame you get these weird potato things as it doesn't know which way it's going to go but then once it knows which way it's gonna go it locks on and then you start getting pretty good predictions and it can do this with faces it hasn't seen", "start_timestamp": "00:16:09", "end_timestamp": "00:16:32", "start_second": 969, "end_second": 992, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=969s", "title": "Predictive Coding Models of Perception", "thumbnail": 
"https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "before and all that kind of stuff but again the reason we're doing this is not for future-frame prediction per se but to see if we can actually learn good representations because the idea is in order to predict what's coming next you have to implicitly know lots of things about the structure of the object how light works how shadows work all that kind of stuff and what we find is basically if we try and build decoders again in the same sort of spirit as that face recognition task where we just sort of peel off the", "start_timestamp": "00:16:32", "end_timestamp": "00:16:56", "start_second": 992, "end_second": 1016, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=992s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "representation and just do linear decoding we discover that we can decode lots of different parameters of the image like how fast it's moving what the starting angle is we can also look at things like principal components of the identity so it sort of fits with this idea again that by virtue of learning good predictions you learn good representations that are useful for lots of things which is kind of what you need to have in an unsupervised or a semi-supervised learning sort of setting we also did a face-recognition version", "start_timestamp": "00:16:56", "end_timestamp": "00:17:19", "start_second": 1016, "end_second": 1039, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=1016s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "of this again and what we found is that these PredNets at least at the time were performing on par with or better than the best semi-supervised learning algorithms that were available called ladder networks so 
you might wonder what happens if we put in things that aren't faces what does it do it actually does something pretty reasonable so this is a network that was trained on faces and we put in this image of a top and you know something came out pretty okay it doesn't always turn out okay here's like a little toy car and you can", "start_timestamp": "00:17:19", "end_timestamp": "00:17:47", "start_second": 1039, "end_second": 1067, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=1039s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "see it's desperately trying to turn it into some kind of face it's like this Scream kind of thing trying to happen there but the bottom line is if you train it on complicated things and enough variety you can get networks that can do pretty good predictions on just about anything you want to do and of course just about anything you want to do is cars right autonomous cars at least in the year we did this work that's the thing you want to be doing prediction on because that's where all the money is so", "start_timestamp": "00:17:47", "end_timestamp": "00:18:11", "start_second": 1067, "end_second": 1091, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=1067s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "my student Bill without any prompting took some car-mounted camera datasets so we train it on the KITTI dataset which is in Germany and then what I'm showing you here is the test set which is the Caltech pedestrian dataset and then this is what the predictions look like so they're not perfect they're a little bit blurry if we put a GAN on it we can make it less blurry sure but we're really just focusing on sort of learning what kind of representations we can 
pull out of this and a couple interesting things", "start_timestamp": "00:18:11", "end_timestamp": "00:18:39", "start_second": 1091, "end_second": 1119, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=1091s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "here one is it seems to implicitly know a lot about perspective and flow it knows that things need to expand outwards and it knows the things that are far away need to change less it also knows to infill road here if we look at other examples as well you can see things like it knows a little bit about occlusion so this car is going to occlude this golf cart it knows that that one should go in front it knows all kinds of things implicitly about the flow of content in", "start_timestamp": "00:18:39", "end_timestamp": "00:19:10", "start_second": 1119, "end_second": 1150, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=1119s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "the image and again it's interesting to look at future-frame prediction and we can also do farther-out future-frame predictions as well we can go out as much as five frames it gets blurrier but it's still okay but again the goal here is to say well by virtue of learning how to predict as sort of a surrogate loss can we learn how to decode other things that might be useful and in a car one of the things you might want to learn about is what's the steering angle of the car and what we", "start_timestamp": "00:19:10", "end_timestamp": "00:19:39", "start_second": 1150, "end_second": 1179, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=1150s", "title": "Predictive Coding Models of Perception", "thumbnail": 
"https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "find again in the same way we can take the representation peel that off put it into a linear decoder and we discover that without any prior training on these steering angles this can actually outperform a system that was purpose-built to decode steering angles so this is from Comma a startup that's doing autonomous cars they had a reference CNN on their dataset and basically the PredNet just by learning how to predict the future also implicitly learned what the steering angle of the car was", "start_timestamp": "00:19:39", "end_timestamp": "00:20:05", "start_second": 1179, "end_second": 1205, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=1179s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "that was kind of what you need to do to do the task yeah not just the recurrency but also the fact that you're training it on continuously sampled images is the representation due to the prediction as opposed to merely the sequence of training images all right so we've done predictive versions and we've also done sort of just an autoencoder style and the prediction version outperforms it the details are all in the paper how much of a better representation do you gain by adding the", "start_timestamp": "00:20:05", "end_timestamp": "00:20:39", "start_second": 1205, "end_second": 1239, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=1205s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "predictive component as opposed to just having whatever sequence of images you have you mean specifically for this the steering angle version so the way the Comma one works is it takes in frames and it 
puts them through a CNN so it has the temporal component but it doesn't have any notion of prediction it wasn't trained to do prediction and what we see here is these were the Comma reference CNNs and how well they performed and then this is with different numbers of input frames to the", "start_timestamp": "00:20:39", "end_timestamp": "00:21:11", "start_second": 1239, "end_second": 1271, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=1239s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "decoder and you can see it can do quite well with relatively little data so it seems to be a useful thing but again what we're trying to do is drive towards how might this be a principle that the brain could use to organize itself yeah when we're doing rollouts what do you mean oh into the future so what we do is we recursively re-inject the predictions and then we fine-tune that so basically you take the prediction put it in again get the next prediction put it in again put the next", "start_timestamp": "00:21:11", "end_timestamp": "00:21:50", "start_second": 1271, "end_second": 1310, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=1271s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "prediction in and it starts to get blurrier and blurrier over time but that's basically what we're doing I very briefly showed we can do five frames ahead the predictions just get murkier it makes a little bit of a difference it gets murkier because you don't know which way the car is going to go I mean realistically what we should be doing and what we are doing now is probabilistic we should actually be putting out a probability distribution over all the possible outcomes and that's 
hard but that's what we're doing", "start_timestamp": "00:21:50", "end_timestamp": "00:22:24", "start_second": 1310, "end_second": 1344, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=1310s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "and you know there's a lot of interesting wrinkles but even just with the straight-up version does this come up with a mean outcome because there's a problem that if you don't know if it's gonna go left or right then you don't want to split the difference and get blurry what you'd really like is to have a system that produces samples from the correct distribution of actual outputs you could have and again that's something we're working on something the brain probably does too of course", "start_timestamp": "00:22:24", "end_timestamp": "00:22:50", "start_second": 1344, "end_second": 1370, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=1344s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "then since this is a neuroscience talk we want to go back and look at well does this do something that explains something in neuroscience that we didn't understand before well here's something that everything does it turns out you get Gabors every network you train on anything no matter what you do it's like a rule that you'll get Gabors out and orientation tuning but more interesting than that is this notion that I sort of flagged earlier which is if you put in a static", "start_timestamp": "00:22:50", "end_timestamp": "00:23:17", "start_second": 1370, "end_second": 1397, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=1370s", "title": "Predictive Coding Models of Perception", "thumbnail": 
"https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "input the brain doesn't give you a static output it gives you a dynamic output so there's usually a delay there's a burst of activity and then you have this sort of activity that falls off and sometimes you also get off responses so when the stimulus goes away you get a fresh response and it's probably not a great surprise but PredNets do this as well and it's not hard to see why so this is the average of units in the error representation of the PredNet and it's not hard to figure out why it's doing this basically what happens is on that first", "start_timestamp": "00:23:17", "end_timestamp": "00:23:45", "start_second": 1397, "end_second": 1425, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=1397s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "hit it can't predict anything when the first image flashes on so you get a big bunch of activity but then as it locks on and learns to sort of explain away the data then it goes down and then when the image goes off that's another sort of surprise and then you get another burst of activity yeah if you sampled the pixels randomly like you know Tobi Delbruck's event cameras or something continuous so it's true we're doing this in discrete time because we're talking about LSTMs and things like that", "start_timestamp": "00:23:45", "end_timestamp": "00:24:18", "start_second": 1425, "end_second": 1458, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=1425s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "ConvLSTMs so we can't accommodate continuous time but I don't think that's a huge difference well I mean these aren't like different frames it's like continuous 
frames so it's a relatively smooth progression through the space of images and when you have nothing and then you put on something that is the kind of discontinuity we're talking about it is in discrete time yeah so what we did is we literally put the image up you know blank screen put the image up and then take the image off and so that's like", "start_timestamp": "00:24:18", "end_timestamp": "00:25:01", "start_second": 1458, "end_second": 1501, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=1458s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "the standard sort of primate experiment version of this and basically the PredNet does exactly the same thing has exactly the same dynamics that you would find in primary visual cortex so this is a bit of circumstantial evidence at least that part sort of matches up with expectations originally actually predictive coding was designed to explain a phenomenon called end-stopping which is if you have a bar that's oriented the way a V1 cell likes and then you make the bar longer and longer and longer and longer the response will", "start_timestamp": "00:25:01", "end_timestamp": "00:25:30", "start_second": 1501, "end_second": 1530, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=1501s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "go up to some point and then once you make it longer actually the response gets suppressed so it's like too much of a good thing the longer the bar that's in the orientation it likes the more it actually reduces the response and true to form the PredNet version of predictive coding also has this sort of end-stopping response which is a nice thing to have but it goes further than this there's also this notion of 
surround suppression so if you have a stimulus which is a dot and you", "start_timestamp": "00:25:30", "end_timestamp": "00:25:55", "start_second": 1530, "end_second": 1555, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=1530s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "make it larger and larger and larger the V1 cells will like it better and better and respond more and more up to a point but then beyond that they'll start being suppressed as if there's a suppressive surround around the response and what we find is that the PredNet also has this quality so if you look in again the bottom layer the response goes up to a certain size of a dot stimulus and then it's suppressed but interestingly Rick Born's lab at Harvard back in 2013 found that if you cool downstream visual areas", "start_timestamp": "00:25:55", "end_timestamp": "00:26:24", "start_second": 1555, "end_second": 1584, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=1555s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "so if you cool V2 and then record in V1 so inactivating feedback connections you actually would find that these surround suppression effects were themselves suppressed they go away if you take away the top-down feedback and this is a way easier experiment to do in a network because you can just turn off the feedback connections and see what happens and lo and behold you get almost exactly the same pattern so in the PredNet to the extent there's surround suppression it's happening as a top-down feedback phenomenon just the", "start_timestamp": "00:26:24", "end_timestamp": "00:26:52", "start_second": 1584, "end_second": 1612, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=1584s", "title": "Predictive Coding Models of 
Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "same way it seems to be happening in the primate and then you know as we sort of march through these things we also have these interesting sequence learning effects that you see at the end of the ventral visual pathway and lo and behold if you show a PredNet these things so the PredNets are just trained on natural images like car videos and then we're gonna show it these stimuli that are basically the same as what you would find in these experiments in an untrained PredNet you get these sort of funny effects where you", "start_timestamp": "00:26:52", "end_timestamp": "00:27:22", "start_second": 1612, "end_second": 1642, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=1612s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "flash up an image and it takes a little while to sort of predict it away but then as you train these sequences that are expected you get sharper and sharper transitions between these images and then what happens is basically the same exact result so if B is predictable from A in serial presentations again this is trained on cars and then we just subsequently show it a few you know a comparable number hundreds of examples then B will be suppressed if A explains away B but if we show that very same B and you know", "start_timestamp": "00:27:22", "end_timestamp": "00:27:54", "start_second": 1642, "end_second": 1674, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=1642s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "with something else that's not A that doesn't predict it in front of it we get the exact same result that the Carlos and company got so it's able to 
track with almost exactly the same number of trials some of these interesting sequence learning effects with no extra machinery needed to be added and then you might also ask okay that flash-lag illusion so you might remember you know this clock's going around this bar is flashing it's actually lined up but it looks like it's lagging behind so you might ask", "start_timestamp": "00:27:54", "end_timestamp": "00:28:23", "start_second": 1674, "end_second": 1703, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=1674s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "oh does the same thing happen with the PredNet and the cool thing about generative neural networks is you can see what they're experiencing because we can just visualize the error layer and so this is what it's seeing now there's a caveat here does anyone know what the caveat is yeah so you're getting a double dose so the network is seeing it and then you're seeing it on top of what the network has seen so if we really want to do this we need to look at freeze frames and we can see exactly what the network's", "start_timestamp": "00:28:23", "end_timestamp": "00:28:52", "start_second": 1703, "end_second": 1732, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=1703s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "seeing and there it is there's that weird lagging and there's the tilt too right that was weird there wasn't a satisfying explanation for that tilt before and it just sort of falls out of the PredNet yeah just predictable on a longer time scale yeah so it's the same thing as with the human version of it right like it's predictable but there seems to be some window beyond which it's 
effectively not predictable that's right there's also weird things like you see these weird ripply things happening", "start_timestamp": "00:28:52", "end_timestamp": "00:29:26", "start_second": 1732, "end_second": 1766, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=1732s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "and I don't know what those are but they're really cool and I like looking at them and we'll figure out what they're for eventually and then I love the scientific community and I love arXiv so we posted this and a group in Japan picked it up and started using it so they basically wanted to see what would happen if they showed these illusory motion stimuli to PredNets so the way these work hopefully you're experiencing the illusion right now this is a static image but it looks like these are", "start_timestamp": "00:29:26", "end_timestamp": "00:29:56", "start_second": 1766, "end_second": 1796, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=1766s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "sort of rotating and these ones are not so this is a classic illusory motion sort of stimulus and it turns out if you train a PredNet on these sort of rotating so it's seen real objects rotating in the world or synthetic images of like propellers and things moving in the world and then you compute the flow vectors on the actual predictions if you just let a PredNet look at these static images the PredNet actually produces flow vectors that are consistent with the illusion so when", "start_timestamp": "00:29:56", "end_timestamp": "00:30:26", "start_second": 1796, "end_second": 1826, "url": 
"https://www.youtube.com/watch?v=P0yVuoATjzs&t=1796s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "humans see the illusion the pregnant has optic flow that looks like what we see and when humans don't see the illusion there's no optic flow so so this is this is interesting especially because the usual explanation for these things has to do with sort of epiphenomena about the relative Layton sees visual responses and now that might still be true but this at least gives you another possible explanation which is that you have a system that's trained with the predictive loss it's gonna naturally have some of these biases that come from", "start_timestamp": "00:30:26", "end_timestamp": "00:30:55", "start_second": 1826, "end_second": 1855, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=1826s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "the sort of statistics of the world so with all things neural network you know like we just hide all the like the parameter search that went into it there is indeed so the failure mode in general when it can't when it doesn't learn so if we have like the hyper parameters wrong or whatever is that it predicts the last frame that's usually our benchmark against which to see if it's working so we take the reconstruction lost if you just said it's the same as the last frame and then compare how much we improve the error and then you're", "start_timestamp": "00:30:55", "end_timestamp": "00:31:35", "start_second": 1855, "end_second": 1895, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=1855s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "right it if it's too herky-jerky then it has trouble learning 
so I don't have a good quantitative answer for you but there is indeed a sweet spot and yeah I mean it depends on how fast you run the framerate we were running these at 10 frames per second so that's kind of yeah about 100 milliseconds it depends on how fast things are moving as well so anyway so I promised that we would get back to a neuroscience experiment so I'm going to tell you a story about an actual neuroscience experiment that the", "start_timestamp": "00:31:35", "end_timestamp": "00:32:05", "start_second": 1895, "end_second": 1925, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=1895s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "neuroscience side the wet lab part of my lab did in response to the computational work that was going on in my lab so you might remember that we could read out the steering angle from a self-driving car using the internal representations in the PredNet we also had the idea well you know wouldn't it make sense to actually take the efference copy of the steering wheel all the odometry of the car we could just feed that into the representation surely the network would do a better job of predicting so if I'm trying to predict how the world is going", "start_timestamp": "00:32:05", "end_timestamp": "00:32:33", "start_second": 1925, "end_second": 1953, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=1925s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "to change if I know the wheels turn this way I can make a better prediction about how the world's going to change I might also be able to do cool things like if I could sort of fictively imagine like conditionally generate if I turn the wheel like this what would the world look like so this is something we started doing just because
it made sense from a machine learning standpoint and no big surprise when you don't have an efference copy which is basically the sort of vanilla PredNet you get a certain convergence over time you know so you", "start_timestamp": "00:32:33", "end_timestamp": "00:32:59", "start_second": 1953, "end_second": 1979, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=1953s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "get down to some mean error with training epoch but if you include this extra information unsurprisingly the network converges faster and comes up with a better result so that's all fine and good but if this were true and this is what was happening in the brain that would imply that in visual cortex we should have signals from motor cortex right they should be there and if they're there we should be able to decode them even in the dark potentially so my student Greg thought this was a great idea he was doing 24/7 recordings", "start_timestamp": "00:32:59", "end_timestamp": "00:33:31", "start_second": 1979, "end_second": 2011, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=1979s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "in visual cortex in a rat at the time so he had boatloads of data so we just had that lab meeting well hey just put them in a dark box so there's no light completely light-tight and see if you can actually and you have an accelerometer on the animal's head anyway because you know the minute you're putting electrodes in you might as well put an accelerometer in there too and then so we can record from many tetrodes so we have you know 16 tetrodes 64 electrodes and we can record local field potentials and", "start_timestamp": "00:33:31",
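The idea described here — feeding the efference copy (steering, odometry) into the predictor so it can anticipate self-generated change — can be illustrated with a toy linear system. Everything below (the dynamics, the dimensions) is invented for the sketch; it just shows why adding the action signal lowers next-state prediction error:

```python
import numpy as np

rng = np.random.default_rng(1)
T, d_state, d_action = 2000, 8, 2

# Toy world: the next sensory state depends on the current state AND on
# the motor command, i.e. the efference copy carries real information.
A = 0.9 * np.eye(d_state)
B = rng.standard_normal((d_state, d_action))
states = np.zeros((T, d_state))
actions = rng.standard_normal((T, d_action))
for t in range(T - 1):
    states[t + 1] = states[t] @ A.T + actions[t] @ B.T \
        + 0.1 * rng.standard_normal(d_state)

def heldout_mse(inputs, targets):
    """Fit a least-squares next-state predictor on the first half,
    report mean squared error on the second half."""
    n = len(inputs) // 2
    W, *_ = np.linalg.lstsq(inputs[:n], targets[:n], rcond=None)
    return np.mean((targets[n:] - inputs[n:] @ W) ** 2)

mse_vanilla = heldout_mse(states[:-1], states[1:])                  # state only
mse_efference = heldout_mse(np.hstack([states[:-1], actions[:-1]]),
                            states[1:])                             # state + action
```

The state-only predictor is stuck with the unexplained variance contributed by the motor command, while the efference-copy predictor is limited only by the process noise — the same qualitative gap the speaker reports for the PredNet with and without odometry.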
"end_timestamp": "00:34:00", "start_second": 2011, "end_second": 2040, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=2011s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "then we also have these accelerometer signals so there's a there's a hypothesis that comes from the computational work that says well we should be able to decode you see friends copy signals in you know potentially even in v1 and it turns out that that is exactly what we can do so so these blue bars are in complete darkness and visual cortex so there's no visual stimulus they're not getting any visual optic flow but we can decode a little bit of you know sort of six degree of freedom information about which way the animals", "start_timestamp": "00:34:00", "end_timestamp": "00:34:25", "start_second": 2040, "end_second": 2065, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=2040s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "nose is pointing and then interestingly if you inject mute samal into area M two which is the putative place where these motor signals you friends copies would be coming from you know this is all still new so you know don't get futzed if this ends up not holding up because we still only get more animals at least in the first animal we did we can actually abolish that signal so if we take away motor cortex signals we actually can't any longer decode the position of the animal's head in 3-space so this is a case where we actually", "start_timestamp": "00:34:25", "end_timestamp": "00:34:56", "start_second": 2065, "end_second": 2096, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=2065s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": 
"built them up we built a model that was useful in its own right it helped explain some things about neuroscience more or less along the way without having to fit anything and then by looking at it we can make predictions which then led to experiments that we could do to maybe learn something new about how the brain is organized corollary discharged we can't tell and in an electrophysiology we just have some Tet Rhodes recording from some places and we're not exactly sure where we can't reconstruct the image to fully", "start_timestamp": "00:34:56", "end_timestamp": "00:35:53", "start_second": 2096, "end_second": 2153, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=2096s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "disambiguate that we also have two photon imaging going on in my lab so there is an idea that we could go and be getting you know hundreds of cells that are thousands of cells at a time you know and have a prayer of actually doing those those decoding experiments and that's work that we have that's the song going but but you know right now we're just in the stage where we said can we decode any information and the answer seems to be yes I agree there's a lot of questions that come downstream of that like well you know predictive coding", "start_timestamp": "00:35:53", "end_timestamp": "00:36:24", "start_second": 2153, "end_second": 2184, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=2153s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "implies certain kinds of correlational structures that we can start to look at we also have the ability to image synapses in the two-photon image of the synapses differently separately from the cell body so we can actually look at feedback synapses specifically and ask how the 
information that they contained is different than the cell bodies in layer 2/3 so these are all experiments that are ongoing but I don't have a good answer for you but that's the right kind of question and that's the right kind of question we can ask you know when we", "start_timestamp": "00:36:24", "end_timestamp": "00:36:50", "start_second": 2184, "end_second": 2210, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=2184s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "have these kinds of models they can guide our experiments ok so this is my pitch so apparently at DeepMind it's a circle for me it's just a ping-pong I don't know I like the circle better maybe I'll change it oh yeah that's right so people are picking this up including neuroscientists and we love that if you want to pick it up there it is the code's all free I'm told it's not hard for other people to get working so that's great that's a testament to Bill's hard work to share things so I just want to", "start_timestamp": "00:36:50", "end_timestamp": "00:37:22", "start_second": 2210, "end_second": 2242, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=2210s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "try and get us back on schedule so this is my lab I just want to acknowledge everyone particularly Bill Lotter and then all the people who funded this stuff and then just one shameless plug you know so I've just started as the director of this institute if you're interested in sort of the AI and neuroscience nexus the kinds of stuff that I'm showing you here and you're interested in doing that in industry you know find me during one of the breaks I'd be happy to talk right thanks [Applause] yes so the motor", "start_timestamp": 
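The decoding experiment described a little earlier — reading out head-movement information from visual-cortex recordings made in complete darkness — boils down to fitting a regularized linear decoder from neural features to the accelerometer channels. The data below are synthetic, and the channel counts are only loosely modeled on the setup described in the talk:

```python
import numpy as np

rng = np.random.default_rng(2)
n_samples, n_channels, n_axes = 1500, 64, 3   # e.g. 64 electrodes, 3-axis accelerometer

# Synthetic session: each channel carries a weak head-motion signal
# buried in noise, a stand-in for LFPs recorded in the dark.
head = rng.standard_normal((n_samples, n_axes))           # accelerometer traces
mixing = 0.3 * rng.standard_normal((n_channels, n_axes))
lfp = head @ mixing.T + rng.standard_normal((n_samples, n_channels))

def ridge_decode(X, Y, lam=10.0):
    """Ridge-regression decoder; returns held-out R^2 per output axis."""
    n = len(X) // 2
    W = np.linalg.solve(X[:n].T @ X[:n] + lam * np.eye(X.shape[1]),
                        X[:n].T @ Y[:n])
    pred = X[n:] @ W
    ss_res = np.sum((Y[n:] - pred) ** 2, axis=0)
    ss_tot = np.sum((Y[n:] - Y[n:].mean(axis=0)) ** 2, axis=0)
    return 1.0 - ss_res / ss_tot

r2 = ridge_decode(lfp, head)   # above-chance decoding shows up as r2 > 0
```

The experimental claim maps onto this sketch as "held-out R^2 is above chance in darkness, and drops toward zero after muscimol in M2."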
"00:37:22", "end_timestamp": "00:38:32", "start_second": 2242, "end_second": 2312, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=2242s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "modulation it's there's a whole cottage industry of like my son trackballs when they run versus when they don't run and and basically there's more information about whether the running or not then there seems to be about the visual system that's sort of like a first approximation but but a lot of the there's been a sort of a dichotomy of people who are either either think you know maybe this is a predictive process and kind of thing versus people who think it's more of like an arousal sort of like when the animals running they're", "start_timestamp": "00:38:32", "end_timestamp": "00:38:55", "start_second": 2312, "end_second": 2335, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=2312s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "on and when they're not running they're off or something and you can probably imagine which one I think is the more likely scenario and so so so the idea that we can actually get information about the actual direction the nose is pointing I think is interesting and a little bit surprising and and feels like at odds with the idea that it's just sort of an on versus an off kind of transformation now in terms of vestibular signals yeah I mean we're we've talked about you know the you know the experiments aren't pretty but you", "start_timestamp": "00:38:55", "end_timestamp": "00:39:24", "start_second": 2335, "end_second": 2364, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=2335s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": 
"P0yVuoATjzs", "text": "can imagine what they're like you know it's like I'm just moving I'm moving the rat in the dark Here I am with my rat moving in the dark so you know like sometimes this thought you have to do the science and and it's not it's not glamorous so we say so so we aren't in those experiments I mean it doesn't sort of go against the general theme here like wherever you get the signals from you'd be crazy not to use them I think the fact that we see some reduction in them when we knock out em to suggest a little bit that maybe it's more of a", "start_timestamp": "00:39:24", "end_timestamp": "00:39:51", "start_second": 2364, "end_second": 2391, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=2364s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "motor you friends copy but it's entirely possible that this tabular system the vestibular inputs are getting through to v1 through so m2 is I mean it's like it's sort of like frontal orienting fields in a monkey it's like an orienting area yeah so as near as we can tell it doesn't seem to so we've done let's go to all the marginal comparisons so you'd expect it to have some effect it turns out you can actually just scoop out all of them fun all the motor cortex by my neighbor at Harvard Ben selves Eskie did this", "start_timestamp": "00:39:51", "end_timestamp": "00:40:26", "start_second": 2391, "end_second": 2426, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=2391s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "experiment are you basically just carpet bomb all of motor cortex and the animals can still do all kinds of complicated motor tasks so that that's alarming if you study motor cortex but but from our perspective it's it's actually a good things I mean so it's near as we 
can tell we're not changing the statistics of the movement one of the things we're moving towards is trying to get more quantitative predictions about what the actual predictive codes that are subtracted would be having a model at least gives you a prayer of", "start_timestamp": "00:40:26", "end_timestamp": "00:40:57", "start_second": 2426, "end_second": 2457, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=2426s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "being able to sort of ask those kinds of questions I agree it's complicated it's not the end-all be-all and also in monkey and primate IT there were some beautiful studies that were kind of lesser-known where they showed that in complete darkness there were saccadic eye movement signals present they could decode when saccades happened so these ideas aren't new it's just they're sort of driving us in new directions [Music] but when you're comparing that to the neural state law of attraction the first", "start_timestamp": "00:40:57", "end_timestamp": "00:41:41", "start_second": 2457, "end_second": 2501, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=2457s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "example if you're comparing that to the end-stopping signals or the transient dynamics yeah it turns out if you look at them they almost all look the same which is disheartening for people who want to disambiguate different populations I mean at least when you marginalize them in this way the E neurons look like a pop-and-then-off response but so did the activation neurons and so there are recurrent neurons that are sort of sitting there and like in Matt's talk like they're ones that are doing",
"start_timestamp": "00:41:41", "end_timestamp": "00:42:18", "start_second": 2501, "end_second": 2538, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=2501s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "the things you know they have to be doing but there are ones in the in the recurrent layer that also have the sort of transient dynamics as well so so the long story is like if you were sticking electrodes into a monkey brain like I did for five years you know you wouldn't know which ones of these you were getting and they all look surprisingly similar I mean so in the in the cases where we're doing the cutting we're actually decoding from the ours so but I mean you you can decode lots of things from lots of places the arse", "start_timestamp": "00:42:18", "end_timestamp": "00:42:53", "start_second": 2538, "end_second": 2573, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=2538s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "we thought the arse made a lot of sense because they're the ones who should be holding information about context and representation but I mean there's obviously a lot going on yeah as you get further along though this is a CNN in the static case on the first time step because you basically there's nothing to cancel out this is all zeroes so you just go up through like a CNN so you would expect there to be sort of different levels of representation of higher level and lower level features so it's only long story short it's", "start_timestamp": "00:42:53", "end_timestamp": "00:43:19", "start_second": 2573, "end_second": 2599, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=2573s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} 
{"video_id": "P0yVuoATjzs", "text": "complicated the simple fact of training it in this predictive mode is gonna affect all of these weights so it's going to induce representations even in these A's so the bottom line is like you can pretty much decode from anywhere this is bad news for neuroscience right you can decode from anywhere and all of them qualitatively look very similar in their dynamics so I think about the relationship between what you're doing here and the stuff that I've been working on but honestly really the ideas that were put", "start_timestamp": "00:43:19", "end_timestamp": "00:43:46", "start_second": 2599, "end_second": 2626, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=2599s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "forth by hoc right originally about supervise and that work would suggest that this system should be able to do something even richer which is to do something like visual system identification so in the phenomena that you've talked about you look at the first frame and you can make predictions about what's going to happen because you live in this world but there are situations where I can't predict this observing the dynamics helps me predict the dynamics so if I'm let's say seeing someone with a fused", "start_timestamp": "00:43:46", "end_timestamp": "00:44:20", "start_second": 2626, "end_second": 2660, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=2626s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "joint or something should be able to do that kind of system identification on the I don't know but I agree and we're I think we're all kind of using the same rocket fuel right it's like LSTMs can do a lot and these are sort of two separate instantiations of like
recurrent Nets can can actually do do a lot yeah exactly exactly exactly and and there is a spot that you know they even after you fix things and it's running you know it has state that's that it's accumulating so it can you can do I think system identification so I mean", "start_timestamp": "00:44:20", "end_timestamp": "00:44:56", "start_second": 2660, "end_second": 2696, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=2660s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "this isn't this isn't fit exactly into your what you're talking about but we also had it doing things like predicting balls bouncing around and things like that and you know and it works you know like the balls bounce off the walls yeah and if we have these like have like electrostatic repulsion or something you could imagine them learning dynamically like well you know this is this is the new rules so yeah and this is the kind of setting I would probably prefer to do it in rather than the other one cuz you can't control the natural", "start_timestamp": "00:44:56", "end_timestamp": "00:45:32", "start_second": 2696, "end_second": 2732, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=2696s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "world as easily but but I think it's really interesting I mean that there are limits of the capacity of this thing to learn so it you can't you know if you have very complicated movies and things it doesn't do a great job of predicting the next frame so we've far from crack the nut but it just feels like we're kind of moving maybe a little bit in right direction so so much like the project I spent 20 years of my life working I've had so many people tell me that it ultimately got published at nips as unsupervised", "start_timestamp": 
"00:45:32", "end_timestamp": "00:46:07", "start_second": 2732, "end_second": 2767, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=2732s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "pixel prediction and it was just trying to predict a moving dot no recognition of anything it was only doing the motion different continuity in time I wrestled with it it was awful it worked kind of okay yeah yeah while I was working as a coder and an algorithm guy and stuff in Silicon Valley what some C++ and a job I worked at Redwood Neuroscience but ultimately what happened was I rely I alternate Lee gave a presentation here that it worked with just using matrices and just the algorithms forget the neurons it was that hard then I throw", "start_timestamp": "00:46:07", "end_timestamp": "00:47:02", "start_second": 2767, "end_second": 2822, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=2767s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "away even the matrices and finally got a zero parameter model where I just assumed volumetric 3d computational medium intracellular space so if you assume brains just do 3d computations I so I understand I think I think we understand continuity in time and it's just what he said yeah but trying to build it on top of deep learning which was built around essentially n dimensional hyperspace all we have to do with 3d continuous it's not easier oh this is a soft key in fits paper is that the one I think I think I", "start_timestamp": "00:47:02", "end_timestamp": "00:47:45", "start_second": 2822, "end_second": 2865, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=2822s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", 
"text": "think we I mean I think we I think we saw I think we cited you but the I think I think I think the you know the tools I mean the thing that's nice about deep learning and maybe others have this sort of intuition about it it sort of nice to be able to say here's our like I'm saying like presto magico deep-learning what I'm saying like I think we've gotten to the stage where but that's all abstraction now what we can say is if we just optimize future frame prediction what else obtains and I think that's the right framing for using deep learning to", "start_timestamp": "00:47:45", "end_timestamp": "00:48:14", "start_second": 2865, "end_second": 2894, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=2865s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "sort of it's not that we're like doing neurons or that they are neurons like this has back prop like clearly you know like there's gonna be a couple suggestions on how we can do it better it's more like if you just optimize this one thing and deep learning lets you very effectively optimize that one thing this is what comes out of it right so that that's kind of the mode we're thinking about it and you know could it be reduced to some other simpler thing sure but we just wanted to have the tools that take us to there to that to", "start_timestamp": "00:48:14", "end_timestamp": "00:48:36", "start_second": 2894, "end_second": 2916, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=2894s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "that point and I think we're I think we're out of time unfortunate this or that feature of object this is the kind of work you did in your thesis and your adviser certainly you know should not fire at all if it's correctly representing this thing and so I'm 
trying to picture how the same neuron can be both representing something and not representing something and the question is maybe the answer is in that your last picture when you said so on the first pass through their model basically the representational kind of", "start_timestamp": "00:48:36", "end_timestamp": "00:49:28", "start_second": 2916, "end_second": 2968, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=2916s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "thing is getting passed forward through these error units which are the only things passing anything forward which is a pretty strong constraint in their model and so but then later those same units must be passing forward something that means something different which is you know the non-predictive part only those are two very different things at different times and so is that the correct interpretation which is the sort of there's a time multiplexing where maybe a first pass through the ventral", "start_timestamp": "00:49:28", "end_timestamp": "00:49:56", "start_second": 2968, "end_second": 2996, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=2968s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "stream is the representational pass and could lead to successful recognition of an object and then later that thing is somehow shut off because everything is predictable others have suggested time multiplexing in the responses before so there's a paper from yesterday's group I could be wrong about that where they're basically claiming that the initial pop of activity had different information than the latter part I would say when Jim and I have said in the past that this neuron represents that thing we're talking like",
"start_timestamp": "00:49:56", "end_timestamp": "00:50:33", "start_second": 2996, "end_second": 3033, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=2996s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "P0yVuoATjzs", "text": "spike counts within 100 millisecond window after you know 100 milliseconds after onset of stimulus that that same neuron is categorically not doing that 100 milliseconds after that right like you know there's this weird phenomena that a lot of the neurons are shut down so I think this idea that these neurons the firing of these neurons signal that you know without any further additional context I think that that kind of has to be wrong that's on the level I mean the fact that these things are so dynamic I think the end of the day all the system", "start_timestamp": "00:50:33", "end_timestamp": "00:51:04", "start_second": 3033, "end_second": 3064, "url": "https://www.youtube.com/watch?v=P0yVuoATjzs&t=3033s", "title": "Predictive Coding Models of Perception", "thumbnail": "https://i.ytimg.com/vi/P0yVuoATjzs/maxresdefault.jpg"} {"video_id": "AuqZ4recf0s", "text": "hello my name is Krishna and welcome to my youtube channel today we are basically going to discuss how to learn data science for free now when I say for free as you know that their whole lot of materials available in the internet with respect to Python programming language with respect to data science with respect to machine learning AI deep learning and whole lot of stuffs so what I will do is that in this particular session I'll show you a systematic way how you can basically complete your data science syllabus within three months so", "start_timestamp": "00:00:00", "end_timestamp": "00:00:27", "start_second": 0, "end_second": 27, "url": "https://www.youtube.com/watch?v=AuqZ4recf0s&t=0s", "title": "How To Learn Data Science by Self Study and For Free", "thumbnail": 
"https://i.ytimg.com/vi/AuqZ4recf0s/maxresdefault.jpg"} {"video_id": "AuqZ4recf0s", "text": "that your transition towards data science can be possible just within three months and after that what you can do is that you can also attend interviews by updating your resume now resume part will be discussed later on in the upcoming videos but today we will just try to focus how to learn data science for free I'll show you the systematic way what our YouTube channels you can basically follow you know because there are lot of YouTube channels provides free materials with respect to Python machine learning deep", "start_timestamp": "00:00:27", "end_timestamp": "00:00:52", "start_second": 27, "end_second": 52, "url": "https://www.youtube.com/watch?v=AuqZ4recf0s&t=27s", "title": "How To Learn Data Science by Self Study and For Free", "thumbnail": "https://i.ytimg.com/vi/AuqZ4recf0s/maxresdefault.jpg"} {"video_id": "AuqZ4recf0s", "text": "learning and all apart from that you I'll also be mentioning about various blogs and that finally I'll also be mentioning about the best machine learning book that you can basically use in order to learn data science machine learning very easy now to begin with guys so I have already prepared the word doc over here in my laptop and this particular word doc I will actually upload it in my google drive and share with all of you and that will basically be given in a description box now in this particular video I am basically", "start_timestamp": "00:00:52", "end_timestamp": "00:01:21", "start_second": 52, "end_second": 81, "url": "https://www.youtube.com/watch?v=AuqZ4recf0s&t=52s", "title": "How To Learn Data Science by Self Study and For Free", "thumbnail": "https://i.ytimg.com/vi/AuqZ4recf0s/maxresdefault.jpg"} {"video_id": "AuqZ4recf0s", "text": "going to tell you that how we are going to do how we are going to learn data science with respect to machine learning and deep learning considering Python programming language the reason I'm telling 
you about the Python programming language guys is because I am an expert in Python programming language I've referred a lot of materials I have referred a lot of things and I've also done a lot of self-study so that is the reason why I'm actually telling you this for the R programming language I need to do a little bit more research", "start_timestamp": "00:01:21", "end_timestamp": "00:01:46", "start_second": 81, "end_second": 106, "url": "https://www.youtube.com/watch?v=AuqZ4recf0s&t=81s", "title": "How To Learn Data Science by Self Study and For Free", "thumbnail": "https://i.ytimg.com/vi/AuqZ4recf0s/maxresdefault.jpg"} {"video_id": "AuqZ4recf0s", "text": "which are the best materials apart from that for all the machine learning deep learning techniques for learning purposes you can basically use these materials use these links that I'm basically giving you but for the practical applications you should be able to search through various internet resources okay so to begin with first of all as this is for the Python programming language the first topic that I will take is basically from where we can basically run Python we can basically learn", "start_timestamp": "00:01:46", "end_timestamp": "00:02:12", "start_second": 106, "end_second": 132, "url": "https://www.youtube.com/watch?v=AuqZ4recf0s&t=106s", "title": "How To Learn Data Science by Self Study and For Free", "thumbnail": "https://i.ytimg.com/vi/AuqZ4recf0s/maxresdefault.jpg"} {"video_id": "AuqZ4recf0s", "text": "sorry so I had two channels that most of the time I referred to whenever I wanted to learn about Python whenever I had some queries and the best part of these two channels is Python as you know if you are learning data science you should not just know Python you should also know object-oriented features in Python apart from that you should also know some other frameworks like Flask and Django because these two frameworks are very
very important for deployment of machine", "start_timestamp": "00:02:12", "end_timestamp": "00:02:42", "start_second": 132, "end_second": 162, "url": "https://www.youtube.com/watch?v=AuqZ4recf0s&t=132s", "title": "How To Learn Data Science by Self Study and For Free", "thumbnail": "https://i.ytimg.com/vi/AuqZ4recf0s/maxresdefault.jpg"} {"video_id": "AuqZ4recf0s", "text": "learning models and deep learning models okay during the deployment stage you know that you create a Flask framework or Django framework and you just upload it to some other servers let it be a platform-as-a-service server it may be an infrastructure-as-a-service server like an EC2 instance of AWS or the Heroku platform and many more platforms are there but initially the web framework the micro-service framework is basically created with the help of Flask or Django so the first channel that I", "start_timestamp": "00:02:42", "end_timestamp": "00:03:09", "start_second": 162, "end_second": 189, "url": "https://www.youtube.com/watch?v=AuqZ4recf0s&t=162s", "title": "How To Learn Data Science by Self Study and For Free", "thumbnail": "https://i.ytimg.com/vi/AuqZ4recf0s/maxresdefault.jpg"} {"video_id": "AuqZ4recf0s", "text": "like to mention is Corey Schafer he is a wonderful person he used to work in an IT company before but later on he moved into teaching on his YouTube channel itself he has one of the best Python videos guys the link is basically given in the word doc in the description so you can basically refer to his YouTube channel link and I would suggest if you have any queries go and see that particular channel with respect to Python and it starts from the basic installation part ok now in", "start_timestamp": "00:03:09", "end_timestamp": "00:03:42", "start_second": 189, "end_second": 222, "url": "https://www.youtube.com/watch?v=AuqZ4recf0s&t=189s", "title": "How To Learn Data Science by Self Study and For Free",
"thumbnail": "https://i.ytimg.com/vi/AuqZ4recf0s/maxresdefault.jpg"} {"video_id": "AuqZ4recf0s", "text": "that particular channel you'll also find a playlist on Flask and Django so this was one of the favorite channels that I also refer to for learning Python so that is Corey Schafer the second person is basically sentdex okay sentdex is one of the oldest youtubers who uploads videos on machine learning deep learning Python natural language processing so he is also one of my favorite youtubers and he's a very simple guy you know if I see him I really get that motivation because he provides every material every video", "start_timestamp": "00:03:42", "end_timestamp": "00:04:15", "start_second": 222, "end_second": 255, "url": "https://www.youtube.com/watch?v=AuqZ4recf0s&t=222s", "title": "How To Learn Data Science by Self Study and For Free", "thumbnail": "https://i.ytimg.com/vi/AuqZ4recf0s/maxresdefault.jpg"} {"video_id": "AuqZ4recf0s", "text": "that he uploads he does not have any online tutorial sites he just has some sites where he'll be writing blogs about whatever he's doing in his YouTube channel so that is sentdex and again the link will be given in that particular word doc itself now once you learn the Python programming language guys if you're planning to cover this in three months make sure you give three to four hours daily okay give three to four hours and when", "start_timestamp": "00:04:15", "end_timestamp": "00:04:44", "start_second": 255, "end_second": 284, "url": "https://www.youtube.com/watch?v=AuqZ4recf0s&t=255s", "title": "How To Learn Data Science by Self Study and For Free", "thumbnail": "https://i.ytimg.com/vi/AuqZ4recf0s/maxresdefault.jpg"} {"video_id": "AuqZ4recf0s", "text": "I'm saying give three to four hours those should be more productive hours okay now after this once you finish the Python programming language
now the next thing is that you move towards machine learning now I know that many of them may ask me a question saying that where is the math part where is the linear algebra part where should we learn it from where should we learn the differential calculus and many more things right the statistics parts and all guys don't go in that particular way we need to", "start_timestamp": "00:04:44", "end_timestamp": "00:05:13", "start_second": 284, "end_second": 313, "url": "https://www.youtube.com/watch?v=AuqZ4recf0s&t=284s", "title": "How To Learn Data Science by Self Study and For Free", "thumbnail": "https://i.ytimg.com/vi/AuqZ4recf0s/maxresdefault.jpg"} {"video_id": "AuqZ4recf0s", "text": "complete the data science syllabus within three months so what you do is that pick up the machine learning algorithm and through reverse engineering understand the maths and try to derive it take a use case how to solve it and finally solve that particular use case try to optimize that particular use case try to increase the accuracy so while you're doing all these steps you will be learning statistics you will be learning linear algebra you will be learning differential calculus wherever it is required always", "start_timestamp": "00:05:13", "end_timestamp": "00:05:39", "start_second": 313, "end_second": 339, "url": "https://www.youtube.com/watch?v=AuqZ4recf0s&t=313s", "title": "How To Learn Data Science by Self Study and For Free", "thumbnail": "https://i.ytimg.com/vi/AuqZ4recf0s/maxresdefault.jpg"} {"video_id": "AuqZ4recf0s", "text": "do that reverse-engineering get that knowledge you know now suppose I want to solve linear regression now when I am learning linear regression I know that there will be an equation of a straight line that will be coming into that particular algorithm like Y is equal to MX plus C then after that I'll be deep diving into how do I find out that coefficient value then over there
the gradient descent comes into existence then I learn about how this value is basically calculated through", "start_timestamp": "00:05:39", "end_timestamp": "00:06:03", "start_second": 339, "end_second": 363, "url": "https://www.youtube.com/watch?v=AuqZ4recf0s&t=339s", "title": "How To Learn Data Science by Self Study and For Free", "thumbnail": "https://i.ytimg.com/vi/AuqZ4recf0s/maxresdefault.jpg"} {"video_id": "AuqZ4recf0s", "text": "differential calculus I'm just taking linear regression as one example similarly you have to learn with respect to each and every algorithm so for this I have actually selected three channels one is you need to understand the maths behind each and every algorithm so you can basically refer the machine learning course by Andrew Ng in the deep learning dot AI channel and again the link is basically given in the word doc", "start_timestamp": "00:06:03", "end_timestamp": "00:06:30", "start_second": 363, "end_second": 390, "url": "https://www.youtube.com/watch?v=AuqZ4recf0s&t=363s", "title": "How To Learn Data Science by Self Study and For Free", "thumbnail": "https://i.ytimg.com/vi/AuqZ4recf0s/maxresdefault.jpg"} {"video_id": "AuqZ4recf0s", "text": "itself the other thing is that I have also uploaded many videos on machine learning and some of the feedback that I got is that the machine learning playlist is not that ordered you know so what you can do is that whenever you are searching my videos suppose you are learning simple linear regression just search that keyword and just put my name in front of that you will be getting the whole explanation apart from that I have also uploaded videos with respect to practical applications ok so you'll be able to do", "start_timestamp": "00:06:30", "end_timestamp": "00:06:57", "start_second": 390, "end_second": 417, "url": "https://www.youtube.com/watch?v=AuqZ4recf0s&t=390s", "title": "How To Learn Data
Science by Self Study and For Free", "thumbnail": "https://i.ytimg.com/vi/AuqZ4recf0s/maxresdefault.jpg"} {"video_id": "AuqZ4recf0s", "text": "that now I am trying to order that particular playlist and I'll be making sure that whatever videos I upload in the future will also be ordered so I would like to say one is Andrew Ng one is my channel with respect to machine learning if you just want to know the maths about each and every machine learning algorithm go and see Andrew Ng from deep learning dot AI and then you also have the sentdex channel again I'm referring sentdex because he has uploaded videos on Python he's uploaded videos on Flask Django apart", "start_timestamp": "00:06:57", "end_timestamp": "00:07:27", "start_second": 417, "end_second": 447, "url": "https://www.youtube.com/watch?v=AuqZ4recf0s&t=417s", "title": "How To Learn Data Science by Self Study and For Free", "thumbnail": "https://i.ytimg.com/vi/AuqZ4recf0s/maxresdefault.jpg"} {"video_id": "AuqZ4recf0s", "text": "from that he's also uploaded videos with respect to machine learning algorithms ok and the best part is that he's not only uploaded machine learning he has uploaded deep learning also so I'm going to refer him again in the later links when I'm discussing deep learning so machine learning three things one is my channel one is sentdex and the other one is Andrew Ng from deep learning dot AI and you know Andrew Ng has explained the complete maths sometimes what happens is that you will not be able to follow it", "start_timestamp": "00:07:27", "end_timestamp": "00:07:59", "start_second": 447, "end_second": 479, "url": "https://www.youtube.com/watch?v=AuqZ4recf0s&t=447s", "title": "How To Learn Data Science by Self Study and For Free", "thumbnail": "https://i.ytimg.com/vi/AuqZ4recf0s/maxresdefault.jpg"} {"video_id": "AuqZ4recf0s", "text": "but again if you just see it again and again you'll be able to follow it because the math is pretty much simple but what I am
making sure is that in my channel I will be uploading a lot of maths whenever I am explaining some specific algorithms and that will continue going on in the Andrew Ng channel of deep learning dot AI you will not find any practical application so for the practical application you can either refer my videos or you can refer sentdex videos ok over there sentdex does not explain the maths behind any", "start_timestamp": "00:07:59", "end_timestamp": "00:08:25", "start_second": 479, "end_second": 505, "url": "https://www.youtube.com/watch?v=AuqZ4recf0s&t=479s", "title": "How To Learn Data Science by Self Study and For Free", "thumbnail": "https://i.ytimg.com/vi/AuqZ4recf0s/maxresdefault.jpg"} {"video_id": "AuqZ4recf0s", "text": "machine learning algorithm sentdex is more focused towards implementation of machine learning algorithms ok so this was one now I have also seen some people who get really attracted not just by seeing written equations they like some animation kind of explanation so if you want some animation kind of explanation there is one channel that I went through which is StatQuest with Josh Starmer ok so this is one of the good channels where they'll show a lot of animations to explain each and every machine learning algorithm", "start_timestamp": "00:08:25", "end_timestamp": "00:08:56", "start_second": 505, "end_second": 536, "url": "https://www.youtube.com/watch?v=AuqZ4recf0s&t=505s", "title": "How To Learn Data Science by Self Study and For Free", "thumbnail": "https://i.ytimg.com/vi/AuqZ4recf0s/maxresdefault.jpg"} {"video_id": "AuqZ4recf0s", "text": "with you know along with that explanation the theoretical explanation of how it is basically done including all the statistics linear algebra differential calculus and different kinds of maths formulas again the link is basically given in the docx file itself okay then after that you have natural language processing for natural language processing you can basically go through my playlist because I
have uploaded around eight to nine videos with respect to machine learning and I'm planning to upload with respect to deep learning also where I'll", "start_timestamp": "00:08:56", "end_timestamp": "00:09:24", "start_second": 536, "end_second": 564, "url": "https://www.youtube.com/watch?v=AuqZ4recf0s&t=536s", "title": "How To Learn Data Science by Self Study and For Free", "thumbnail": "https://i.ytimg.com/vi/AuqZ4recf0s/maxresdefault.jpg"} {"video_id": "AuqZ4recf0s", "text": "be implementing a lot of NLP things with the help of word2vec and all the different kinds of tools or libraries that are basically present in natural language processing the other channel is basically again sentdex sentdex has uploaded around twenty to thirty videos with respect to natural language processing and then you can refer that also now let us go to deep learning for deep learning I have selected two channels one is Andrew Ng again from deep learning dot AI again", "start_timestamp": "00:09:24", "end_timestamp": "00:09:51", "start_second": 564, "end_second": 591, "url": "https://www.youtube.com/watch?v=AuqZ4recf0s&t=564s", "title": "How To Learn Data Science by Self Study and For Free", "thumbnail": "https://i.ytimg.com/vi/AuqZ4recf0s/maxresdefault.jpg"} {"video_id": "AuqZ4recf0s", "text": "any theoretical components any theoretical things that you need to understand about deep learning can be seen there for that link again just watch the word doc file again in that I mentioned the link also the second channel is my channel because the complete deep learning playlist that I have created is completely in order okay tutorial one tutorial two tutorial three like that I have actually created till tutorial 22 and I'm including both maths understanding of the algorithms understanding about neural networks and", "start_timestamp": "00:09:51", "end_timestamp": "00:10:20", "start_second": 591, "end_second": 620, "url":
"https://www.youtube.com/watch?v=AuqZ4recf0s&t=591s", "title": "How To Learn Data Science by Self Study and For Free", "thumbnail": "https://i.ytimg.com/vi/AuqZ4recf0s/maxresdefault.jpg"} {"video_id": "AuqZ4recf0s", "text": "how to implement that with the help of Keras and Python right so I have done that I have ordered it you know first of all I explained about artificial intelligence artificial neural networks and in artificial neural networks I have explained a lot of things like back propagation how to update weights biases all those things and I am also showing it practically how you can basically do the practical implementation with the help of Keras apart from that how can you basically", "start_timestamp": "00:10:20", "end_timestamp": "00:10:45", "start_second": 620, "end_second": 645, "url": "https://www.youtube.com/watch?v=AuqZ4recf0s&t=620s", "title": "How To Learn Data Science by Self Study and For Free", "thumbnail": "https://i.ytimg.com/vi/AuqZ4recf0s/maxresdefault.jpg"} {"video_id": "AuqZ4recf0s", "text": "optimize your problem statement also now apart from that you can basically refer my channel I'm also going to complete the whole deep learning playlist still there are 15 to 20 videos so in total the overall playlist will be having 40 videos okay which will be including LSTM RNN and CNN and everything okay everything and I'm serious about it because I started that playlist still now 22 videos have been completed and I still have plans to upload more and more now after learning all this you'll be having", "start_timestamp": "00:10:45", "end_timestamp": "00:11:15", "start_second": 645, "end_second": 675, "url": "https://www.youtube.com/watch?v=AuqZ4recf0s&t=645s", "title": "How To Learn Data Science by Self Study and For Free", "thumbnail": "https://i.ytimg.com/vi/AuqZ4recf0s/maxresdefault.jpg"} {"video_id": "AuqZ4recf0s", "text": "a lot of ideas and this whole thing whatever I have
explained is with respect to YouTube channels right now okay a whole lot of materials are available on YouTube and the next thing is that you refer GitHub links you know GitHub links so suppose you have a problem about linear regression go and search linear regression github okay so you'll be getting abundant materials in Google search itself and you can basically take one of the problems and start solving it now the next thing is that after learning", "start_timestamp": "00:11:15", "end_timestamp": "00:11:43", "start_second": 675, "end_second": 703, "url": "https://www.youtube.com/watch?v=AuqZ4recf0s&t=675s", "title": "How To Learn Data Science by Self Study and For Free", "thumbnail": "https://i.ytimg.com/vi/AuqZ4recf0s/maxresdefault.jpg"} {"video_id": "AuqZ4recf0s", "text": "all the skills you have to do a lot of practice projects right so I have created one data science project playlist wherein I have uploaded more than 50 videos 50 different use cases and those are specifically Kaggle use cases that I have taken and solved with the help of Python and machine learning and deep learning so you can basically refer those projects try to solve them again all the code is basically given in the GitHub itself you can refer my GitHub and you can get all the details ok so", "start_timestamp": "00:11:43", "end_timestamp": "00:12:12", "start_second": 703, "end_second": 732, "url": "https://www.youtube.com/watch?v=AuqZ4recf0s&t=703s", "title": "How To Learn Data Science by Self Study and For Free", "thumbnail": "https://i.ytimg.com/vi/AuqZ4recf0s/maxresdefault.jpg"} {"video_id": "AuqZ4recf0s", "text": "after learning all these things the last part is data science projects you should be able to implement various data science programs finally create your resume you know include everything that you have learnt into your resume that's it now first we have discussed YouTube channels now the second thing is that I'm
also going to refer some of the blogs which have almost all of the problem solutions of every machine learning algorithm or deep learning algorithm one is the Towards Data Science blog and the other one is Medium with", "start_timestamp": "00:12:12", "end_timestamp": "00:12:40", "start_second": 732, "end_second": 760, "url": "https://www.youtube.com/watch?v=AuqZ4recf0s&t=732s", "title": "How To Learn Data Science by Self Study and For Free", "thumbnail": "https://i.ytimg.com/vi/AuqZ4recf0s/maxresdefault.jpg"} {"video_id": "AuqZ4recf0s", "text": "respect to machine learning and deep learning so all the links are basically given in the word doc itself ok so in short this was there and the last part is basically about books now one of the best machine learning books that I have also read you know the link is basically given in the description about the best machine learning book it is basically from the O'Reilly publisher if you go and see that particular book it is the best book guys the best book on machine learning and deep learning I think every fresher who", "start_timestamp": "00:12:40", "end_timestamp": "00:13:13", "start_second": 760, "end_second": 793, "url": "https://www.youtube.com/watch?v=AuqZ4recf0s&t=760s", "title": "How To Learn Data Science by Self Study and For Free", "thumbnail": "https://i.ytimg.com/vi/AuqZ4recf0s/maxresdefault.jpg"} {"video_id": "AuqZ4recf0s", "text": "wants to make that transition towards data science machine learning they should go and read this particular book and this book is basically you know a boon to all the data scientists because the author is basically Aurélien Géron sorry if I'm pronouncing it wrong but this particular person has written this book and the book name is basically Hands-On Machine Learning with Scikit-Learn and TensorFlow okay again the link is basically given in the description and the publisher name is O'Reilly and I'll tell you this", "start_timestamp":
"00:13:13", "end_timestamp": "00:13:49", "start_second": 793, "end_second": 829, "url": "https://www.youtube.com/watch?v=AuqZ4recf0s&t=793s", "title": "How To Learn Data Science by Self Study and For Free", "thumbnail": "https://i.ytimg.com/vi/AuqZ4recf0s/maxresdefault.jpg"} {"video_id": "AuqZ4recf0s", "text": "book is very cheap guys I see there are a lot of free PDFs also available for this book I don't want to share those PDFs I don't even want to research and find out the free PDFs over there because this author has written this book so nicely and I don't want to disrespect him by just taking the free PDFs and distributing them to you okay I have not even researched and found out whether there is any PDF or not I basically bought this book the paperback version and if", "start_timestamp": "00:13:49", "end_timestamp": "00:14:21", "start_second": 829, "end_second": 861, "url": "https://www.youtube.com/watch?v=AuqZ4recf0s&t=829s", "title": "How To Learn Data Science by Self Study and For Free", "thumbnail": "https://i.ytimg.com/vi/AuqZ4recf0s/maxresdefault.jpg"} {"video_id": "AuqZ4recf0s", "text": "you want I can also review this particular book in one of my next videos but awesome book you know it is basically you have Python you have feature engineering you have machine learning you have everything okay and apart from that guys you can buy this book and it is hardly around 1,500 INR if you consider this in terms of dollars hardly fifteen to twenty dollars you can basically buy this particular book again the link is basically given in the", "start_timestamp": "00:14:21", "end_timestamp": "00:14:50", "start_second": 861, "end_second": 890, "url": "https://www.youtube.com/watch?v=AuqZ4recf0s&t=861s", "title": "How To Learn Data Science by Self Study and For Free", "thumbnail":
"https://i.ytimg.com/vi/AuqZ4recf0s/maxresdefault.jpg"} {"video_id": "AuqZ4recf0s", "text": "description go ahead and buy it I know this particular session is all about data science for free but I'd suggest this particular book you know don't search for free PDFs keep this book handy because it will help you for a lifetime okay whenever you want you can basically read this now the next thing is that there are small parts like feature engineering feature selection what you have to do is that I'll be sharing a very good GitHub link about feature engineering and feature selection which I found through the", "start_timestamp": "00:14:50", "end_timestamp": "00:15:20", "start_second": 890, "end_second": 920, "url": "https://www.youtube.com/watch?v=AuqZ4recf0s&t=890s", "title": "How To Learn Data Science by Self Study and For Free", "thumbnail": "https://i.ytimg.com/vi/AuqZ4recf0s/maxresdefault.jpg"} {"video_id": "AuqZ4recf0s", "text": "internet and I'll be sharing that link in this word doc itself what you have to do is go inside that link there are many materials present inside just refer each and every notebook file it is clearly written what feature engineering is all about how feature engineering is basically done similarly there are around 10 to 20 materials Jupyter notebook files you just have to read them just have to execute them and by that you will be able to understand a lot of things and similarly for the feature selection so", "start_timestamp": "00:15:20", "end_timestamp": "00:15:47", "start_second": 920, "end_second": 947, "url": "https://www.youtube.com/watch?v=AuqZ4recf0s&t=920s", "title": "How To Learn Data Science by Self Study and For Free", "thumbnail": "https://i.ytimg.com/vi/AuqZ4recf0s/maxresdefault.jpg"} {"video_id": "AuqZ4recf0s", "text": "I'll be sharing those two GitHub links and thanks to the author I'll also be mentioning the author over there who had actually provided those materials and
it is available on the Internet completely for free so I'll be providing those two links to you and yes that is all about this particular preparation guys and I think if you are able to give around three to four hours I think within three months you'll be able to complete this whole data science syllabus and after three months you'll also be giving the interviews because you have learnt", "start_timestamp": "00:15:47", "end_timestamp": "00:16:14", "start_second": 947, "end_second": 974, "url": "https://www.youtube.com/watch?v=AuqZ4recf0s&t=947s", "title": "How To Learn Data Science by Self Study and For Free", "thumbnail": "https://i.ytimg.com/vi/AuqZ4recf0s/maxresdefault.jpg"} {"video_id": "AuqZ4recf0s", "text": "a lot of things done data science projects you have practiced things in Kaggle and make sure you practice a lot of projects after completing all these things through this YouTube channel through blogs through this particular book that I have told you about from the O'Reilly publisher which is basically Hands-On Machine Learning with Scikit-Learn and TensorFlow that was all about this particular video I hope you liked this particular video share it with all your friends do subscribe to this channel if you're not already subscribed I'll see", "start_timestamp": "00:16:14", "end_timestamp": "00:16:39", "start_second": 974, "end_second": 999, "url": "https://www.youtube.com/watch?v=AuqZ4recf0s&t=974s", "title": "How To Learn Data Science by Self Study and For Free", "thumbnail": "https://i.ytimg.com/vi/AuqZ4recf0s/maxresdefault.jpg"} {"video_id": "1DYmgoij4FQ", "text": "i have read literally thousands of books on modern psychology metaphysics ancient magic buddhism yogism theosophy christian science unity truth new thought and many other dealings with what i call mind stuff many of these books were nonsensical others strange and many very profound gradually i discovered that there is a golden thread that runs through all the teachings and makes them work for
those who sincerely accept and apply them that thread can be named in a single word belief it is the same element or factor", "start_timestamp": "00:00:00", "end_timestamp": "00:00:42", "start_second": 0, "end_second": 42, "url": "https://www.youtube.com/watch?v=1DYmgoij4FQ&t=0s", "title": "The Secret Knowledge Of Believing", "thumbnail": "https://i.ytimg.com/vi/1DYmgoij4FQ/hqdefault.jpg"} {"video_id": "1DYmgoij4FQ", "text": "belief which causes people to be cured through mental healing enables others to climb the ladder of success and gets phenomenal results for all who accept it why belief as a miracle worker is something that cannot be satisfactorily explained but have no doubt about it there's genuine magic in believing the magic of believing became a phrase around which my thoughts steadily revolved i've tried to put down these thoughts as simply and as clearly as i could so that everyone can understand my hope is that anyone who listens will", "start_timestamp": "00:00:42", "end_timestamp": "00:01:21", "start_second": 42, "end_second": 81, "url": "https://www.youtube.com/watch?v=1DYmgoij4FQ&t=42s", "title": "The Secret Knowledge Of Believing", "thumbnail": "https://i.ytimg.com/vi/1DYmgoij4FQ/hqdefault.jpg"} {"video_id": "1DYmgoij4FQ", "text": "be helped in reaching their goal in life so you begin with desire if you ever hope to achieve anything or gain more than you have now however as we shall see there is more to it than mere desire it has been said that thought attracts that upon which it is directed thought attracts that upon which it is directed it was job who said for the thing which i greatly feared has come upon me our fearful thoughts are just as creative or just as magnetic and attracting troubles to us as are the constructive and positive ones and", "start_timestamp": "00:01:21", "end_timestamp": "00:02:02", "start_second": 81, "end_second": 122, "url": "https://www.youtube.com/watch?v=1DYmgoij4FQ&t=81s", "title": "The Secret Knowledge Of 
Believing", "thumbnail": "https://i.ytimg.com/vi/1DYmgoij4FQ/hqdefault.jpg"} {"video_id": "1DYmgoij4FQ", "text": "attracting positive results so no matter what the character of the thought it does create after its kind when this sinks into a man's consciousness he gets some inkling of the awe-inspiring power which is his to use i cling to the theory that while thoughts do create an exercise control far beyond any limits yet known to man they create only according to their pitch intensity emotional quality depth of feeling or vibratory plane in other words comparable to the wavelength and wattage of a radio station thoughts have a creative or controlling", "start_timestamp": "00:02:02", "end_timestamp": "00:02:43", "start_second": 122, "end_second": 163, "url": "https://www.youtube.com/watch?v=1DYmgoij4FQ&t=122s", "title": "The Secret Knowledge Of Believing", "thumbnail": "https://i.ytimg.com/vi/1DYmgoij4FQ/hqdefault.jpg"} {"video_id": "1DYmgoij4FQ", "text": "force in the exact ratio of their constancy intensity and power let me try to clarify that while many explanations have been offered no one knows whether thought is a form of electrical energy or something else yet to be defined but i have been an experimenter in that branch of electricity known as high frequency pioneered by the great genius nikola tesla and whenever i think of thought and its radiations and vibrations i instinctively link them up with electricity and its phenomena in this manner they become more", "start_timestamp": "00:02:43", "end_timestamp": "00:03:22", "start_second": 163, "end_second": 202, "url": "https://www.youtube.com/watch?v=1DYmgoij4FQ&t=163s", "title": "The Secret Knowledge Of Believing", "thumbnail": "https://i.ytimg.com/vi/1DYmgoij4FQ/hqdefault.jpg"} {"video_id": "1DYmgoij4FQ", "text": "understandable to me all persons living in high altitudes have felt and sometimes observed the electric spark resulting from walking across the room then touching some metallic substance that of 
course is a form of static electricity generated by friction it gives you an idea of how one kind of electricity can be developed through the body sigmund freud the famous austrian psychoanalyst brought the world's attention to the hypothesis that there was a powerful force within us an unilluminated part of the mind", "start_timestamp": "00:03:22", "end_timestamp": "00:03:57", "start_second": 202, "end_second": 237, "url": "https://www.youtube.com/watch?v=1DYmgoij4FQ&t=202s", "title": "The Secret Knowledge Of Believing", "thumbnail": "https://i.ytimg.com/vi/1DYmgoij4FQ/hqdefault.jpg"} {"video_id": "1DYmgoij4FQ", "text": "separate from the conscious mind constantly at work molding our thoughts feelings and actions others have called this division of our mental existence the soul some call it the super ego the inner power the super consciousness the unconscious the subconscious and various other names it isn't an organ or so-called physical matter such as we know the brain to be nevertheless it is there and from the beginning of recorded time man has known that it exists the ancients often referred to it as the spirit paracelsus called it the will others", "start_timestamp": "00:03:57", "end_timestamp": "00:04:35", "start_second": 237, "end_second": 275, "url": "https://www.youtube.com/watch?v=1DYmgoij4FQ&t=237s", "title": "The Secret Knowledge Of Believing", "thumbnail": "https://i.ytimg.com/vi/1DYmgoij4FQ/hqdefault.jpg"} {"video_id": "1DYmgoij4FQ", "text": "have called it the mind an adjunct to the brain some have referred to it as conscience the creator of the still small voice within still others called it intelligence and have asserted that it is a part of the supreme intelligence to which we are all linked no matter what we call it i prefer the word subconscious it is recognized as the essence of life and the limits of its powers are unknown it never sleeps it comes to our support in times of great trouble it warns us of impending danger often it aids us in 
what seems impossible", "start_timestamp": "00:04:35", "end_timestamp": "00:05:13", "start_second": 275, "end_second": 313, "url": "https://www.youtube.com/watch?v=1DYmgoij4FQ&t=275s", "title": "The Secret Knowledge Of Believing", "thumbnail": "https://i.ytimg.com/vi/1DYmgoij4FQ/hqdefault.jpg"} {"video_id": "1DYmgoij4FQ", "text": "it guides us in many ways and when properly employed performs so-called miracles perhaps the most effective method of bringing the subconscious into practical action is through the process of making mental pictures using the imagination perfecting an image of the thing or situation as you would have it exist in physical form this is usually referred to as visualization however before this visualization can work you must really believe i refer now to deep-seated belief a firm and positive conviction that goes", "start_timestamp": "00:05:13", "end_timestamp": "00:05:54", "start_second": 313, "end_second": 354, "url": "https://www.youtube.com/watch?v=1DYmgoij4FQ&t=313s", "title": "The Secret Knowledge Of Believing", "thumbnail": "https://i.ytimg.com/vi/1DYmgoij4FQ/hqdefault.jpg"} {"video_id": "1DYmgoij4FQ", "text": "through every fiber of your being when you believe it heart and soul as the saying goes now call it a phase of emotion a spiritual force a type of electrical vibration anything you please but that's the force that brings outstanding results it sets the law of attraction into operation enables sustained thought to correlate with its object this belief changes the tempo of the mind or thought frequency and like a huge magnet draws the subconscious forces into play changing your whole aura and affecting everything about you", "start_timestamp": "00:05:54", "end_timestamp": "00:06:30", "start_second": 354, "end_second": 390, "url": "https://www.youtube.com/watch?v=1DYmgoij4FQ&t=354s", "title": "The Secret Knowledge Of Believing", "thumbnail": "https://i.ytimg.com/vi/1DYmgoij4FQ/hqdefault.jpg"} {"video_id": "1DYmgoij4FQ", "text": "and 
often people and objects at great distances it brings into your individual sphere of life results that are sometimes startling after studying the various mystical religions and different teachings and systems of mind stuff one is impressed with the fact that they all have the same basic modus operandi and that is through repetition the repeating of certain mantras words formulas or just plain mumbo jumbo is common with witch doctors voodoo high priests hexers and many other followers of strange cults they use them to evoke the spirits or", "start_timestamp": "00:06:30", "end_timestamp": "00:07:08", "start_second": 390, "end_second": 428, "url": "https://www.youtube.com/watch?v=1DYmgoij4FQ&t=390s", "title": "The Secret Knowledge Of Believing", "thumbnail": "https://i.ytimg.com/vi/1DYmgoij4FQ/hqdefault.jpg"} {"video_id": "1DYmgoij4FQ", "text": "work black magic one finds the same principle at work in chants incantations litanies daily lessons also the frequent praying of the buddhists and muslims alike the affirmation of the theosophists and the followers of unity the absolute truth new thought divine science in fact it is basic to all religions although here it is white magic instead of black magic this brings us to the law of suggestion through which all forces operating within its limits are capable of producing phenomenal results that is it is the power of suggestion", "start_timestamp": "00:07:08", "end_timestamp": "00:07:47", "start_second": 428, "end_second": 467, "url": "https://www.youtube.com/watch?v=1DYmgoij4FQ&t=428s", "title": "The Secret Knowledge Of Believing", "thumbnail": "https://i.ytimg.com/vi/1DYmgoij4FQ/hqdefault.jpg"} {"video_id": "1DYmgoij4FQ", "text": "and auto suggestion your own to yourself or hetero suggestion coming to you from outside sources that starts the machinery into operation or causes the subconscious mind to begin its creative work and right here is where the affirmations and repetitions play their part it's the repetition of
the same chant the same incantation the same affirmations that lead to belief and once that belief becomes a deep conviction things begin to happen this is the same identical force and the same mechanics that hitler used in building up the", "start_timestamp": "00:07:47", "end_timestamp": "00:08:25", "start_second": 467, "end_second": 505, "url": "https://www.youtube.com/watch?v=1DYmgoij4FQ&t=467s", "title": "The Secret Knowledge Of Believing", "thumbnail": "https://i.ytimg.com/vi/1DYmgoij4FQ/hqdefault.jpg"} {"video_id": "1DYmgoij4FQ", "text": "german people to attack the world a reading of mein kampf will verify that dr renee favel a famous french psychologist explained it by saying that hitler had a remarkable understanding of the law of suggestion and its different forms of application it was with uncanny skill and masterly showmanship that he mobilized every instrument of propaganda in his mighty campaign of suggestion hitler openly stated that the psychology of suggestion was a terrible weapon in the hands of anyone who knew how to use it let's see how he worked it to make the", "start_timestamp": "00:08:25", "end_timestamp": "00:09:02", "start_second": 505, "end_second": 542, "url": "https://www.youtube.com/watch?v=1DYmgoij4FQ&t=505s", "title": "The Secret Knowledge Of Believing", "thumbnail": "https://i.ytimg.com/vi/1DYmgoij4FQ/hqdefault.jpg"} {"video_id": "1DYmgoij4FQ", "text": "germans believe what he wanted them to and once that belief took hold how they started their campaign of terror slogans huge signs posters masked flags appeared throughout germany hitler's picture was everywhere one reich one folk one leader became the chant it was heard everywhere today we own germany tomorrow the entire world the marching song of the german youths came from thousands of throats daily such slogans as germany has waited long enough stand up you are the aristocrats of the third reich germany is behind hitler to a man and", "start_timestamp": "00:09:02", 
"end_timestamp": "00:09:44", "start_second": 542, "end_second": 584, "url": "https://www.youtube.com/watch?v=1DYmgoij4FQ&t=542s", "title": "The Secret Knowledge Of Believing", "thumbnail": "https://i.ytimg.com/vi/1DYmgoij4FQ/hqdefault.jpg"} {"video_id": "1DYmgoij4FQ", "text": "hundreds others bombarded the people 24 hours a day from billboards sides of buildings the radio and the press every time they move turned around or spoke to one another they got the idea that they were a superior race and under the hypnotic influence of this belief strengthened by repeated suggestion they started out to prove it unfortunately for them there were other nations who also had strong national beliefs that eventually became the means of bringing defeat to the germans i know that it is difficult for the average", "start_timestamp": "00:09:44", "end_timestamp": "00:10:22", "start_second": 584, "end_second": 622, "url": "https://www.youtube.com/watch?v=1DYmgoij4FQ&t=584s", "title": "The Secret Knowledge Of Believing", "thumbnail": "https://i.ytimg.com/vi/1DYmgoij4FQ/hqdefault.jpg"} {"video_id": "1DYmgoij4FQ", "text": "person who knows nothing of the subject to accept the idea that all is within but surely the most materialistic person must realize that as far as he himself is concerned nothing exists on the outside plane unless he has knowledge of it or unless it becomes fixed in his consciousness it is the image created in his mind that gives reality to the world outside of him happiness sought by many and found by few therefore is a matter entirely within ourselves our environment and the everyday happenings of life have absolutely no", "start_timestamp": "00:10:22", "end_timestamp": "00:11:01", "start_second": 622, "end_second": 661, "url": "https://www.youtube.com/watch?v=1DYmgoij4FQ&t=622s", "title": "The Secret Knowledge Of Believing", "thumbnail": "https://i.ytimg.com/vi/1DYmgoij4FQ/hqdefault.jpg"} {"video_id": "1DYmgoij4FQ", "text": "effect on our happiness except as we 
permit mental images of the outside to enter our consciousness happiness is wholly independent of position wealth or material possessions it is a state of mind which we ourselves have the power to control and that control lies with our thinking emerson said what is the hardest task in the world to think obviously this is so when one considers that most of us are victims of mass thinking and feed upon suggestions from others we all know that the law of cause and effect is inviolable", "start_timestamp": "00:11:01", "end_timestamp": "00:11:43", "start_second": 661, "end_second": 703, "url": "https://www.youtube.com/watch?v=1DYmgoij4FQ&t=661s", "title": "The Secret Knowledge Of Believing", "thumbnail": "https://i.ytimg.com/vi/1DYmgoij4FQ/hqdefault.jpg"} {"video_id": "1DYmgoij4FQ", "text": "yet how many of us ever pause to consider its workings the entire course of a man's life has many times been changed by a single thought which coming to him in a flash became a mighty power that altered the whole current of human events history is replete with the stories of strong-minded resolutely willed individuals who steadfastly holding to their inner convictions have been able to inspire their fellow man and in the face of tremendous and determined opposition have literally created out of nothing great businesses", "start_timestamp": "00:11:43", "end_timestamp": "00:12:17", "start_second": 703, "end_second": 737, "url": "https://www.youtube.com/watch?v=1DYmgoij4FQ&t=703s", "title": "The Secret Knowledge Of Believing", "thumbnail": "https://i.ytimg.com/vi/1DYmgoij4FQ/hqdefault.jpg"} {"video_id": "1DYmgoij4FQ", "text": "huge empires and new worlds they had no monopoly of thought power you and every man and woman have it all you have to do is use it you will then become the person you envisage in your imagination know yourself know your power faithfully use the cards in the mirror techniques and you will get results far beyond your fondest expectations just believe that there is 
a genuine creative magic in believing and magic there will be for belief will supply the power which will enable you to succeed in everything you undertake back your", "start_timestamp": "00:12:17", "end_timestamp": "00:13:01", "start_second": 737, "end_second": 781, "url": "https://www.youtube.com/watch?v=1DYmgoij4FQ&t=737s", "title": "The Secret Knowledge Of Believing", "thumbnail": "https://i.ytimg.com/vi/1DYmgoij4FQ/hqdefault.jpg"} {"video_id": "YPfUiOMYOEE", "text": "hello there today we're looking at bootstrap your own latent a new approach to self supervised learning by researchers of DeepMind and Imperial College so almost no day goes by where we don't hear some sort of new self supervised algorithm right here this paper on a high level tries to get rid of the necessary negative samples when doing the contrastive loss for self supervised learning and they basically combined momentum contrast and SimCLR and then remove the negative samples and that seems to work pretty", "start_timestamp": "00:00:00", "end_timestamp": "00:00:38", "start_second": 0, "end_second": 38, "url": "https://www.youtube.com/watch?v=YPfUiOMYOEE&t=0s", "title": "BYOL: Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YPfUiOMYOEE/maxresdefault.jpg"} {"video_id": "YPfUiOMYOEE", "text": "well even though it's magic so yeah if you if you want to see how it's done stick around share the video out if you want other people to see how it's done and leave a comment this this one I really don't get what's going on so if you have ideas put them there I'll I'll read them through it'll be fun alright so they say we introduced bootstrap your own latent or BYOL a new approach to self supervised image representation learning ok so image representation learning is the simple task of taking an image and then feeding it through a function which", "start_timestamp": "00:00:38", "end_timestamp": "00:01:21",
"start_second": 38, "end_second": 81, "url": "https://www.youtube.com/watch?v=YPfUiOMYOEE&t=38s", "title": "BYOL: Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YPfUiOMYOEE/maxresdefault.jpg"} {"video_id": "YPfUiOMYOEE", "text": "is usually like a neural network let's let's just say this is a neural network and in fact all of these the community has sort of standardized this to be most of the time it's something like a ResNet 50 ok so what you want to do is you want to train a neural network like a ResNet 50 to give you a good representation of the image so this would be like H and H is a vector and H is a representation of this image and the representation should be such that you can then take this representation and solve many tasks with", "start_timestamp": "00:01:21", "end_timestamp": "00:01:59", "start_second": 81, "end_second": 119, "url": "https://www.youtube.com/watch?v=YPfUiOMYOEE&t=81s", "title": "BYOL: Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YPfUiOMYOEE/maxresdefault.jpg"} {"video_id": "YPfUiOMYOEE", "text": "it which either can be like linear you can put a linear classifier on top of the H or you can fine-tune the entire architecture to solve some other task the idea is if you have a large data set you may use this dataset to train these good representations of these images and then you can transfer learn transfer this to a task where you might be not have as much data and because you don't have as much data it's not enough to completely train an architecture like this but it is enough to take an architecture that's been trained with", "start_timestamp": "00:01:59", "end_timestamp": "00:02:35", "start_second": 119, "end_second": 155, "url": "https://www.youtube.com/watch?v=YPfUiOMYOEE&t=119s", "title": "BYOL: Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning (Paper 
Explained)", "thumbnail": "https://i.ytimg.com/vi/YPfUiOMYOEE/maxresdefault.jpg"} {"video_id": "YPfUiOMYOEE", "text": "the large data set and just adapt it to your small data set and that usually it tends to work pretty well this is called transfer learning this step here is called fine-tuning sometimes and it's sort of the approach that comes from natural language processing from these big transformers like Bert where you first train on a really big data set that might not be the data set that you want in the end but it's really big so you can sort of learn a lot of things from that data set and then the only thing left to do is to fine tune it to", "start_timestamp": "00:02:35", "end_timestamp": "00:03:12", "start_second": 155, "end_second": 192, "url": "https://www.youtube.com/watch?v=YPfUiOMYOEE&t=155s", "title": "BYOL: Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YPfUiOMYOEE/maxresdefault.jpg"} {"video_id": "YPfUiOMYOEE", "text": "basically adapt it to the nuances of your data set but it will have learned most things already and that's called representation learning so the goal is to learn a good representation now this self supervise here is also important because representation learning can be as easy as if this here is image net the image net data set contains like a million images all with labels you can simply train your ResNet 50 to predict the class this is the this is called supervised pre training or supervised representation learning and that works", "start_timestamp": "00:03:12", "end_timestamp": "00:03:50", "start_second": 192, "end_second": 230, "url": "https://www.youtube.com/watch?v=YPfUiOMYOEE&t=192s", "title": "BYOL: Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YPfUiOMYOEE/maxresdefault.jpg"} {"video_id": "YPfUiOMYOEE", "text": "pretty well but you need a labelled data set in self 
supervised learning you do not need labels what you do is you do self supervision and self supervision it has many there are many ways to do self supervision but what we'll see in this particular paper is that you will take an image and you'll make different variants of that same image so you'll take the image and you'll make many many variants of it well let's just say two so you have some procedure to sort of change the picture a little bit but it's essentially still the same and you", "start_timestamp": "00:03:50", "end_timestamp": "00:04:30", "start_second": 230, "end_second": 270, "url": "https://www.youtube.com/watch?v=YPfUiOMYOEE&t=230s", "title": "BYOL: Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YPfUiOMYOEE/maxresdefault.jpg"} {"video_id": "YPfUiOMYOEE", "text": "do that through data augmentation so this could be a random crop or you color jitter or you rotate it or something like this and then you exploit the fact that you know that these two things they should be still sort of the same image so once you send them through your through your encoder the representations of the two images they should be fairly close now let's actually read on right here BYOL relies on two neural networks referred to as online and target networks that interact and learn from each other from an augmented view of an", "start_timestamp": "00:04:30", "end_timestamp": "00:05:13", "start_second": 270, "end_second": 313, "url": "https://www.youtube.com/watch?v=YPfUiOMYOEE&t=270s", "title": "BYOL: Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YPfUiOMYOEE/maxresdefault.jpg"} {"video_id": "YPfUiOMYOEE", "text": "image we train the online network to predict the target representation of the same image under a different augmented view okay that's sort of what we saw so we have the same image under a different
augmented view so what does it mean what what I just said you make two versions of the same image that are slightly different and then their representation should be close now until this point we have always thought that this would degenerate because what if you think of this neural network that does this encoding to the hidden space", "start_timestamp": "00:05:13", "end_timestamp": "00:05:52", "start_second": 313, "end_second": 352, "url": "https://www.youtube.com/watch?v=YPfUiOMYOEE&t=313s", "title": "BYOL: Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YPfUiOMYOEE/maxresdefault.jpg"} {"video_id": "YPfUiOMYOEE", "text": "this ResNet 50 right here if it wants to if you simply want to make the two representations close what's the best thing it can do you can simply map all the hidden it can simply have the constant function H equals 0 or something like this just a constant function because then this loss here is always going to be 0 like perfect okay so no matter what image comes in if you always map it to the same thing you will always be close in representation space and therefore you always win that doesn't learn a really good", "start_timestamp": "00:05:52", "end_timestamp": "00:06:28", "start_second": 352, "end_second": 388, "url": "https://www.youtube.com/watch?v=YPfUiOMYOEE&t=352s", "title": "BYOL: Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YPfUiOMYOEE/maxresdefault.jpg"} {"video_id": "YPfUiOMYOEE", "text": "representation right so what people have done is they have included so-called negative samples where you'll say I'll take a different image from you know from this dataset but it's a different image than this image and I also do some maybe some data augmentation with that image and then I send this through the same encoder to also give me an H so this is the H let's call that H original
this is H plus because it's the same image but slightly differently augmented and this is H minus which is a different image", "start_timestamp": "00:06:28", "end_timestamp": "00:07:08", "start_second": 388, "end_second": 428, "url": "https://www.youtube.com/watch?v=YPfUiOMYOEE&t=388s", "title": "BYOL: Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YPfUiOMYOEE/maxresdefault.jpg"} {"video_id": "YPfUiOMYOEE", "text": "and now the task is let's make those two very similar to each other but let's distance them from this other one so we want we want this to be as far away as possible and these two to be close to each other now the network can't simply map everything to a constant function anymore right it needs to actually do something to make these be close together and this be far apart and the combination of this together with the augmentation procedure that goes into augmenting the images has been sort of a good combo to learn good representations", "start_timestamp": "00:07:08", "end_timestamp": "00:07:51", "start_second": 428, "end_second": 471, "url": "https://www.youtube.com/watch?v=YPfUiOMYOEE&t=428s", "title": "BYOL: Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YPfUiOMYOEE/maxresdefault.jpg"} {"video_id": "YPfUiOMYOEE", "text": "and a lot of papers have alluded to the fact that this is so the negative samples are to do not have these degeneracy right so to not have the simple solutions but the fact that the representation then is actually good like is good for image class image tasks down the line probably comes from the fact of these augmentations right here and there's a lot of evidence of from the fact that depending on which augmentations we choose these representations are going to be better or worse for example random cropping of", "start_timestamp": "00:07:51", "end_timestamp": "00:08:28", 
"start_second": 471, "end_second": 508, "url": "https://www.youtube.com/watch?v=YPfUiOMYOEE&t=471s", "title": "BYOL: Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YPfUiOMYOEE/maxresdefault.jpg"} {"video_id": "YPfUiOMYOEE", "text": "an image so the random sub like taking a random crop from the image tends to be very very beneficial because so here this is the same image twice right let's say we take a random crop here and one up here it's sort of maybe there's an overlap here in the middle right so it sort of needs to understand that these random crops sir sort of needs to communicate between these two places in these random crops so the representation has to somehow make sure that the object that is overlapping here is somehow represented but it can't represent it", "start_timestamp": "00:08:28", "end_timestamp": "00:09:12", "start_second": 508, "end_second": 552, "url": "https://www.youtube.com/watch?v=YPfUiOMYOEE&t=508s", "title": "BYOL: Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YPfUiOMYOEE/maxresdefault.jpg"} {"video_id": "YPfUiOMYOEE", "text": "just as a pixel value because it doesn't know where the crops come from so there's a lot of evidence that these representations are the thing that's responsible for making the representations so good okay now this paper simply says do we really need these negative samples right here let's just get rid of them and with a couple of tricks this seems to work in here this is this is what seems like magic to me because as we go forward think of it nothing nothing keeps this model right here from doing the degenerate solution h equals constant", "start_timestamp": "00:09:12", "end_timestamp": "00:09:59", "start_second": 552, "end_second": 599, "url": "https://www.youtube.com/watch?v=YPfUiOMYOEE&t=552s", "title": "BYOL: Bootstrap Your Own Latent: A New Approach to 
Self-Supervised Learning (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YPfUiOMYOEE/maxresdefault.jpg"} {"video_id": "YPfUiOMYOEE", "text": "nothing right now for some reason it doesn't do that and I have the feeling that this is a super delicate balance that you have to do because when you train when you start out it's probably not the constant function right it's probably some some distribution and then simply by the fact that you train it and kind of keep it in the so this is certainly an optimal solution but you might be like in some sort of local minimum once you start training and you simply don't get out of it during training and that's why the network has", "start_timestamp": "00:09:59", "end_timestamp": "00:10:36", "start_second": 599, "end_second": 636, "url": "https://www.youtube.com/watch?v=YPfUiOMYOEE&t=599s", "title": "BYOL: Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YPfUiOMYOEE/maxresdefault.jpg"} {"video_id": "YPfUiOMYOEE", "text": "an easier time step by step as it updates itself in very small incremental steps it has an easier time actually going for the good representation then it has to see this solution right here and converge to that but yeah it seems delicate so what are they doing they are taking that idea of taking an input image right here and so by the way why is it important that there are no negative samples because now the question is always or where do you get these negative samples from right should they be uniformly sampled", "start_timestamp": "00:10:36", "end_timestamp": "00:11:15", "start_second": 636, "end_second": 675, "url": "https://www.youtube.com/watch?v=YPfUiOMYOEE&t=636s", "title": "BYOL: Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YPfUiOMYOEE/maxresdefault.jpg"} {"video_id": "YPfUiOMYOEE", "text": "should we keep a buffer should we order them 
there is this task of hard negative mining where you say Oh any old negative won't do it's actually better if we take negatives that are you know just hard enough there are curriculum learning problems and so on so it would be best to actually just get rid of these negative things so that's why we want to get rid of them so that's the approach BYOL bootstrap your own latent there is the input image we take one image at a time and you apply two different random augmentations to it", "start_timestamp": "00:11:15", "end_timestamp": "00:11:53", "start_second": 675, "end_second": 713, "url": "https://www.youtube.com/watch?v=YPfUiOMYOEE&t=675s", "title": "BYOL: Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YPfUiOMYOEE/maxresdefault.jpg"} {"video_id": "YPfUiOMYOEE", "text": "right so you create two slightly different variants of that image through augmentation and again this can be something like a random crop it can be a horizontal flip randomly you color jitter you solarize you blur and so on there are all these variants of data augmentation and the fact that down the line the representation of these two things has to be close to each other I think these random augmentations here are responsible to make the representations
for different you know at different locations it doesn't matter where in the image this object is I simply need to have my hidden representation have this particular object in the image and that's what makes it powerful okay I've said that enough now then you have these two slightly different versions and then you map it through", "start_timestamp": "00:12:36", "end_timestamp": "00:13:11", "start_second": 756, "end_second": 791, "url": "https://www.youtube.com/watch?v=YPfUiOMYOEE&t=756s", "title": "BYOL: Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YPfUiOMYOEE/maxresdefault.jpg"} {"video_id": "YPfUiOMYOEE", "text": "your encoder okay let's go the top path first you see the bottom path has the same encoder but the parameters are different and this is going to be one of the crucial elements right here so this here are your actual parameters that you learn and this here are what are called the target parameters now after each and you can see this for all of these components right here so what happens is that the target parameters are basically a copy of these what's what are called the online parameters okay so after each step you copy over from the online", "start_timestamp": "00:13:11", "end_timestamp": "00:13:49", "start_second": 791, "end_second": 829, "url": "https://www.youtube.com/watch?v=YPfUiOMYOEE&t=791s", "title": "BYOL: Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YPfUiOMYOEE/maxresdefault.jpg"} {"video_id": "YPfUiOMYOEE", "text": "parameters you copy over to the target parameters you never learn the target parameters you simply copy them after each step now you don't copy them outright what you do is you do an exponential moving average so the target parameters are always going to be sort of a lagging average of your online parameters and that idea comes from the momentum contrast principle 
where the reasoning sort of behind it is that you need a kind of a stable you kind of need a stable representation as a target but I think it hasn't been fully explored or", "start_timestamp": "00:13:49", "end_timestamp": "00:14:30", "start_second": 829, "end_second": 870, "url": "https://www.youtube.com/watch?v=YPfUiOMYOEE&t=829s", "title": "BYOL: Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YPfUiOMYOEE/maxresdefault.jpg"} {"video_id": "YPfUiOMYOEE", "text": "explained why exactly that is so helpful but we just know that if if we have the target to be not the same as the the online parameters but actually a kind of a stable version of the past of the online parameters then that tends to work well again it's kind of the same principle as with the augmentations with the augmentations we have two different versions of the same image and now with this procedure here we sort of have two different versions of the same neural network but they're slightly different right and this idea you know has been", "start_timestamp": "00:14:30", "end_timestamp": "00:15:10", "start_second": 870, "end_second": 910, "url": "https://www.youtube.com/watch?v=YPfUiOMYOEE&t=870s", "title": "BYOL: Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YPfUiOMYOEE/maxresdefault.jpg"} {"video_id": "YPfUiOMYOEE", "text": "around for much longer like the the first Q deep Q networks and so on they had the same principles where they had the the network that they actually learned and then the target network that is copied over every such-and-such episodes and so on so this this seems to work seems to be a fundamental principle that seems to work all right so we take our two slightly different augmented versions of the same image and we run them through our two slightly different encoders to obtain two representations now this thing right", 
"start_timestamp": "00:15:10", "end_timestamp": "00:15:49", "start_second": 910, "end_second": 949, "url": "https://www.youtube.com/watch?v=YPfUiOMYOEE&t=910s", "title": "BYOL: Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YPfUiOMYOEE/maxresdefault.jpg"} {"video_id": "YPfUiOMYOEE", "text": "here that's going to be our representation so after this procedure we discard the entire thing right here except that so this here is your whatever your ResNet 50 okay after that follows a projection and the projection is here to reduce the dimensionality and honestly I'm actually not sure why it is here because you can do it without like technically the algorithm doesn't require this projection so you can imagine the algorithm without the projection but just really quickly the projection simply brings down the representation", "start_timestamp": "00:15:49", "end_timestamp": "00:16:33", "start_second": 949, "end_second": 993, "url": "https://www.youtube.com/watch?v=YPfUiOMYOEE&t=949s", "title": "BYOL: Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YPfUiOMYOEE/maxresdefault.jpg"} {"video_id": "YPfUiOMYOEE", "text": "which is like 2048 dimensional that comes out of the ResNet 50 it is a two layer neural network that first pumps this up to 4096 and then compresses it down to 256 dimensions okay so that's the projection Network again there is a part that's learned and then the target projector is simply the exponential moving average of the online projector but again this is why exactly this is here probably simply because it works right but probably because there is no there is no distinction because", "start_timestamp": "00:16:33", "end_timestamp": "00:17:16", "start_second": 993, "end_second": 1036, "url": "https://www.youtube.com/watch?v=YPfUiOMYOEE&t=993s", "title": "BYOL: Bootstrap
Your Own Latent: A New Approach to Self-Supervised Learning (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YPfUiOMYOEE/maxresdefault.jpg"} {"video_id": "YPfUiOMYOEE", "text": "you don't have different losses you simply back propagate through everything and then train everything so there is no logical distinction between the projection and the representation other than you have a different dimensionality but maybe that's the point here that you make a different dimensionality even though you could do the rest in this 2048 space yeah so for now just this doesn't exist let's just say this doesn't exist and we just work with this representation here let's call these z and z prime okay so what happens is we take", "start_timestamp": "00:17:16", "end_timestamp": "00:17:52", "start_second": 1036, "end_second": 1072, "url": "https://www.youtube.com/watch?v=YPfUiOMYOEE&t=1036s", "title": "BYOL: Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YPfUiOMYOEE/maxresdefault.jpg"} {"video_id": "YPfUiOMYOEE", "text": "the representation and now we have one neural network the predictor right here that takes the representation of one of the image versions and it simply tries to predict the representation of the other image version so what you want is that Q of z equals z prime okay and if we expand that is that Q of F of z is equal to F target of z prime and if we expand that even further you can see that Q I'll just write Q and F for now Q of F of A which is an augmentation of Z should be one bracket two brackets three brackets should be F of", "start_timestamp": "00:17:52", "end_timestamp": "00:18:56", "start_second": 1072, "end_second": 1136, "url": "https://www.youtube.com/watch?v=YPfUiOMYOEE&t=1072s", "title": "BYOL: Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YPfUiOMYOEE/maxresdefault.jpg"}
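The predictor-plus-EMA-target setup described in the surrounding segments can be sketched in a few lines. This is a toy NumPy illustration of the general idea only, not the paper's actual implementation: the single linear maps standing in for the encoders, the vector dimension 8, and the decay rate 0.99 are all made-up stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(v):
    # l2-normalize a vector (the "bar" normalization in the BYOL loss)
    return v / np.linalg.norm(v)

# toy stand-ins: single linear maps in place of ResNet encoder + projection
online = rng.normal(size=(8, 8))   # online parameters (trained by backprop)
target = online.copy()             # target network starts as a copy of the online one
predictor = np.eye(8)              # Q: predicts the target repr. from the online repr.

def byol_loss(x1, x2):
    # symmetric loss: Q(online(x1)) should match target(x2), and vice versa
    z1, z2 = online @ x1, online @ x2   # online representations
    t1, t2 = target @ x1, target @ x2   # target representations (no gradient in practice)
    d1 = normalize(predictor @ z1) - normalize(t2)
    d2 = normalize(predictor @ z2) - normalize(t1)
    return float(d1 @ d1 + d2 @ d2)

def ema_update(tau=0.99):
    # target parameters trail the online parameters as an exponential moving average
    global target
    target = tau * target + (1.0 - tau) * online

x = rng.normal(size=8)             # one "image"
x1 = x + 0.1 * rng.normal(size=8)  # two different "augmentations" of the same image
x2 = x + 0.1 * rng.normal(size=8)
loss = byol_loss(x1, x2)
ema_update()
```

With tau close to 1 the target network changes only slowly, which is the stable regression target the video is talking about.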
{"video_id": "YPfUiOMYOEE", "text": "a of Z sorry not see that's the image X all right so this makes a lot of sense you're simply with Q since these are all different here so f is the target instead of these online parameters a is also different it's a different augmentation that you do but the X is the same okay so the Q simply tries to somehow negate this augmentation and this difference between the target and the online parameters but you don't tell the queue which augmentation was used and you don't tell the Q what are the exact parameters of that network so what", "start_timestamp": "00:18:56", "end_timestamp": "00:19:45", "start_second": 1136, "end_second": 1185, "url": "https://www.youtube.com/watch?v=YPfUiOMYOEE&t=1136s", "title": "BYOL: Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YPfUiOMYOEE/maxresdefault.jpg"} {"video_id": "YPfUiOMYOEE", "text": "the Q has to do is it has to somehow it's like it's like a it has to take its best guess right so basically the Q is trained to output the expected value of the representation right the expected of the representation f of a of X under all of the different possible image augmentations and that's why it learns to ignore these augmentations so your entire goal with these methods is you learn to ignore these augmentations so you want to learn some method that is independent of the augmentations so by crafting the augmentations in a smart", "start_timestamp": "00:19:45", "end_timestamp": "00:20:35", "start_second": 1185, "end_second": 1235, "url": "https://www.youtube.com/watch?v=YPfUiOMYOEE&t=1185s", "title": "BYOL: Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YPfUiOMYOEE/maxresdefault.jpg"} {"video_id": "YPfUiOMYOEE", "text": "way we can make these representations contain a lot of semantic information because what we want to do with the augmentation is 
basically we want to destroy all the non-segmenting information sorry non-semantic information and random cropping is one of those methods horizontal flipping is one of those methods because we say well whether an image goes left to right or right to left most of the time the semantics are the same the pixels are different but the semantics are the same so by putting an augmentation in there", "start_timestamp": "00:20:35", "end_timestamp": "00:21:06", "start_second": 1235, "end_second": 1266, "url": "https://www.youtube.com/watch?v=YPfUiOMYOEE&t=1235s", "title": "BYOL: Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YPfUiOMYOEE/maxresdefault.jpg"} {"video_id": "YPfUiOMYOEE", "text": "we learn to ignore that augmentation because our representation now needs to be predictable right because we learn Q to predict the representation under the expectation of our augmentations and that means it can't be dependent on one particular augmentation okay it learns to ignore it so that's basically what's happening here again there is nothing keeping this from simply collapsing to a trivial solution and it's probably a combination of the initialization and the learning procedure itself that it you", "start_timestamp": "00:21:06", "end_timestamp": "00:21:51", "start_second": 1266, "end_second": 1311, "url": "https://www.youtube.com/watch?v=YPfUiOMYOEE&t=1266s", "title": "BYOL: Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YPfUiOMYOEE/maxresdefault.jpg"} {"video_id": "YPfUiOMYOEE", "text": "know goes on in little steps one by one that keeps it in the realm of rather having to like it's easier to learn a good representation than it is to collapse to that solution okay so again the components are an image then you augment it differently then you run it through different encoders but the encoders are similar in
the fact that one is the exponential moving average of the other and then you try to predict one from the other and that ultimately makes the representation be independent of the augmentation and that means that", "start_timestamp": "00:21:51", "end_timestamp": "00:22:32", "start_second": 1311, "end_second": 1352, "url": "https://www.youtube.com/watch?v=YPfUiOMYOEE&t=1311s", "title": "BYOL: Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YPfUiOMYOEE/maxresdefault.jpg"} {"video_id": "YPfUiOMYOEE", "text": "the representation can only include things that are not destroyed by the augmentations and if you construct the augmentations smartly that means you only retain the semantic information that's it so the loss function is pretty simple as you can see right here what you want is and this bar is a normalization what you want is the l2 norm between this representation and the Q of that representation so the Q simply tries to predict the other representation and you do that for both ways so you once stick the image in", "start_timestamp": "00:22:32", "end_timestamp": "00:23:12", "start_second": 1352, "end_second": 1392, "url": "https://www.youtube.com/watch?v=YPfUiOMYOEE&t=1352s", "title": "BYOL: Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YPfUiOMYOEE/maxresdefault.jpg"} {"video_id": "YPfUiOMYOEE", "text": "here and try to predict the other one and you do it vice versa so you get two loss components each time it's a symmetric loss okay and that's it that's the method and they beat all the other self supervised methods and they get pretty close to the supervised representation learning method as you can see right here as the number of parameters goes up in their model so one of them is ResNet-50 but I'm gonna guess this one right here but you can also get to higher architectures and
then it appears to work even better", "start_timestamp": "00:23:12", "end_timestamp": "00:23:49", "start_second": 1392, "end_second": 1429, "url": "https://www.youtube.com/watch?v=YPfUiOMYOEE&t=1392s", "title": "BYOL: Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YPfUiOMYOEE/maxresdefault.jpg"} {"video_id": "YPfUiOMYOEE", "text": "and come even closer to this supervised baseline this could be because you know if you have more parameters technically in a supervised method you would also need more labeled images maybe and therefore it doesn't scale as well I don't know there is a lot of unclarity in this research like all they show is that their numbers are good which is cool right and it's cool that you don't need the negative samples anymore and it actually doesn't collapse when you do that kind of stuff but there's a lot of I don't", "start_timestamp": "00:23:49", "end_timestamp": "00:24:25", "start_second": 1429, "end_second": 1465, "url": "https://www.youtube.com/watch?v=YPfUiOMYOEE&t=1429s", "title": "BYOL: Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YPfUiOMYOEE/maxresdefault.jpg"} {"video_id": "YPfUiOMYOEE", "text": "know there's a lot of things here for example we use a batch size of 4096 split over 512 TPU v3 cores with this setup training takes approximately 8 hours for ResNet-50 so they train eight hours on 512 TPUs just imagine that so that's a sort of crazy amount of computation again going into these models and then the second thing here is that you can see that there are some things missing right here and there are all these annotations which probably means that they take these numbers from those papers now they allude to the fact", "start_timestamp": "00:24:25", "end_timestamp": "00:25:13", "start_second": 1465, "end_second": 1513, "url":
"https://www.youtube.com/watch?v=YPfUiOMYOEE&t=1465s", "title": "BYOL: Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YPfUiOMYOEE/maxresdefault.jpg"} {"video_id": "YPfUiOMYOEE", "text": "that they try to follow their protocol as closely as possible but I mean that's never that's never given or almost never unless they release like the exact code and even then there are still going to be differences in even like you'd have to replicate the exact thing on the exact same number of TPU cores and whatnot so I I highly like these numbers seem to be I'm not sure especially if you then go and look and at some point they actually do reproduce the same clear baseline so you can see right here that they have a own implementation of", "start_timestamp": "00:25:13", "end_timestamp": "00:26:00", "start_second": 1513, "end_second": 1560, "url": "https://www.youtube.com/watch?v=YPfUiOMYOEE&t=1513s", "title": "BYOL: Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YPfUiOMYOEE/maxresdefault.jpg"} {"video_id": "YPfUiOMYOEE", "text": "sim clear and they actually compare this to the numbers that they find in the same clear paper and you can see for example here there's like four percentage points that the the their implementation of seeing clear gains above this implementation and if you look at this supervised baseline that's also from that paper and there is a graph further down where they also implement their own version of the their own version of the supervised baseline I forget here so you can see that between the supervised in that paper and the", "start_timestamp": "00:26:00", "end_timestamp": "00:26:40", "start_second": 1560, "end_second": 1600, "url": "https://www.youtube.com/watch?v=YPfUiOMYOEE&t=1560s", "title": "BYOL: Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning (Paper Explained)", 
"thumbnail": "https://i.ytimg.com/vi/YPfUiOMYOEE/maxresdefault.jpg"} {"video_id": "YPfUiOMYOEE", "text": "supervised of them sometimes there's like a giant gap right here for the same model it seems so all of these numbers um I'm not sure you should put too much weight on the fact that this is now outperforming the other methods I would not put like unless this is like SuperDuper replicated very often I would not put a lot of weight on the fact that it is better what I would put a lot of weight on is the fact that it works at all and and achieves you know good performance and there is more they make they have like experiments right here that show", "start_timestamp": "00:26:40", "end_timestamp": "00:27:21", "start_second": 1600, "end_second": 1641, "url": "https://www.youtube.com/watch?v=YPfUiOMYOEE&t=1600s", "title": "BYOL: Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YPfUiOMYOEE/maxresdefault.jpg"} {"video_id": "YPfUiOMYOEE", "text": "that their method they be a B y ou L is much more resistant to like changes in hyper parameters so here you can see that it falls off much later when you reduce the batch size which makes sense right because seem clear is one of these methods that uses negative samples and for negative samples it uses the other samples in the mini batch now if you have less samples in the mini batch that means you have a less representative distribution of your entire data set as negative samples and therefore if you increase as decrease the mini batch then", "start_timestamp": "00:27:21", "end_timestamp": "00:27:58", "start_second": 1641, "end_second": 1678, "url": "https://www.youtube.com/watch?v=YPfUiOMYOEE&t=1641s", "title": "BYOL: Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YPfUiOMYOEE/maxresdefault.jpg"} {"video_id": "YPfUiOMYOEE", "text": "this drops off and also they show that 
for example their method is much more robust to the removal of a couple of these image augmentations so all of this I find actually pretty cool but the actual numbers here first I'm not super duper interested that they get like two or one points more in something but they do perform like a lot of experiments and that shows that you can apply the method to different things it's not only like in one setting so that's pretty cool it works at least you can say it works at least as well as other", "start_timestamp": "00:27:58", "end_timestamp": "00:28:43", "start_second": 1678, "end_second": 1723, "url": "https://www.youtube.com/watch?v=YPfUiOMYOEE&t=1678s", "title": "BYOL: Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YPfUiOMYOEE/maxresdefault.jpg"} {"video_id": "YPfUiOMYOEE", "text": "methods and it is a lot easier because you don't have these negative sample things now the last quarrel I have with the paper and where is it where is it somewhere they say that we release the code but they don't release the code they release the pseudocode in the appendix so I mean there are reasons why you sometimes want to release pseudocode and that's if an algorithm is so high level and so simple in its high-levelness and so modular to be fleshed out that you can't like it makes more sense but here it's", "start_timestamp": "00:28:43", "end_timestamp": "00:29:33", "start_second": 1723, "end_second": 1773, "url": "https://www.youtube.com/watch?v=YPfUiOMYOEE&t=1723s", "title": "BYOL: Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YPfUiOMYOEE/maxresdefault.jpg"} {"video_id": "YPfUiOMYOEE", "text": "like pseudocode in JAX and come on is it really that competitively advantageous to retain your code it's just not reproducible with this you know that they have like 50 billion
hacks in their code and yeah so DeepMind has this history of just not releasing like publishing behind paywalls and just giving pseudocode that has lots of mistakes in them like the MuZero pseudocode you can't even like run it in its basic form if you fill in the things it's a bit annoying anyway the method", "start_timestamp": "00:29:33", "end_timestamp": "00:30:15", "start_second": 1773, "end_second": 1815, "url": "https://www.youtube.com/watch?v=YPfUiOMYOEE&t=1773s", "title": "BYOL: Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YPfUiOMYOEE/maxresdefault.jpg"} {"video_id": "YPfUiOMYOEE", "text": "itself seems promising for representation learning as I said especially because it's pretty simple it still heavily relies on these augmentation methods and that's what they say right here nevertheless BYOL remains dependent on existing sets of augmentations that are specific to vision applications to generalize BYOL to other modalities it is necessary to obtain similarly suitable augmentations for each of them designing such augmentations may require significant effort and expertise therefore automating the search for these", "start_timestamp": "00:30:15", "end_timestamp": "00:30:50", "start_second": 1815, "end_second": 1850, "url": "https://www.youtube.com/watch?v=YPfUiOMYOEE&t=1815s", "title": "BYOL: Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YPfUiOMYOEE/maxresdefault.jpg"} {"video_id": "YPfUiOMYOEE", "text": "augmentations would be an important next step to generalize to other modalities and I'm not sure if you can do this automating the search for these augmentations I guess you can do it if you have like a supervised data set and then you can search and then you can use those augmentations for the unsupervised but it seems a bit bootstrappy no pun intended right
here I think the power of these representations again comes from the fact that we have these augmentations carefully constructed so oh yes", "start_timestamp": "00:30:50", "end_timestamp": "00:31:25", "start_second": 1850, "end_second": 1885, "url": "https://www.youtube.com/watch?v=YPfUiOMYOEE&t=1850s", "title": "BYOL: Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YPfUiOMYOEE/maxresdefault.jpg"} {"video_id": "YPfUiOMYOEE", "text": "the last thing broader impact statement just read this like try to estimate the perplexity of this broader impact statement let's go the presented research should be categorized as research in the field of unsupervised learning this work may inspire new algorithms theoretical and experimental investigation the algorithm presented here can be used for many different vision applications and a particular use may have both positive or negative impacts which is known as the dual use problem besides as vision datasets could be biased the", "start_timestamp": "00:31:25", "end_timestamp": "00:32:03", "start_second": 1885, "end_second": 1923, "url": "https://www.youtube.com/watch?v=YPfUiOMYOEE&t=1885s", "title": "BYOL: Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YPfUiOMYOEE/maxresdefault.jpg"} {"video_id": "YPfUiOMYOEE", "text": "representation learned by BYOL could be susceptible to replicate these biases like come on so people who advocated for making everyone do this is this what you wanted is this like is this a satisfactory result for you and if you have this as a reviewer is this okay or not I mean let's just cross out some words here blank that's blank like field let's just put field or machine learning why not machine learning machine learning this work may inspire new algorithms yes the algorithm presented here can be used for many different", "start_timestamp":
"00:32:03", "end_timestamp": "00:32:43", "start_second": 1923, "end_second": 1963, "url": "https://www.youtube.com/watch?v=YPfUiOMYOEE&t=1923s", "title": "BYOL: Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YPfUiOMYOEE/maxresdefault.jpg"} {"video_id": "YPfUiOMYOEE", "text": "machine learning applications and the particular use may have both men - yes besides as datasets could be biased representation learned by this paper could be susceptible to replicate these biases well there is a copy pasting that you can apparently put into any and all papers that you write from that one and hey deepmind's doing it so you know there you go okay may be a bit cynical but I'm I like I told you this would happen I told you and you know okay so that was it for my comments right here they do have like a giant ton of", "start_timestamp": "00:32:43", "end_timestamp": "00:33:28", "start_second": 1963, "end_second": 2008, "url": "https://www.youtube.com/watch?v=YPfUiOMYOEE&t=1963s", "title": "BYOL: Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/YPfUiOMYOEE/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "hi there today we'll look at self training with noisy student improves image net classification by chidze sie mintan luang eduard hovi and kwok v li so this paper takes an imagenet classifier that's been trained on the imagenet dataset and uses that classifier as a teacher model to label a whole bunch of unlabeled images and then it trains a student model that is larger than the original teacher model on those teacher labeled images and that turns out to improve the classification on the imagenet validation set now that", "start_timestamp": "00:00:00", "end_timestamp": "00:00:38", "start_second": 0, "end_second": 38, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=0s", "title": "Self-training with Noisy Student 
improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "there are a couple of things that make this all work and today we're going to explore how this paper does it and what they say is important if you enjoy content like this as always don't hesitate to share it out or tell your friends about it and if you're not subscribed yet then do so um i would appreciate that and you'll get more content so win-win so this paper is about semi-supervised learning in effect so it's at the intersection actually of semi-supervised learning knowledge distillation and transfer", "start_timestamp": "00:00:38", "end_timestamp": "00:01:19", "start_second": 38, "end_second": 79, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=38s", "title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "learning so what do we mean by semi-supervised learning usually in supervised learning you'll have some sort of data set and the data set will contain let's say it's imagenet an image data set so the data set will contain images this is an image with like some sort of cat on it and it will contain the labels according to that so cat now in semi-supervised learning you assume that so this is supervised learning in semi-supervised learning you assume that only part of your data set has the labels so like only this part down here has the", "start_timestamp": "00:01:19", "end_timestamp": "00:01:59", "start_second": 79, "end_second": 119, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=79s", "title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "labels and the upper part does not have the
labels so that's semi-supervised learning it's often the case when it's very expensive to get labels so you can only get labels for a couple of images in your data set but very often in semi-supervised learning you still assume it's the same data set there is a slightly different setup here that's called transfer learning so in transfer learning what you'll have is you'll have your data set that has the labels but it's very small so you'll notice i've drawn it smaller", "start_timestamp": "00:01:59", "end_timestamp": "00:02:31", "start_second": 119, "end_second": 151, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=119s", "title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "that means you have very little that is also the case when it's very expensive to get labels but also it's expensive to get the data itself this is often the case like say in medical data where not only is it expensive to get labels for like a ct scan it's actually expensive to get the ct scan so what the goal in transfer learning is is to say well i do i do have only this small data set but i do have this giant other data set over here now can't i it's not the same it's maybe they're not ct so these are ct scans maybe these are", "start_timestamp": "00:02:31", "end_timestamp": "00:03:10", "start_second": 151, "end_second": 190, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=151s", "title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "x-rays right they're fairly similar similar technology um if you slice the ct it will give you sort of an x-ray can i you know train my model pre-train my model on x-ray data and then fine-tune it on the ct data so that's called uh transfer learning usually now 
this can be done with or without labels so it can be that for the x-ray data set you do have the labels or you don't have the labels there are techniques for all of those now what we're going to look at today is kind of this situation right here it's the transfer learning situation", "start_timestamp": "00:03:10", "end_timestamp": "00:03:52", "start_second": 190, "end_second": 232, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=190s", "title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "where you do not have the labels for this x-ray data set but other than in this x-ray example what we're going to look at is the small data set is going to be our imagenet database so our original picture with label database so you'll see immediately the difference here is that in the transfer learning setting we usually assume that the data set we want to train on is fairly small here you know imagenet is already sizeable but what we have is we have a much larger database of unlabeled images that we can just get", "start_timestamp": "00:03:52", "end_timestamp": "00:04:32", "start_second": 232, "end_second": 272, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=232s", "title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "from the internet so we can scrape the internet for any kind of pictures and that will be our unlabeled data set and what we'll try to do is somehow incorporate this unlabeled data set here into the training process to get better on the imagenet data set okay so this is the the problem statement is you have the imagenet dataset and you have a second much larger data set of unlabeled images and you somehow want to make use of them so i hope you see how this is 
sort of connected to the others it's essentially sort of a", "start_timestamp": "00:04:32", "end_timestamp": "00:05:05", "start_second": 272, "end_second": 305, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=272s", "title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "transfer semi-supervised learning setting but with the exception that usually in transfer learning you assume that the the labeled data set is like super small which is not the case here and that's going to result in us being able to apply a different technique so this different technique is called the noisy student now usually what you might do in a transfer learning setting is you might want to start with that big data set right because that's the data set that's sizeable enough to allow you to train a really big model on it and then you", "start_timestamp": "00:05:05", "end_timestamp": "00:05:39", "start_second": 305, "end_second": 339, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=305s", "title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "fine tune and you you sort of hope that the information transfers over here on the other hand what we want to do is we start with the imagenet data set so first we train this in a supervised learning fashion into our model now this model is going to be called the teacher model we know how to do this we know to train imagenet models right so we can train this into a teacher model that has a reasonable accuracy on the imagenet data set step two we're going to take that big data set over here and use the teacher model to label", "start_timestamp": "00:05:39", "end_timestamp": "00:06:17", "start_second": 339, "end_second": 377, "url": 
"https://www.youtube.com/watch?v=q7PjrmGNx5A&t=339s", "title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "the unlabeled images so for each image for each image coming in here the teacher so maybe this is again another cat the teacher will say that's a cat okay so that gives you the big data set where now you have images along with labels just the labels aren't true labels they're generated by the teacher and then in the third step you train this big data set you train on this big data set and that's what you call your student model and then the student model in this paper will see how can we make it such that the student", "start_timestamp": "00:06:17", "end_timestamp": "00:07:03", "start_second": 377, "end_second": 423, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=377s", "title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "is then better at the original imagenet task than the teacher ever was which seems counterintuitive at first because all of the information that the student is trained from is basically what the teacher already knows right all the labels here come from the teacher therefore the student shouldn't be able to outperform the teacher but in this case the student will be able to outperform the teacher and their argument here is that this is mainly due to the fact that you use noise in this training procedure so when you train the student what", "start_timestamp": "00:07:03", "end_timestamp": "00:07:39", "start_second": 423, "end_second": 459, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=423s", "title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": 
"https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "you'll do is you'll use noise and one of the types of noise is that you severely augment this data right here in order to train the student now we've known for a long time that data augmentation for example in the frameworks of self-supervised learning and so on can have a very large benefit to training and here the fact that we incorporate this at extra data and we use noise and augmentations on it is going to result in a student that can sort of learn more about the data than than the teacher did know okay this this is basically it and", "start_timestamp": "00:07:39", "end_timestamp": "00:08:21", "start_second": 459, "end_second": 501, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=459s", "title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "as you can see this is kind of their main final result where they say on imagenet our top one accuracy sort of increases right here and uh even on these kind of subsets of imagenet or these are sort of corrupted sets of imagenet they make even more substantial improvements as you can see here now we'll go into what these corrupted subsets are but you know just for now these here are very difficult variants of imagenet they can be severely corrupted or or distorted and so on and you can see that the model improves severely over", "start_timestamp": "00:08:21", "end_timestamp": "00:09:00", "start_second": 501, "end_second": 540, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=501s", "title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "the previous state of the art which basically means that this model is more robust and that's a 
direct consequence of the noise now one last thing i should say is that the student here is also larger than the teacher so that's also one thing that makes the student better the student model is larger than the teacher model in its architecture so in combination with the noise that means the student model is probably able to capture more of the variance of the data it's", "start_timestamp": "00:09:00", "end_timestamp": "00:09:38", "start_second": 540, "end_second": 578, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=540s", "title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "larger it has more parameters it can learn more about the data together with the noise it can probably be more robust and that's what makes it generalize better and we'll also see here it's more robust to these transformations and it's also going to be more robust to adversarial perturbations so the technique again is illustrated here and as we said it's pretty simple step one train the teacher model with label data as you would step two you infer the pseudo labels on unlabeled data step three", "start_timestamp": "00:09:38", "end_timestamp": "00:10:20", "start_second": 578, "end_second": 620, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=578s", "title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "with step three over here you train an equal or larger student model with combined data and noise injected so they use the original label data here and the pseudo-labeled data right here in order to train the student but still the
student doesn't have more label information than the teacher had it simply has this teacher-labeled unlabeled data also to train on now the crucial part here is well first of all that the student can be larger and second of all that there", "start_timestamp": "00:10:20", "end_timestamp": "00:11:00", "start_second": 620, "end_second": 660, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=620s", "title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "can be noise and the noise comes in three different forms so first of all you use data augmentation which we've already seen this is sort of like random cropping or mild rotations color jitter whatever they use rand augment here which is a specific technique to apply these augmentations they use dropout which is a fairly old technique where in the student model that you train you randomly drop out connections which makes it more robust and more generalizing and then you also use stochastic depth now stochastic
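The four-step procedure described here (train a teacher, pseudo-label unlabeled data with that teacher, train an equal-or-larger noised student on the combined data, then loop) can be sketched as a minimal runnable loop. The `fit` helper below is a hypothetical stand-in for real noised training, not the paper's code:

```python
from collections import Counter

def fit(size, data, noised):
    # stand-in "training": a constant classifier predicting the majority label;
    # real training would minimize cross-entropy under data/model noise
    majority = Counter(label for _, label in data).most_common(1)[0][0]
    return lambda x: majority

def noisy_student(labeled, unlabeled, sizes, iterations=3):
    model = fit(sizes[0], labeled, noised=True)           # step 1: train the teacher
    for i in range(iterations):
        pseudo = [(x, model(x)) for x in unlabeled]       # step 2: teacher labels the unlabeled data
        model = fit(sizes[min(i + 1, len(sizes) - 1)],
                    labeled + pseudo, noised=True)        # step 3: equal-or-larger noised student
        # step 4: loop back with the student as the new teacher
    return model
```

With something like `sizes=["B7", "L2"]` this mirrors the smaller-teacher / larger-student setup discussed below.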
nowadays are residual networks which means that", "start_timestamp": "00:11:35", "end_timestamp": "00:12:12", "start_second": 695, "end_second": 732, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=695s", "title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "their layers look like so you have the input you have some computation and then you have the output and then there is already a residual connection that basically adds the original signal together to the result of the computation so all you do in this stochastic layer dropout or this stochastic depth right here is you basically disable use you disable this connection right here and all the signal has to flow through here if you read the residual the resnet original resnet paper they make it pretty clear why the residual connection is a good", "start_timestamp": "00:12:12", "end_timestamp": "00:12:49", "start_second": 732, "end_second": 769, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=732s", "title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "idea basically they say these computations here they if you have a very deep network each layer only has to basically um do very a little bit of computation that that can be bypassed uh fairly efficiently for a lot of data points so it's not that hurtful to bypass a layer and in this case they actually use it to just bypass some of these small computations and inject some more robustness into the student model so with these three strategies to bring noise into the training process one is on the data and two is on the", "start_timestamp": "00:12:49", "end_timestamp": "00:13:26", "start_second": 769, "end_second": 806, "url": 
"https://www.youtube.com/watch?v=q7PjrmGNx5A&t=769s", "title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "student model itself they train the student model and then fourth and this is what we didn't have before fourth or maybe we put four here make the student a new teacher so now you can iterate you can use the student model that you just trained to again label the unlabeled data and then you can use another student model again under the influence of noise to train from that student model and so on and you can go on and they do up to like three iterations of this where they always take the new the student as the new teacher and then", "start_timestamp": "00:13:26", "end_timestamp": "00:14:06", "start_second": 806, "end_second": 846, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=806s", "title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "use a new student model to train from that teacher and they get better and better as they do this of course there's like a diminishing returns but it's pretty impressive that this even works right the new students in fact aren't even larger than the old students it's just that the students are larger than the original teacher model in most of these cases so here's the algorithm written down you'll require labeled images right here and unlabeled images which are the ones with the tilde so first you learn the teacher", "start_timestamp": "00:14:06", "end_timestamp": "00:14:42", "start_second": 846, "end_second": 882, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=846s", "title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": 
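The layer-skipping just described — dropping the residual branch so the signal flows only through the skip connection — can be sketched for a single residual block. This is a toy scalar version assuming the common survival-probability formulation, not the paper's implementation:

```python
import random

def residual_block(x, branch, survival_prob=0.8, training=True):
    """Compute y = x + f(x), but with stochastic depth: during training
    the branch f is randomly dropped, leaving only the residual connection."""
    if training:
        if random.random() > survival_prob:
            return x                        # branch bypassed entirely
        return x + branch(x)                # branch kept for this forward pass
    return x + survival_prob * branch(x)    # eval: scale branch by survival prob
```

At evaluation time the branch output is scaled by its survival probability, analogous to how dropout rescales activations.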
"https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "model which minimizes the cross entropy on labeled images this we already know this right this is the label this is the image according to the label and you train the teacher model which is this thing here and you can see here noised so already in the teacher training process you want to introduce this noise you want to introduce these data augmentations these are as i said these are standard techniques to make models more robust and therefore more generalizable yeah we know from these from these self-supervised papers that these", "start_timestamp": "00:14:42", "end_timestamp": "00:15:16", "start_second": 882, "end_second": 916, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=882s", "title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "augmentations are very powerful and the way you design them basically if you one of these augmentations is a random crop which means if you have an image you randomly crop out like part of that image and then that's your training sample and not the entire thing so by doing this you basically teaching the model to ignore the exact location and scale of things on an image and you can do this because you as a human know that you know i can zoom in i can zoom out into something and it won't change what's on the picture and so", "start_timestamp": "00:15:16", "end_timestamp": "00:15:53", "start_second": 916, "end_second": 953, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=916s", "title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "that's you use these augmentations to kind of heuristically tell the model what it should be invariant to and 
that is a very powerful technique to regularize and robustify these deep methods and this is used the same here so already in the teacher model we train with this noise and then step two use a normal i.e not noised teacher model to generate soft or hard pseudo labels for the clean i.e not distorted unlabeled images and this is important they stress this here that when you label the", "start_timestamp": "00:15:53", "end_timestamp": "00:16:30", "start_second": 953, "end_second": 990, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=953s", "title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "unlabeled images you want to use the model that is without the noise and you do it on the not distorted unlabeled images so when you infer the labels it's very important that you have clean accurate labels without any sort of noise in them so label noise is not something that they have found to help in this case so no label noise on the teacher that is so you can see right here on the unlabeled images we'll use that teacher model without the noise to infer the labels now they say these can be hard labels or soft labels
you'll think first when you see that however soft pseudo labels means that the y will be a distribution so instead of being of class 0 it will be sort of let's say 90 percent of class zero but also five percent class one and five", "start_timestamp": "00:17:09", "end_timestamp": "00:17:50", "start_second": 1029, "end_second": 1070, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1029s", "title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "percent class two right so you'll output the distribution um instead of the just the label and they have found that the soft pseudo labels work slightly slightly better than the hard pseudo labels okay thanks so they use the soft pseudo labels here because they work slightly better but you can do it with hard or soft labels the important thing is that you use the teacher to generate as accurate as possible labels for your unlabeled data then third we've already seen this learn an equal or larger student model which", "start_timestamp": "00:17:50", "end_timestamp": "00:18:32", "start_second": 1070, "end_second": 1112, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1070s", "title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "minimizes the cross entropy loss on labeled images and unlabeled images with noise added to the student model so as you can see labeled images and unlabeled images so we're in this semi semi supervised learning setting right now you take in both together with noise and noise here is in bold which means they stress it again this is important so you can see that the loss is composed of two different things these are the true images of your original model and you use that and this means you noise the student 
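The hard-versus-soft distinction can be made concrete: from the teacher's output logits you either keep the full softmax distribution (soft) or collapse it to the argmax class index (hard). A minimal sketch (the helper name is mine, not from the paper):

```python
import math

def pseudo_label(logits, soft=True):
    """Teacher logits -> pseudo-label: a full distribution (soft) or a class index (hard)."""
    exps = [math.exp(z) for z in logits]
    total = sum(exps)
    probs = [e / total for e in exps]                     # softmax over classes
    if soft:
        return probs                                      # e.g. [0.9, 0.05, 0.05]
    return max(range(len(probs)), key=probs.__getitem__)  # argmax class index
```

Since the paper finds soft labels work slightly better, it is the distribution, not just the winning class, that the student trains against.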
model", "start_timestamp": "00:18:32", "end_timestamp": "00:19:11", "start_second": 1112, "end_second": 1151, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1112s", "title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "and that noise can be on the data or in the model itself and here also the unlabeled images that you have labeled with the teacher you do the exact same thing so you train on both of these data sets and step four is if you want to do iterative training use the student as a teacher and go back to step two now they have uh some more tricks when they do this iterative training they are also up the batch size during the iterative training and so on so they do a lot of things to make the student learn something more something better than the", "start_timestamp": "00:19:11", "end_timestamp": "00:19:48", "start_second": 1151, "end_second": 1188, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1151s", "title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "teacher and i think this the whole paper it doesn't it doesn't state it explicitly but i think the whole paper everything they do here is to kind of force or allow the student to become better than the teacher by by giving more noise by making the student larger by making the batch size for the student larger and so on so you you want to sort of inject as much invariance as you can and that will make the student learn more so they say here noising student when the student is deliberately noised in its it is trained to be consistent", "start_timestamp": "00:19:48", "end_timestamp": "00:20:30", "start_second": 1188, "end_second": 1230, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1188s", "title": 
"Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "to the teacher that is not noised when it generates the pseudo labels in our experiments we use two types of noise input noise and model noise all right first data augmentation is an important noising method in noisy student training because it forces the student to ensure prediction consistency across augmented versions of an image specifically in our method the teacher produces high quality pseudo labels by reading in clean images while the student is required to produce to reproduce those labels with augmented images as an input", "start_timestamp": "00:20:30", "end_timestamp": "00:21:10", "start_second": 1230, "end_second": 1270, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1230s", "title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "second when dropout and stochastic depth function are used as noise the teacher behaves like an ensemble at inference time when it generates pseudo labels whereas the student behaves like a single model in other words the student is forced to mimic a more powerful ensemble model we present an ablation study so this it's a bit weird what they say here um don't be confused you use the dropout and the stochastic depth on the student model and they they say here if you do this the teacher behaves like an ensemble at inference time", "start_timestamp": "00:21:10", "end_timestamp": "00:21:48", "start_second": 1270, "end_second": 1308, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1270s", "title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": 
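The consistency idea in this passage — the teacher labels clean images while the noised student must reproduce those labels from augmented inputs — can be sketched as a combined cross-entropy objective over labeled and pseudo-labeled data. The augmentation and model here are toy stand-ins, not the paper's:

```python
import math
import random

def augment(x):
    # stand-in input noise; the paper uses RandAugment-style transformations
    return [v + random.gauss(0.0, 0.1) for v in x]

def cross_entropy(probs, target):
    # target is a distribution (soft label) or a one-hot list (hard label)
    return -sum(t * math.log(p) for t, p in zip(target, probs))

def student_loss(student, labeled, pseudo_labeled):
    """Step-3 objective: the noised student must match clean-teacher labels
    on both the labeled and the pseudo-labeled images."""
    batch = labeled + pseudo_labeled    # both sets are treated identically
    return sum(cross_entropy(student(augment(x)), y) for x, y in batch) / len(batch)
```

Note that the labels `y` are fixed before training, produced by the clean teacher, while the student only ever sees augmented inputs.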
"whereas the student behaves like a single model and yeah it's it's a bit of a weird formulation but it's it's true like the teacher the teacher will produce these same uh the label for different pathways through the student if you use dropout and kind of stochastic depth and therefore the student is kind of required to approximate each time each forward pass has a different forward pass through the layers through the connections with dropout and it's forced to approximate that teacher label with all of these", "start_timestamp": "00:21:48", "end_timestamp": "00:22:21", "start_second": 1308, "end_second": 1341, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1308s", "title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "um different things so you see that you you put in a lot a lot of techniques so they have even other techniques um there is one additional trick and it's not and it's not one actually they have so many tricks and if you look at their experimental setup that it's crazy like they describe exactly we reduce the learning rate like this and the batch size like this and so on so to get state of the art on imagenet it's not enough to just have a good idea of a new thing to do what you you have to have the good idea and then", "start_timestamp": "00:22:21", "end_timestamp": "00:22:55", "start_second": 1341, "end_second": 1375, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1341s", "title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "execute it almost like really well um because you have to regard all of these additional tricks that people have figured out over the years in any case they say it works better with an additional trick data filtering and 
balancing specifically we filter images that the teacher model has low confidence on since they are usually out of domain images so that goes to a point where if you see we have this imagenet label data set right and we have the larger data set now the larger dataset simply contains images and there is no guarantee", "start_timestamp": "00:22:55", "end_timestamp": "00:23:32", "start_second": 1375, "end_second": 1412, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1375s", "title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "that the images are actually of the classes that we have in the imagenet data set right here we have a thousand classes here there's no guarantee that these images fit into any of those classes yet we still ask the teacher model to put them in some of these classes now you can filter out part of those images um if you can look at the teacher model and you look at its confidence so when it outputs a distribution if if there's just two labels let's say if it outputs a distribution like this that's wildly different than if it", "start_timestamp": "00:23:32", "end_timestamp": "00:24:07", "start_second": 1412, "end_second": 1447, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1412s", "title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "outputs a distribution like this both are class 1 labels but one is much more confident than the other so what you want to do is you want to filter out these low confidence labels because you know the model isn't really sure but it has to assign a class but that's usually an indication that it is an out of domain image so if they filter this it works better and then also to ensure that the distribution of the 
unlabeled images match that of the training set we also need to balance the number of unlabeled images for each class as all", "start_timestamp": "00:24:07", "end_timestamp": "00:24:44", "start_second": 1447, "end_second": 1484, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1447s", "title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "classes in imagenet have a similar number of labeled images for this purpose we duplicate images in classes where there are not enough images for classes where we have too many images we take the images with the highest confidence okay so this is just another technique this has basically nothing to do with their core idea but this is just another thing uh where they say okay we can treat this big uh thing that we scrape from the internet you know we can somehow filter and balance it smartly and that will work", "start_timestamp": "00:24:44", "end_timestamp": "00:25:17", "start_second": 1484, "end_second": 1517, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1484s", "title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "even better all right so let's go into the experiments of course there um so what they do i think where is the graphic what they do is they take an image net sorry they take an efficient net right here and they trade they first train an efficient net um a smaller efficient net as we said for to be the teacher and then they train a larger efficient net for the student the best model in our experiments is a result of three iterations of putting back the student as a new teacher we first train an efficient at b7 on", "start_timestamp": "00:25:17", "end_timestamp": "00:26:08", "start_second": 1517, "end_second": 1568, 
"url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1517s", "title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "imagenet as the teacher model so you can see in the table right here what the b7 achieves the efficient net b7 here you can see it has 66 million parameters which is fairly small compared to these other kind of previous state-of-the-art methods on imagenet right so they first train this and that will achieve something like an 85 percent accuracy now if you just train a larger model this efficient net l2 right here that has you can see 480 million parameters so a lot of more mainly parameters but you just train it", "start_timestamp": "00:26:08", "end_timestamp": "00:26:41", "start_second": 1568, "end_second": 1601, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1568s", "title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "on the same data set on imagenet you will get a 0.5 improvement and you can see that here with noisy student training with the exact same model so it has the same amount of parameters you'll actually get an 88.4 so i like a more than a three percent improvement and that's with the same model just with this different training procedure and inputting these 300 million unlabeled images that you have laying around but the all the information about all the label information comes from the imagenet dataset and comes from this efficientnetb7 teacher", "start_timestamp": "00:26:41", "end_timestamp": "00:27:24", "start_second": 1601, "end_second": 1644, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1601s", "title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": 
"https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "model so that's basically a testament that out of this 85 you can make this 88 just by smartly using the information that this model has learned about the data and transferring it to new data so they train an efficient net b7 that's the small model as a teacher model then by using the b7 model as the teacher we trained an efficient net l2 model with the unlabeled batch size set to 14 times the labeled batch size and they stress that it's important that you up the batch size", "start_timestamp": "00:27:24", "end_timestamp": "00:28:02", "start_second": 1644, "end_second": 1682, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1644s", "title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "that's another thing that makes the student learn more than the teacher then we trained a new efficient net by the way this 14 times can be done because now you have more data right so you can also up the batch size then we trained a new efficient net l2 model with the efficient net l2 model as the teacher lastly we iterated again and used an unlabeled batch size of 28 times the label batch size the detailed result of the three iterations and so on okay so you can see that it's a fairly complicated procedure but you can", "start_timestamp": "00:28:02", "end_timestamp": "00:28:38", "start_second": 1682, "end_second": 1718, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1682s", "title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "gain and gain and gain by simply upping the batch size or iterating on this procedure
and i think they have it somewhere here yes so as you can see in iteration one you start with the b7 and you train the efficient net l2 with a batch size 14 times larger and you gain significantly right this gains about two percent over the original efficient net then you iterate again with the same batch size and you get a 5.5 improvement and you iterate again with", "start_timestamp": "00:28:38", "end_timestamp": "00:29:19", "start_second": 1718, "end_second": 1759, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1718s", "title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "an even larger batch size and you get a 0.3 improvement so there's diminishing returns but still you can see that with the introduction of noise with the introduction of the larger model with the introduction of the larger batch size these are all things that help the student basically become better than the teacher all right so they do a bunch of other experiments their main comparison is right here where they say look even if we train the same model with this noisy student training we can make
imagenet data set for example imagenet c you can see that there are quite a few distortions right here i don't even know if you can see it on this video but this is a swing so the swing right here is like something like this", "start_timestamp": "00:29:55", "end_timestamp": "00:30:34", "start_second": 1795, "end_second": 1834, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1795s", "title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "but you almost can't see it and you see that the bold on the left is always the prediction of their model while the thing on the right is the prediction of the original model so this model they claim is significantly more robust to these kinds of perturbations and they do an analysis of this where they show yes in fact it is so i think we've already seen this at the beginning that the noisy student is significantly more robust to these perturbations and they also test this against adversarial perturbations", "start_timestamp": "00:30:34", "end_timestamp": "00:31:11", "start_second": 1834, "end_second": 1871, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1834s", "title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "so right here you can see that the original model drops pretty quickly as you increase the epsilon the epsilon is kind of the strength of the adversarial perturbation and the original model drops very quickly to you know fairly low accuracy while the noisy student training drops much less quickly now this is another testament to the fact that i think what's happening is you have your data space right and you have your data points in it now when you do the normal
data", "start_timestamp": "00:31:11", "end_timestamp": "00:31:51", "start_second": 1871, "end_second": 1911, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1871s", "title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "augmentation what you'll do is you not only force the model to predict those points correctly but you'll sort of make a bit of a cloud around them and you force the model to predict that cloud correctly now if you introduce more data and you do even more noise what you do is you'll make these clouds kind of larger and that means the model is more robust to any sort of perturbations in these clouds right and and that means it's probably also going to be more robust to adversarial perturbations so that's sort of how you can think of", "start_timestamp": "00:31:51", "end_timestamp": "00:32:28", "start_second": 1911, "end_second": 1948, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1911s", "title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "this uh this introduction of noise uh to make it more generalizable how does this generalize better so if you think of this data point right here if i'm looking to generalize that means you know i have this iid data set so probably my test data is going to be related to the training data so i might get a data point that's fairly close to that data point and generalizing means i classify it correctly now if this cloud is very small like it is here my decision boundary could be like here right and even though the", "start_timestamp": "00:32:28", "end_timestamp": "00:33:04", "start_second": 1948, "end_second": 1984, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1948s", "title": "Self-training with Noisy 
Student improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "the test data set is fairly close to the original training data point it will be classified incorrectly however if my original cloud during training is larger you can see if i train a model it can maybe put the decision boundary here and then my test data point will be included on that same side so that's kind of the idea behind generalizing better of course that's a vast simplification and also to say that this here is an fgsm attack so this is kind of the weakest attack in the adversarial perturbation spectrum", "start_timestamp": "00:33:04", "end_timestamp": "00:33:41", "start_second": 1984, "end_second": 2021, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1984s", "title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "they do say under a stronger attack pgd which is a fairly strong attack with 10 iterations at epsilon equals 16.
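The FGSM attack mentioned here can be sketched in a few lines. This is a minimal numpy illustration under toy assumptions (a linear model with an analytic squared loss, pixel values in [0, 1]); `fgsm_perturb` and the toy model are hypothetical stand-ins, not the paper's EfficientNet setup:

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon):
    """Fast Gradient Sign Method: move each pixel by epsilon in the
    direction that increases the loss (sign of the loss gradient)."""
    return np.clip(x + epsilon * np.sign(grad), 0.0, 1.0)

# Toy example: a linear "model" w.x with squared loss against label y,
# so the loss gradient w.r.t. x is analytic: d/dx (w.x - y)^2 = 2*(w.x - y)*w
rng = np.random.default_rng(0)
x = rng.random(8)           # a fake 8-pixel "image" in [0, 1]
w = rng.standard_normal(8)  # fake model weights
y = 0.0                     # target label
grad = 2 * (w @ x - y) * w  # loss gradient w.r.t. the input

# epsilon = 16 on a 0-255 pixel scale, as quoted in the talk
x_adv = fgsm_perturb(x, grad, epsilon=16 / 255)
# the perturbation never leaves the epsilon ball (clip only shrinks it)
print(np.max(np.abs(x_adv - x)) <= 16 / 255 + 1e-9)
```

PGD, the stronger attack quoted above, is essentially this same step applied 10 times with a projection back into the epsilon ball after each step.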
noisy student training improves EfficientNet-L2's accuracy from 1.1 percent to 4.4 percent and note this um like you know 1.1 percent really means the model is almost like dead this is lower this is like random performance and 4.4 is still a bit above random performance but um yeah you could probably get there by simply using any sort of noise in that case but still you can see that", "start_timestamp": "00:33:41", "end_timestamp": "00:34:25", "start_second": 2021, "end_second": 2065, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=2021s", "title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "it is more robust to especially to natural distortions and therefore it generalizes better as i said they do quite a bit of drop sorry not drop out ablation studies to figure out where exactly um the performance comes from and the answer is it pretty much comes from all the things that they've described so here you can see um the effect of that extra data set and you can see pretty much with that extra data set all the situations improve here you can see what is happening when you do not", "start_timestamp": "00:34:25", "end_timestamp": "00:35:04", "start_second": 2065, "end_second": 2104, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=2065s", "title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "augment the student uh when you do not data augment you can immediately see that the accuracy drops and then when you do not augment and also don't use these model noises then the performance drops again and lastly when you use the teacher but you noise the teacher you can see also here the performance is dropping from the original
um quite a bit so all of these things kind of contribute and they do many more ablations and they have listed their findings here so using a large teacher model with better performance leads to better results so", "start_timestamp": "00:35:04", "end_timestamp": "00:35:42", "start_second": 2104, "end_second": 2142, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=2104s", "title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "you know as the original teacher you should use as good as possible a teacher model you can find second a large amount of unlabeled data is necessary for better performance okay so if you want to do this you better get a large amount of extra data because that's one thing that makes the student perform better third soft pseudo labels work better than hard pseudo labels for out of domain data in certain cases fourth a large student model is important to enable the student to learn a more powerful model okay so because usually this um", "start_timestamp": "00:35:42", "end_timestamp": "00:36:26", "start_second": 2142, "end_second": 2186, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=2142s", "title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "this is usually called knowledge distillation if you use a teacher model to train a student model and it is often used when the student model is smaller than the teacher because you want to kind of become more efficient so the teacher is large you make the student small and you usually sacrifice some accuracy and here they say if you want to gain some accuracy you need a large student model it can't be like a small one number five data balancing is useful
for small models", "start_timestamp": "00:36:26", "end_timestamp": "00:37:00", "start_second": 2186, "end_second": 2220, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=2186s", "title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "number six joint training on labeled data and unlabeled data outperforms the pipeline that first pre-trains with unlabeled data and then fine-tunes on labeled data so this is in contrast to like what people have done before in the self-supervised learning and so on where it's always kind of pre-training then fine-tuning or in the transfer learning setting seven using a large ratio between unlabeled batch size and labeled batch size enables models to train longer on unlabeled data to achieve a higher accuracy okay", "start_timestamp": "00:37:00", "end_timestamp": "00:37:36", "start_second": 2220, "end_second": 2256, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=2220s", "title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "we've already seen that they have used that and number eight training the student from scratch is sometimes better than initializing the student with the teacher and the student initialized with the teacher still requires a large number of training epochs to perform well this is fairly interesting because it kind of alludes to the fact that the minima in weight space if so if this is of course the case if the student model is the same as the teacher model so in like iteration two or three or what not um it means that you know in weight", "start_timestamp": "00:37:36", "end_timestamp": "00:38:12", "start_second": 2256, "end_second": 2292, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=2256s",
"title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "space if we look at you know you might want to start the student here and the minimum is right here and you might want to think that if i learn the same thing then the minima are fairly close together right so the the teacher's minima might be here and the student minima might be fairly close so it might be beneficial if i if i start not over here but actually start at the teacher's minimum but this doesn't always seem to be the case and that is a fairly interesting observation because it kind of means that we're talking about", "start_timestamp": "00:38:12", "end_timestamp": "00:38:46", "start_second": 2292, "end_second": 2326, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=2292s", "title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "different minima here we're talking about the student model learning different things and that's what we've discussed already the student model kind of learns to be robust and that's probably a minimum that's fairly far away in weight space at least in in a sort of energy landscape weight space uh might be the case that it needs to actually overcome kind of a a hill here even though the minimum might be close there's lots of research in like how minima are distributed in these weight spaces which i don't want to go into right here", "start_timestamp": "00:38:46", "end_timestamp": "00:39:20", "start_second": 2326, "end_second": 2360, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=2326s", "title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", 
"text": "but it is a fairly interesting observation that it's not always helpful to initialize the teacher sorry the student at the teacher's optimum okay so this was the paper and you know this is this is the type of research where i do appreciate kind of the these large labs taking it on because they have the resources to do all of these ablations all of these different models cross them with these giant data sets and so on which i guess university labs just would not have and this is a fairly um thorough paper really investigating which parts of the", "start_timestamp": "00:39:20", "end_timestamp": "00:40:00", "start_second": 2360, "end_second": 2400, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=2360s", "title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "q7PjrmGNx5A", "text": "pipeline you know do something and which ones don't and usually i i'm fairly critical of pipelines that have like 50 billion tricks um because you never know where the improvement exactly is coming from but you can sort of mitigate that criticism by doing all of these kind of ablations on the different parts and really showing look this is important but this is also important but this is also important but this is also important so yeah that was my two cents to this paper i hope you enjoyed this and i'll see you", "start_timestamp": "00:40:00", "end_timestamp": "00:40:33", "start_second": 2400, "end_second": 2433, "url": "https://www.youtube.com/watch?v=q7PjrmGNx5A&t=2400s", "title": "Self-training with Noisy Student improves ImageNet classification (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/q7PjrmGNx5A/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "thank you very much Michael very much for the invitation it's a great pleasure to be here and we're in some sense more of a user of many of the deep learning techniques which have been 
developed here in this community and I just wanted to highlight a few examples of how we can use deep learning in medical imaging and more specifically talk about image reconstruction super-resolution and segmentation but probably in the same spirit as in the last talk just as a sort of introduction there is a lot of excitement in medical imaging in this", "start_timestamp": "00:00:00", "end_timestamp": "00:00:46", "start_second": 0, "end_second": 46, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=0s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "in this area but there is also a lot of hype so if you look at several magazine covers over the last year actually there's a huge amount of excitement but for me if you really read the headlines there's also quite a lot of over-excitement and in some sense the press has picked up on this and quite often taken a number of things out of context so if you look at for example one of the comments which geoff hinton made in 2017 said well you should rather stop training radiologists and if you read the whole interview he said many
thing is you probably are all familiar with conferences such as nips and you've seen it and many many more", "start_timestamp": "00:01:27", "end_timestamp": "00:02:07", "start_second": 87, "end_second": 127, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=87s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "people going to these conferences so some of our radiology colleagues go to a conference called RSNA and if you think nips is big and very unmanageable RSNA has around 45 to 50,000 attendees and Chicago is apparently the only city in the world big enough to host this conference but one of the things which really happened this year or last year 2017 which is very interesting is normally if you go to these conferences you have a strand on x-ray a strand on CT on molecular imaging it's basically all the different
"Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "something called the UK biobank for those of you who don't know what UK biobank is actually it's a population study of nearly 500,000 people but a hundred thousand of these subjects will be imaged and actually they have already done 1/5 so they've already done over 20,000 of these subjects and so they're acquiring very high-resolution imaging data from these subjects and one of the things which is interesting is they're not only acquiring images of the whole body which of course is very useful with MRI but they're also", "start_timestamp": "00:03:22", "end_timestamp": "00:03:58", "start_second": 202, "end_second": 238, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=202s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "acquiring a dedicated brain imaging so not only structural brain imaging functional brain imaging and diffusion brain imaging and they're also acquiring quite dynamic images of the heart for example allowing you to look at cardiovascular function and for example study the interaction between the brain and the heart which is something which is not very well understood at the moment and more importantly you also have available lifestyle information how many croissants you eat how many cups of coffee you drink genetics and
in the world to use so if you wanted to use that data set you have to fill out an application make sure that you don't try to de-anonymize the data but otherwise you can download the data and use it for research purposes so that's for example really something which is a game changer I think in medicine so if you look at what are the opportunities more specifically really sort of there is a sort of pyramid where", "start_timestamp": "00:04:34", "end_timestamp": "00:05:08", "start_second": 274, "end_second": 308, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=274s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "the value of what we do increases at the sort of lowest level you can use machine learning to help you with image reconstruction for example you can automatically plan your scans so if you lie in an MR scanner for example there's quite a lot of fiddling by an operator not a radiologist but typically a radiographer who sort of plans the imaging you can of course use image enhancement super resolution is something which I'll show you in a moment you can do the conventional semantic image interpretation for example find organs", "start_timestamp": "00:05:08", "end_timestamp": "00:05:44", "start_second": 308, "end_second": 344, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=308s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "segment these organs you can quantify biomarkers for example measure a tumor volume and then sort of if you come to the higher levels where actually probably the highest added value is sort of computer-aided interpretation and diagnosis there are probably very few applications at the moment really being used and probably the only one I can really
think of where actually machine learning has had an impact at those high level features is at the moment for example in mammography screenings so there are some systems", "start_timestamp": "00:05:44", "end_timestamp": "00:06:19", "start_second": 344, "end_second": 379, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=344s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "which effectively act as a second reader to a radiologist but really we haven't really cracked the top of this pyramid but there's quite a lot of work going on at the bottom I also want to show you a few challenges where I think really we need your help in developing new methods sort of at the moment most of the techniques we use are supervised techniques so that means our training data is absolutely critical for what we do if you look at our colleagues in vision what they typically do is well you go crowdsource your labels or your", "start_timestamp": "00:06:19", "end_timestamp": "00:06:59", "start_second": 379, "end_second": 419, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=379s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "annotations and if you say that to radiologists they don't really react very well to this so they obviously say well you should really ask some experts to help you with doing that but in the UK at least for example we already train far too few radiologists so if you ask them to generate training data i.e. labeled data it's really very challenging and quite difficult to do and more importantly and this is something which is quite I think not always perfectly understood is if I ask an observer to tell me whether", "start_timestamp": "00:06:59", "end_timestamp": "00:07:36", "start_second": 419,
"end_second": 456, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=419s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "there's a car in a photo or not then the answer is pretty unambiguous either there is a car or there isn't in medical imaging our training data is often imperfect i.e. if you ask three different radiologists they will give you hopefully not three different answers but they might give you more than one answer so you really need to train in a scenario where your data might not be perfectly labeled and of course if that's then not perfectly labeled how do you actually validate your algorithm against something where", "start_timestamp": "00:07:36", "end_timestamp": "00:08:08", "start_second": 456, "end_second": 488, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=456s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "there is really no gold standard the other thing is quite often we've had very great success reporting fantastic algorithms in a scientific paper and then when you deploy them in a clinical scenario they don't really work that well and if you actually look at scanners there are really only three big scanner manufacturers and when they produce scanners they not only vary in the color as you see here but they really produce slightly different images to a human observer that doesn't really matter but actually turns out to most of
train with machine learning it actually matters and quite often we don't have access to data from all different sites or from all different vendors and if you every time you then go to your radiologist colleague and tell them well just annotate a few more data set and then I can do some retraining they don't really react very well to this at least in my experience okay so I want to show you three different applications for where we at the moment use deep learning and we're actually has had significant impact really", "start_timestamp": "00:08:45", "end_timestamp": "00:09:23", "start_second": 525, "end_second": 563, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=525s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "transformed quite a lot of the things we do and the first one is really on image reconstruction and and this is really probably something which a number of you will have worked on it's actually quite well understood problem using it as an inverse problem but I want to show you a particular application here and that is using a modality which some of you may know is called magnetic resonance imaging it's a great modality because it's it's safe it can do a lot of things can show you many different properties of the body but it's relatively slow", "start_timestamp": "00:09:23", "end_timestamp": "00:10:00", "start_second": 563, "end_second": 600, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=563s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "that means it's good for measuring or acquiring images of the brain but not so good if you want to for example do cardiac imaging which you see here so if you look at these cardiac images which you see on this particular slide this looks like it's one single 
heartbeat but unfortunately mr imaging is not fast enough to acquire this so what you typically do is you measure with an ECG in which state the heart is and then I'm actually taking bits of data from different heartbeats and I'm assuming that if my ECG signal shows me", "start_timestamp": "00:10:00", "end_timestamp": "00:10:37", "start_second": 600, "end_second": 637, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=600s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "I'm in the same heart state as previously then I can take these measurements and combine them to form an image so this particular image here is typically probably an average of about 10 different heart beats so it's not a single heartbeat it would be much nicer if we can acquire this faster because actually to acquire this the patient has to hold their breath and especially if you have heart disease then of course it gets a bit more tricky so 10 heart beats is around 8 to 10 seconds that's how long you have to", "start_timestamp": "00:10:37", "end_timestamp": "00:11:11", "start_second": 637, "end_second": 671, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=637s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "hold your breath and you really would like to do this fast enough that you don't have to hold your breath at all okay so just a very primitive explanation of how we take our measurements in MRI imaging and really the physics is not so important but I just want to highlight why this is so slow what we typically do is we traverse our measurement space in which we take measurements which is called the K space and we have to traverse it sequentially for a number of reasons which I don't really want to go
into too much detail", "start_timestamp": "00:11:11", "end_timestamp": "00:11:47", "start_second": 671, "end_second": 707, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=671s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "but once I've made my measurements in k space then actually image reconstruction is trivial I just apply the Fourier transform and I get back my image so that is a relatively slow process because I have to acquire these measurements in k space sequentially and of course if I want to for example create dynamic images of the heart I have to keep on doing the same thing over and over again but actually the heart is only changing a bit between these different acquisitions so there's an enormous amount of spatial temporal", "start_timestamp": "00:11:47", "end_timestamp": "00:12:21", "start_second": 707, "end_second": 741, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=707s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "redundancy in the data now what is the easiest solution to accelerate your imaging process well instead of acquiring all of the measurements I need i just acquire a subset of the measurements which i need so for example if i acquire as you see here in the bottom only 25% of the data of course i'm four times faster which is great for the patient but for the radiologist this image is not really acceptable because it looks actually much degraded because i have a lot of aliasing artifacts in the data so this", "start_timestamp": "00:12:21", "end_timestamp": "00:12:59", "start_second": 741, "end_second": 779, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=741s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail":
"https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "problem has been well studied there are many techniques which can try to help you recover that information which is in the top image you want to effectively denoise or de-alias this image here at the bottom and the most commonly used techniques are effectively compressed sensing techniques so these compressed sensing techniques have been around for a while in MRI imaging they've been around for ten years very successfully used but more recently some machine learning techniques have really significantly outperformed", "start_timestamp": "00:12:59", "end_timestamp": "00:13:34", "start_second": 779, "end_second": 814, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=779s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "these techniques and really what is the key difference between the compressed sensing techniques you see in the top and those here at the bottom so the ones in the top they effectively use generic priors sparsity low rank they're not really data-driven priors whereas the techniques in the bottom they effectively try to learn the priors from the data and try to improve our reconstruction in that sense so I just want to quickly show you the problem formulation we have effectively our k-space measurements so all our
measurements are complex valued and the image we want to recover is also complex valued and in fact we our measurements are related by to the image through a undersampled for you including matrix so this effectively under samples your measurement space and applies a Fourier transform and of course our acquisition noise and the under samples for your operation if you want to write", "start_timestamp": "00:14:12", "end_timestamp": "00:14:49", "start_second": 852, "end_second": 889, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=852s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "it down differently you can just write it down as a Fourier operation and then effectively a mask which defines how you're under sampled case space and that's that's quite important okay so in what you're trying to then solve mathematically is typically an unconstrained optimization problem consisting of your regularization term if you would use compressed sensing you'd probably use here an l1 or LZ row prior a norm or and the data fidelity term and that data fidelity term is quite crucial and it's actually something which is not so easy to bring", "start_timestamp": "00:14:49", "end_timestamp": "00:15:29", "start_second": 889, "end_second": 929, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=889s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "into into the equation if you use a machine learning so you know if we try to for example use a CNN because in some sense we we're trying to formulate this as a Dean problems with taking as an input one image and we're trying to produce a denounced image then we effectively have two different terms first which we're going to use effectively tries to make our denies image which our CNN 
estimates close to the image we have but at the same time we also need to make sure that the image which we reconstruct", "start_timestamp": "00:15:29", "end_timestamp": "00:16:09", "start_second": 929, "end_second": 969, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=929s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "is actually close to the k-space measurements we have obtained and this leads us when we develop our CNN to have a layer which is probably not very common in other applications which we call a data consistency layer so this enforces our data fidelity and what this effectively does is this equation you see here is we have some part of missing k-space if we make an estimate of this we simply keep that estimate of k-space and we have some part where we have measured k-space and that measured bit of k-space we're going to", "start_timestamp": "00:16:09", "end_timestamp": "00:16:49", "start_second": 969, "end_second": 1009, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=969s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "average together with our estimated k-space depending on how noisy this is so if you have a completely noise free case you would assume that lambda goes to infinity and you would only keep your original measurements in k-space of course if you have some measurement noise you might actually average those two together and so here in this particular equation s_CNN is effectively the Fourier version of the image which I've reconstructed and this here is our zero-filled k-space so this is what we would normally have so having this as", "start_timestamp": "00:16:49", "end_timestamp": "00:17:28", "start_second": 1009, "end_second": 1048, "url": 
"https://www.youtube.com/watch?v=_p8vFSUesNs&t=1009s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "a layer will force you to have an image reconstruction consistent with what you have measured and it turns out that because we want to train this end to end we need to be able to do the forward and the backward passes in order to also propagate our gradients back now the Fourier operation is a linear operation so it actually turns out that the forward and the backward pass you can write down quite easily in closed form and in the backward pass this is the Jacobian of the data consistency layer and if you decide", "start_timestamp": "00:17:28", "end_timestamp": "00:18:04", "start_second": 1048, "end_second": 1084, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=1048s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "your lambda is trainable because you don't know how much measurement noise you have you can also write down the derivative with respect to lambda so this is quite nice because I can basically propagate my gradients easily through this data consistency layer so here's what you actually end up with you have an input image a complex valued input image you have your k-space measurements and then you have a number of denoising layers which try to effectively remove aliasing from the image and then after these", "start_timestamp": "00:18:04", "end_timestamp": "00:18:41", "start_second": 1084, "end_second": 1121, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=1084s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "denoising layers you have your data consistency layer
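A minimal sketch of such a data consistency layer, following the rule described above — keep the CNN's k-space estimate where nothing was measured, and use a lambda-weighted average where it was (lambda to infinity keeps the measurements exactly). Function and variable names are my own, not the authors' code:

```python
import numpy as np

def data_consistency(x_cnn, k_meas, mask, lam=np.inf):
    """Data consistency layer for undersampled MRI reconstruction.

    x_cnn:  complex image estimated by the denoising CNN
    k_meas: measured (zero-filled) k-space
    mask:   1 where k-space was measured, 0 where it is missing
    lam:    noise weighting; lam -> inf trusts the measured samples fully
    """
    k_cnn = np.fft.fft2(x_cnn, norm="ortho")  # s_cnn: Fourier transform of CNN output
    if np.isinf(lam):
        k_dc = (1 - mask) * k_cnn + mask * k_meas  # noise-free: keep measurements
    else:
        k_dc = (1 - mask) * k_cnn + mask * (k_cnn + lam * k_meas) / (1 + lam)
    return np.fft.ifft2(k_dc, norm="ortho")

# noise-free case: measured k-space lines survive exactly,
# missing lines are filled in from the CNN estimate
rng = np.random.default_rng(0)
x_true = rng.standard_normal((32, 32)) + 0j
mask = np.zeros((32, 32))
mask[::4, :] = 1.0
k_meas = mask * np.fft.fft2(x_true, norm="ortho")
x_dc = data_consistency(rng.standard_normal((32, 32)) + 0j, k_meas, mask)
```

Because everything here is linear apart from the element-wise averaging, the backward pass is as cheap as the forward pass, which is what makes the layer practical inside an end-to-end cascade.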
which then forces your reconstruction to be consistent with k-space and effectively that links to your k-space measurements and then in analogy to iterative optimization you can cascade these networks to an arbitrary depth we typically use five of these cascades so here's an example of what you end up with what you see on the left hand side is the image as acquired with six-fold undersampling so if it's six-fold undersampling that means the", "start_timestamp": "00:18:41", "end_timestamp": "00:19:21", "start_second": 1121, "end_second": 1161, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=1121s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "acquisition is now six times faster I only have 1/6 of the data and you can see that the image which we reconstruct here is virtually useless this here is a technique which is based on dictionary learning a sort of more compressed sensing technique this is our CNN and this is the fully sampled image so of course I've been sort of simulating this undersampling here and what you see is that you can virtually see no difference between the fully sampled image and the reconstruction using CNNs the compressed sensing one here is also", "start_timestamp": "00:19:21", "end_timestamp": "00:20:00", "start_second": 1161, "end_second": 1200, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=1161s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "quite good but it turns out it's much slower and in terms of PSNR it's probably 10 percent or so worse than the CNN you can push this even to higher undersampling right so here's an example of eleven-fold undersampling which is really quite aggressive and you can still
recover the image very well so this is actually a really quite nice result now probably one of the biggest advantages is actually here so of course the CNN is better in terms of PSNR and all that's nice but really one of the most important", "start_timestamp": "00:20:00", "end_timestamp": "00:20:40", "start_second": 1200, "end_second": 1240, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=1200s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "advantages is speed so if you use a compressed sensing technique with dictionary learning it takes around six hours to reconstruct an entire image sequence which in some clinical scenarios is acceptable but for example if you want to use your images for navigation because you want to for example perform a biopsy or something else then actually this is far too slow and the CNN is of course very much faster so now you can really do this probably in 100 milliseconds so this is fast enough to", "start_timestamp": "00:20:40", "end_timestamp": "00:21:13", "start_second": 1240, "end_second": 1273, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=1240s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "do it on the scanner for image guided surgery and that's a very big advantage of these techniques good I want to now go from image reconstruction and talk a bit about two related topics image segmentation and super resolution and you'll see in a moment why I've sort of lumped them together because they share a number of problems which we have in medical imaging so okay if I adopt a sort of standard approach for image segmentation I'm not going to show you anything new here we for", 
"start_timestamp": "00:21:13", "end_timestamp": "00:21:56", "start_second": 1273, "end_second": 1316, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=1273s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "example use a sort of variant of an FCN for medical image segmentation and pair that with a large enough data set for training so this is actually quite a nice data set which is also publicly available where a group in Oxford have taken 5,000 subjects and annotated over 90,000 images in this data set you can use this for training and it actually turns out that if you use this for training to for example segment the heart here you probably can't see this very clearly you", "start_timestamp": "00:21:56", "end_timestamp": "00:22:34", "start_second": 1316, "end_second": 1354, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=1316s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "can actually do a very good job in all these images which are part of this UK Biobank so even in slices which are for example here at the apex of the heart where the heart ends which are typically very difficult because the heart is moving in and out of the plane this really works very well you don't really have any problems with that if you then try to compare how well a machine does compared to a human then actually the automated measurements for clinically important parameters are pretty much within the variability of", "start_timestamp": "00:22:34", "end_timestamp": "00:23:12", "start_second": 1354, "end_second": 1392, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=1354s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": 
"https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "what different humans do so we're not performing better than a human does so there is no superhuman performance but actually we're doing as well as a human does and I think that's quite natural because it seems to me quite hard to get superhuman performance from training data which actually in some sense is quite flawed so you can do this quite easily so there is no real challenge here we can just adopt what many of you guys have done in vision what is a challenge is that actually the imaging data has a lot of", "start_timestamp": "00:23:12", "end_timestamp": "00:23:44", "start_second": 1392, "end_second": 1424, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=1392s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "artifacts which you need to understand in order to really make best use of the data so one of the things we typically do is when we acquire the heart we acquire one slice we ask the patient to hold their breath for 10 seconds we acquire then the second slice do the same thing again third slice do all of this again and our slices are quite thick which means they have an anisotropic resolution so high resolution in-plane but low resolution out-of-plane so if I take that data stack it together and show you that", "start_timestamp": "00:23:44", "end_timestamp": "00:24:21", "start_second": 1424, "end_second": 1461, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=1424s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "as a sort of reformatting you see these ugly staircases which come from the fact that probably every slice is around one centimeter thick and in-plane you probably
have around one millimeter resolution there's a second problem which comes from the fact that the patient has to hold their breath for every slice and they might hold their breath in a different position between slices which means that actually some of the slices are shifted right so if you basically treat this as one volume you end up with problems and for", "start_timestamp": "00:24:21", "end_timestamp": "00:25:00", "start_second": 1461, "end_second": 1500, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=1461s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "example if you take this volume here you can quite clearly see that in two of those slices the patient has held their breath in a different location so now I can do segmentation with this data set in 3d or super resolution and this is what you end up with when you do for example super resolution which is actually quite nice this super resolution here takes this dataset produces this and has this fantastic thing here where there's effectively a hole in the heart of course the patient doesn't have a hole in the heart because", "start_timestamp": "00:25:00", "end_timestamp": "00:25:31", "start_second": 1500, "end_second": 1531, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=1500s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "it would probably not survive with that but of course the super resolution can only do whatever the data you give it allows and similarly in the segmentation here you can also see these disconnected regions so what we really would like to do is take that same data and produce either a super resolution like you see here or a segmentation like you see here and for that we have to incorporate anatomical knowledge if
you ask a clinician to look at this data they will look at it only slice by slice but in their head", "start_timestamp": "00:25:31", "end_timestamp": "00:26:07", "start_second": 1531, "end_second": 1567, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=1531s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "they will build up a 3d representation of what they're looking at and they are completely immune to the fact that you have motion between these different slices so one of the challenges we came across is really that these standard loss functions which we normally use for segmentation or super resolution are not really very good in those situations so we thought about what can we do differently because we really want to put low resolution input data into our network and end up with a high resolution segmentation", "start_timestamp": "00:26:07", "end_timestamp": "00:26:43", "start_second": 1567, "end_second": 1603, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=1567s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "or super resolution so this network not only performs semantic segmentation but also increases the resolution of the data so we decided let's try out something which looked very cool called the TL network which has been proposed in graphics and really has effectively two components one component is a sort of autoencoder an encoder which forces you into a latent space with variables h and then a decoder and that network is effectively trained with segmentations so", "start_timestamp": "00:26:43", "end_timestamp": "00:27:24", "start_second": 1603, "end_second": 1644, "url": 
"https://www.youtube.com/watch?v=_p8vFSUesNs&t=1603s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "effectively with label maps and then a second branch which is the predictor network which for example takes an intensity image and predicts this latent representation from this intensity image and we can sort of train this network in a joint fashion now when you look at this you might say well hang on what's this useful for well for example if we train our network for doing segmentation one of the things we can do is we put our low resolution image in we obtain a segmentation using our segmentation", "start_timestamp": "00:27:24", "end_timestamp": "00:28:05", "start_second": 1644, "end_second": 1685, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=1644s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "network so this is our segmentation but we want an anatomically plausible segmentation so we encode that segmentation into our latent space and for our ground truth labels we can also encode them in our latent space and we then have a sort of loss function on that latent space which forces you to be similar in the anatomical representation of what you're looking for and then you can couple this with your standard cross entropy loss okay so you now have two loss functions one which is a normal cross entropy and one", "start_timestamp": "00:28:05", "end_timestamp": "00:28:42", "start_second": 1685, "end_second": 1722, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=1685s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "loss in this latent space which forces
your shape to look similar to what you had seen during training and if you do that you actually get a very nice result so instead of getting these really weird biologically implausible shapes you can constrain your output to be very close to the ground truth if you acquire high resolution images by the way these high resolution images you can only acquire if you hold your breath for 40 seconds so this requires really very dedicated", "start_timestamp": "00:28:42", "end_timestamp": "00:29:20", "start_second": 1722, "end_second": 1760, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=1722s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "volunteers to be able to do that and if you do this with super resolution you can now do the same thing right with super resolution the only difference here is that actually my super resolution network produces as output an intensity image and then I'm predicting from that intensity image my latent space representation and I have here the same for the ground truth so now I do the same thing my latent space representation though comes from these predictor networks which go from intensity space to latent space rather", "start_timestamp": "00:29:20", "end_timestamp": "00:29:56", "start_second": 1760, "end_second": 1796, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=1760s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "than from segmentation space to latent space and here's an example of what that does if you use a sort of low resolution image this is your standard super resolution approach out of the box this is an anatomically constrained super resolution and this is what you would end up with if you look at
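The two-term objective just described — standard cross-entropy plus a distance in the label-map autoencoder's latent space — could be sketched roughly like this; the block-averaging `encoder` below is a toy stand-in for the trained shape encoder, and the weight and names are my own:

```python
import numpy as np

def cross_entropy(pred, target, eps=1e-8):
    """Pixel-wise cross-entropy between predicted class probabilities
    and one-hot ground truth labels."""
    return -np.mean(np.sum(target * np.log(pred + eps), axis=-1))

def anatomical_loss(pred, target, encoder, weight=0.1):
    """Cross-entropy plus a latent-space term that penalises segmentations
    whose encoded shape differs from the encoded ground truth."""
    latent = np.mean((encoder(pred) - encoder(target)) ** 2)
    return cross_entropy(pred, target) + weight * latent

# toy stand-in for the trained shape encoder: average-pooling over 8x8 blocks
# (in the real method this is the encoder of an autoencoder trained on label maps)
def encoder(seg):
    h, w, c = seg.shape
    return seg.reshape(8, h // 8, 8, w // 8, c).mean(axis=(1, 3))
```

The latent term is what discourages the biologically implausible shapes mentioned above, since an implausible label map encodes far from any shape seen during training.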
the higher resolution ground truth and here's sort of a movie showing exactly the same thing in a dynamic image sequence so this really works very well and it's actually quite powerful I think really interesting I mentioned to you", "start_timestamp": "00:29:56", "end_timestamp": "00:30:37", "start_second": 1796, "end_second": 1837, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=1796s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "before that one of the things which we quite often face is we have trained our models for example using UK Biobank which is great because there's so much data available you then deploy it in the clinic and it doesn't really work that well and it's mostly due to differences in not only the hardware but also the knobs which people turn when they acquire the images so MRI is great because it's effectively a programmable device but it also means you can produce very different looking images and I already mentioned that to", "start_timestamp": "00:30:37", "end_timestamp": "00:31:12", "start_second": 1837, "end_second": 1872, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=1837s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "you so one of the things which we have sort of played around with is can we use adversarial training to try to make sure that we learn feature representations which are invariant to the data but also which don't require us to have annotations for the test data because that's actually very expensive to do in medical imaging so I guess this problem I don't really need to explain to you in very much detail but you basically end up training your machine on the source domain trying to separate one set of labels", "start_timestamp": 
"00:31:12", "end_timestamp": "00:31:51", "start_second": 1872, "end_second": 1911, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=1872s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "from another set of labels so this might be scanner a where you have training data then when you go to scanner B you actually see that you have this domain shift where the distribution of features which the network has learned changes because the images look slightly different and what we're trying to do here is basically find a way in which we can use adversarial learning to effectively help us where these samples would get misclassified and we instead would like to learn a classifier which is more", "start_timestamp": "00:31:51", "end_timestamp": "00:32:28", "start_second": 1911, "end_second": 1948, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=1911s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "generalizable and there was a very nice paper on how you can do this with neural networks which was published two or three years ago so it's an ancient paper in terms of machine learning but really works quite well where what you try to do is you try to learn a domain classifier which can tell you whether your data comes from domain a or domain B and you try to minimize the accuracy of this domain classifier because if that domain", "start_timestamp": "00:32:28", "end_timestamp": "00:33:08", "start_second": 1948, "end_second": 1988, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=1948s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": 
"https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "classifier does a good job then obviously you haven't learned features which are very domain invariant and the nice thing is for this you only need labels whether your data comes from scanner a or scanner B I don't need annotations for scanner B so here's the approach which we have used we use a neural network which has a fancy name called DeepMedic which was one of the first ones which could actually do proper 3d convolutions and now of course you can do this quite easily it sort of has two different pathways I don't", "start_timestamp": "00:33:08", "end_timestamp": "00:33:42", "start_second": 1988, "end_second": 2022, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=1988s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "really need to explain it in too much detail one high resolution one low resolution pathway it is really designed to spot brain tumors and has been actually quite successful in this so here's an example of what it would actually output in 3d so you can produce your fancy 3d renderings for this so we take this network and if you train this network it will not be domain invariant what we instead do is we add to the normal segmentation pathway a discrimination pathway where you basically take the", "start_timestamp": "00:33:42", "end_timestamp": "00:34:21", "start_second": 2022, "end_second": 2061, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=2022s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "features at the low level the mid level and the high level you put them into your adversarial branch and you
try to prevent this from being a good domain discriminator and then by minimizing the accuracy of that domain discriminator you tend to learn features which are more generalizable so effectively you have two different terms in your cost function one is sort of your normal cross entropy loss and the other one is how well you can discriminate the", "start_timestamp": "00:34:21", "end_timestamp": "00:34:58", "start_second": 2061, "end_second": 2098, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=2061s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "domain but the important thing is this top loss here I can only evaluate for samples from scanner a because that's my training set whereas this domain discriminator I can actually evaluate for samples from scanner a and b because I only need to know whether they come from scanner a or b and that's quite easy to do and it turns out that this actually does really quite well so here's an example of what happens when you don't do this domain adaptation so here actually instead of using data from a different scanner we", "start_timestamp": "00:34:58", "end_timestamp": "00:35:38", "start_second": 2098, "end_second": 2138, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=2098s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "assume that at test time we don't have one of the sequences available so we have to use another imaging sequence and if you don't learn domain invariant features you end up with horrible results in your segmentation where you're supposed to spot a brain tumor which you can probably see here quite nicely and here when you have done the domain adaptation even if you switch one sequence to another you actually do
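The training scheme described here — minimize the segmentation loss while making the domain classifier fail — is commonly implemented with a gradient reversal layer; a rough NumPy sketch under that assumption (not the paper's actual code, names are mine):

```python
import numpy as np

def softmax_xent(logits, labels):
    """Cross-entropy of softmax(logits) against integer class labels,
    returning the loss and its gradient with respect to the logits."""
    z = logits - logits.max(axis=1, keepdims=True)  # stabilised softmax
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    n = len(labels)
    loss = -np.mean(np.log(p[np.arange(n), labels] + 1e-12))
    grad = p.copy()
    grad[np.arange(n), labels] -= 1.0
    return loss, grad / n

def reversed_feature_gradient(dom_grad, alpha=1.0):
    """Gradient reversal: the domain head is updated with +dom_grad, while the
    shared feature extractor receives -alpha * dom_grad, so it learns features
    that make the domain classifier perform badly."""
    return -alpha * dom_grad
```

In effect the segmentation loss is evaluated only on labelled scanner-a samples, while the domain loss uses cheap scanner-a/scanner-b labels from both; the sign flip is what pushes the shared features toward domain invariance.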
quite well good so in the last two minutes I just want to talk about some challenges which we have", "start_timestamp": "00:35:38", "end_timestamp": "00:36:17", "start_second": 2138, "end_second": 2177, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=2138s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "practically faced and I think many of you might face in other scenarios there are many good networks out there for example a U-Net which in medical imaging quite a lot of people use or an FCN they have a lot of meta parameters a lot of different architectures you can choose which influence the behavior and then really at least to us it quite often looks like it's very hard to predict which model will work really well for a given task so one option is to use I guess something you are all very familiar with", "start_timestamp": "00:36:17", "end_timestamp": "00:36:55", "start_second": 2177, "end_second": 2215, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=2177s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "this sort of ensemble of all of these different models and try to be as insensitive as possible and unbiased as possible so here's for example an example when I have a FLAIR image with a brain tumor so you see the core of the tumor in red and the edema in yellow then if I use the same network but I switch here cross-entropy for IoU as a loss function I get very different behavior and for example this intersection over union effectively forces you to make a very hard segmentation and that gives you overly", "start_timestamp": "00:36:55", "end_timestamp": "00:37:32", "start_second": 2215, "end_second": 2252, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=2215s", 
"title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "high confidence values for this so it might not be a very good thing because these things you misclassify with very high confidence okay so the approach which we sort of used is trying to basically approximate the probability distribution which we're really interested in by this model where you have these different meta parameters in there and usually you just pick one meta parameter and then run with it so what we really wanted to do is sort of marginalize out over these meta parameters and try to find a more robust", "start_timestamp": "00:37:32", "end_timestamp": "00:38:10", "start_second": 2252, "end_second": 2290, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=2252s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "way of doing this and so we used different network architectures for example a DeepMedic an FCN a U-Net approach but also we tried out different training loss functions different sampling strategies there are many knobs you can turn and at the MICCAI conference which I guess is sort of the CVPR for those who work in medical imaging they run challenges and this type of approach really was quite successful it won the first prize out of 50 competitor", "start_timestamp": "00:38:10", "end_timestamp": "00:38:52", "start_second": 2290, "end_second": 2332, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=2290s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "s and really it's quite simple because you didn't really need to spend a huge amount
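Marginalizing over meta-parameters by averaging the probability maps of several differently configured models, as described, might look like this minimal sketch (toy data and names of my own):

```python
import numpy as np

def ensemble_probs(prob_maps):
    """Average class-probability maps from models trained with different
    architectures / loss functions / sampling strategies."""
    stacked = np.stack(prob_maps)  # shape (n_models, H, W, n_classes)
    return stacked.mean(axis=0)

def segment(prob):
    """Final label map: most probable class per pixel."""
    return prob.argmax(axis=-1)

# three toy "models" producing valid per-pixel class distributions
rng = np.random.default_rng(0)
maps = [rng.dirichlet(np.ones(3), size=(4, 4)) for _ in range(3)]
avg = ensemble_probs(maps)
labels = segment(avg)
```

Averaging softens the overconfident hard decisions of any single configuration (such as the IoU-trained model mentioned above), which is exactly the robustness the ensemble is after.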
of time engineering the particular approach okay so I just want to sort of summarize in deep learning you've seen a number of really nice papers and literature showing really great progress but there's also quite a lot of hype and quite a lot of discussion about whether we're actually posing the right problems to machine learning in medical imaging and really to make this truly intelligent we", "start_timestamp": "00:38:52", "end_timestamp": "00:39:30", "start_second": 2332, "end_second": 2370, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=2332s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "have to move beyond images somebody told me actually if you want to study medicine and you really hate patients the one thing you should do is become a radiologist right because you normally never have to interact with a patient but even the radiologist will look at non-imaging information and so really there is a lot of data available validation is really challenging and like the previous speaker said I think unless you really work together in teams", "start_timestamp": "00:39:30", "end_timestamp": "00:40:01", "start_second": 2370, "end_second": 2401, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=2370s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "with clinicians and engineers you can't really solve these problems or you might actually end up solving the wrong problem and one thing which I think is exciting something which we're trying to do in the future is at the moment you have these three separate blocks you acquire your data you reconstruct your
data and then somebody tells you what they want to measure and then you do the analysis and you pop out some results but really if all of this I can", "start_timestamp": "00:40:01", "end_timestamp": "00:40:37", "start_second": 2401, "end_second": 2437, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=2401s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "_p8vFSUesNs", "text": "formulate with deep learning then one of the things I am actually very excited about is that I can do end-to-end optimization so if I know what clinical measurements I want to make I can optimize the acquisition the reconstruction and the analysis for exactly that purpose and I think that's a very powerful paradigm especially because these scanners are effectively programmable they are pieces of programmable hardware and you can optimize what they do and of course you can couple it with Big Data and multimodal data so I just want to finish", "start_timestamp": "00:40:37", "end_timestamp": "00:41:16", "start_second": 2437, "end_second": 2476, "url": "https://www.youtube.com/watch?v=_p8vFSUesNs&t=2437s", "title": "Daniel Rueckert: \"Deep learning in medical imaging\"", "thumbnail": "https://i.ytimg.com/vi/_p8vFSUesNs/maxresdefault.jpg"} {"video_id": "Alkm-PJu9To", "text": "morning everybody and welcome to day two of PyCon our first two speakers are Angela and Melody and they are from a major telco and they are data scientists and are going to be talking about anomaly detection using autoencoders hello okay good morning everyone today we'll be sharing a talk about anomaly detection using autoencoders I'm Melody hi I'm Angela and we are both data scientists obviously at a telco and this is also our first time speaking at PyCon yeah so just to take you through some of", "start_timestamp": "00:00:00", "end_timestamp": 
"00:01:05", "start_second": 0, "end_second": 65, "url": "https://www.youtube.com/watch?v=Alkm-PJu9To&t=0s", "title": "Anomaly Detection using Autoencoders", "thumbnail": "https://i.ytimg.com/vi/Alkm-PJu9To/maxresdefault.jpg"} {"video_id": "Alkm-PJu9To", "text": "the content we'll be sharing with you today we'll give you an introduction of auto-encoders how the algorithm actually works a brief history about them thereafter we will give you some architectures different types of popular architectures of autoencoders which may be useful for your use cases also particular use cases that are out there which auto-encoders are good at solving we'll also be sharing some popular Python packages that you could use we'll take you through a Jupyter notebook we will introduce the", "start_timestamp": "00:01:05", "end_timestamp": "00:01:40", "start_second": 65, "end_second": 100, "url": "https://www.youtube.com/watch?v=Alkm-PJu9To&t=65s", "title": "Anomaly Detection using Autoencoders", "thumbnail": "https://i.ytimg.com/vi/Alkm-PJu9To/maxresdefault.jpg"} {"video_id": "Alkm-PJu9To", "text": "notion of fraud anomalies and how to actually implement that then right after we'll have a Qlik Sense visualization to show you how we as data scientists interpret the results as well as for business stakeholders then lastly we'll be sharing key takeaways from our experience implementing this type of problem so how many of you are aware of neural networks I'm sure most of us were there yesterday at Alex's talk so I'm sure you are familiar with convolutional neural networks I mean he went into quite a lot", "start_timestamp": "00:01:40", "end_timestamp": "00:02:19", "start_second": 100, "end_second": 139, "url": "https://www.youtube.com/watch?v=Alkm-PJu9To&t=100s", "title": "Anomaly Detection using Autoencoders", "thumbnail": "https://i.ytimg.com/vi/Alkm-PJu9To/maxresdefault.jpg"} {"video_id": "Alkm-PJu9To", "text": "of details feed-forward neural networks 
recurrent neural networks and all these types are solving for particular problems like computer vision machine translation and so forth auto-encoders are a part of the family of neural networks so yeah so as Melody mentioned before right auto-encoders are a type of neural network whose goal is to determine an output based on a similar input so as you can see right the goal of the input data is to be compressed so that it's in a lower dimensional space such that when the decoder comes along it takes that", "start_timestamp": "00:02:19", "end_timestamp": "00:02:57", "start_second": 139, "end_second": 177, "url": "https://www.youtube.com/watch?v=Alkm-PJu9To&t=139s", "title": "Anomaly Detection using Autoencoders", "thumbnail": "https://i.ytimg.com/vi/Alkm-PJu9To/maxresdefault.jpg"} {"video_id": "Alkm-PJu9To", "text": "learned representation of that data the pattern such that it's able to replicate this learned image of this mushroom so now just to get a bit more in depth in terms of the algorithm of the auto-encoders so it's split up into an encoder and a decoder the encoder is simply just a function of your input and your decoder is a function of your hidden layers now as you can see overall your algorithm is represented by g of f of x is equal to r now you want r to be as close as possible to your input layer so you want that data to be very very", "start_timestamp": "00:02:57", "end_timestamp": "00:03:37", "start_second": 177, "end_second": 217, "url": "https://www.youtube.com/watch?v=Alkm-PJu9To&t=177s", "title": "Anomaly Detection using Autoencoders", "thumbnail": "https://i.ytimg.com/vi/Alkm-PJu9To/maxresdefault.jpg"} {"video_id": "Alkm-PJu9To", "text": "close to each other so and that's exactly why the objective of an auto encoder is to minimize the loss function now what the loss function means is that you want to reduce and minimize the error between your input and your output the way that these neural networks are trained they are trained through back 
propagation and what that means is that it is a recursive process such that it's able to minimize the error between your input and your output and also something just really interesting that I think maybe you might find interesting what I", "start_timestamp": "00:03:37", "end_timestamp": "00:04:08", "start_second": 217, "end_second": 248, "url": "https://www.youtube.com/watch?v=Alkm-PJu9To&t=217s", "title": "Anomaly Detection using Autoencoders", "thumbnail": "https://i.ytimg.com/vi/Alkm-PJu9To/maxresdefault.jpg"} {"video_id": "Alkm-PJu9To", "text": "do so autoencoders have been around for decades now people such as Yann LeCun and Hinton have used it are you all familiar with them I mean ok cool now let's move on to uses of auto-encoders right so the first being dimensionality reduction that means that you take your data you condense it into a lower dimensional space the reason for doing that is so that your data itself can be more easily represented visually and this will really assist before you apply it into a neural network the next example would be denoising of data you can see that", "start_timestamp": "00:04:08", "end_timestamp": "00:04:46", "start_second": 248, "end_second": 286, "url": "https://www.youtube.com/watch?v=Alkm-PJu9To&t=248s", "title": "Anomaly Detection using Autoencoders", "thumbnail": "https://i.ytimg.com/vi/Alkm-PJu9To/maxresdefault.jpg"} {"video_id": "Alkm-PJu9To", "text": "initially these images are very hazy fuzzy you know you can't really see what's going on right now but through the power of auto-encoders what happens is that the noise is removed and it's a crisper image you can see so now a third example is anomaly detection now what anomaly detection is it's basically a technique for identifying patterns within data so patterns that do not follow the norm so for example in autoencoders we have this idea of reconstruction errors so if an observation right it's passed in and it", "start_timestamp": "00:04:46", 
"end_timestamp": "00:05:23", "start_second": 286, "end_second": 323, "url": "https://www.youtube.com/watch?v=Alkm-PJu9To&t=286s", "title": "Anomaly Detection using Autoencoders", "thumbnail": "https://i.ytimg.com/vi/Alkm-PJu9To/maxresdefault.jpg"} {"video_id": "Alkm-PJu9To", "text": "doesn't seem very similar to its input like there's a drastic change there the difference then that would be considered as an outlier hence it would be anomalous so you would see these red images these red dots that's an outlier and lastly we get a view of feature extraction so auto-encoders give you a view of which features in your dataset are useful or not so to take you through some of the different architectures that are out there of auto-encoders a very popular one is the restricted Boltzmann machine and this is actually produced", "start_timestamp": "00:05:23", "end_timestamp": "00:05:58", "start_second": 323, "end_second": 358, "url": "https://www.youtube.com/watch?v=Alkm-PJu9To&t=323s", "title": "Anomaly Detection using Autoencoders", "thumbnail": "https://i.ytimg.com/vi/Alkm-PJu9To/maxresdefault.jpg"} {"video_id": "Alkm-PJu9To", "text": "this particular paper is produced by our beloved Hinton and a restricted Boltzmann machine is basically a two-layer autoencoder so how it works is that it has a visible layer and a hidden layer the visible layer is where our input would come in our variable inputs it would use a combination of that to get into the hidden layer then basically what it's learning is the difference between the hidden layer and the visible layer it uses a metric called KL divergence to measure that difference between the two this particular paper by", "start_timestamp": "00:05:58", "end_timestamp": "00:06:33", "start_second": 358, "end_second": 393, "url": "https://www.youtube.com/watch?v=Alkm-PJu9To&t=358s", "title": "Anomaly Detection using Autoencoders", "thumbnail": "https://i.ytimg.com/vi/Alkm-PJu9To/maxresdefault.jpg"} {"video_id": "Alkm-PJu9To", "text": "Hinton 
I would encourage that you go forward and read it if you want to get into auto-encoders it's basically how he used restricted Boltzmann machines and auto-encoders for dimensionality reduction and he actually compares this with PCA and the results that he gets is that with autoencoders he's able to reduce dimensions of nonlinear types of data so the results that he gets the patterns that he uncovers were much better than he got for PCA within the field of autoencoders there's two different types of popular", "start_timestamp": "00:06:33", "end_timestamp": "00:07:13", "start_second": 393, "end_second": 433, "url": "https://www.youtube.com/watch?v=Alkm-PJu9To&t=393s", "title": "Anomaly Detection using Autoencoders", "thumbnail": "https://i.ytimg.com/vi/Alkm-PJu9To/maxresdefault.jpg"} {"video_id": "Alkm-PJu9To", "text": "architectures there's undercomplete and overcomplete so what Angela has just described to you is an undercomplete architecture so remember what she said is that we're trying to find the underlying pattern within our input but to do that what we need to do is to ensure that the neurons within our hidden layer are less than the neurons within our input layer to ensure that whatever our reconstruction of our output is it's not a direct copy of the input then it didn't learn the underlying pattern it needs to be a", "start_timestamp": "00:07:13", "end_timestamp": "00:07:46", "start_second": 433, "end_second": 466, "url": "https://www.youtube.com/watch?v=Alkm-PJu9To&t=433s", "title": "Anomaly Detection using Autoencoders", "thumbnail": "https://i.ytimg.com/vi/Alkm-PJu9To/maxresdefault.jpg"} {"video_id": "Alkm-PJu9To", "text": "pattern that's how we ensure that so that is undercomplete so that's usually for most use cases that's how we implement an autoencoder then we have the overcomplete architecture and there's three different types of them sparse denoising and contractive so how many of you are familiar with 
regularization in neural networks okay a few of you so within neural networks um what we usually do is that if we find that our neural network is overfitting what we sometimes do one technique to lessen that is", "start_timestamp": "00:07:46", "end_timestamp": "00:08:31", "start_second": 466, "end_second": 511, "url": "https://www.youtube.com/watch?v=Alkm-PJu9To&t=466s", "title": "Anomaly Detection using Autoencoders", "thumbnail": "https://i.ytimg.com/vi/Alkm-PJu9To/maxresdefault.jpg"} {"video_id": "Alkm-PJu9To", "text": "to put in a regularizer which means it's to penalize the variables within our weights right but with an autoencoder the sparse autoencoder uses a regularizer but it regularizes the activation functions before getting into the hidden layer as well that is to say that you could have an architecture of any type however some of the activation functions are not being initialized that means not all of the inputs would have been necessarily used so if you build your autoencoder and you", "start_timestamp": "00:08:31", "end_timestamp": "00:09:06", "start_second": 511, "end_second": 546, "url": "https://www.youtube.com/watch?v=Alkm-PJu9To&t=511s", "title": "Anomaly Detection using Autoencoders", "thumbnail": "https://i.ytimg.com/vi/Alkm-PJu9To/maxresdefault.jpg"} {"video_id": "Alkm-PJu9To", "text": "are like oh my gosh I'm still not finding that underlying pattern there's a lot of noise in my data this is a good technique that you could use another problem that occurs when implementing an autoencoder is that you get the exact copy like it's so annoying but what you can do if you have such a problem is use the denoising so what denoising does is that you add noise to your input layer and then you use the same undercomplete architecture of an autoencoder so it helps quite a lot if your reconstruction layer is exactly", "start_timestamp": "00:09:06", "end_timestamp": 
"00:09:41", "start_second": 546, "end_second": 581, "url": "https://www.youtube.com/watch?v=Alkm-PJu9To&t=546s", "title": "Anomaly Detection using Autoencoders", "thumbnail": "https://i.ytimg.com/vi/Alkm-PJu9To/maxresdefault.jpg"} {"video_id": "Alkm-PJu9To", "text": "like your input the contractive is similar to denoising the problem with adding noise to an input is that you really don't know how much noise you should add in so what contractive does is that in your activation functions it finds the derivative of each activation function so it reduces the sensitivity to the inputs so what that entails is that it's more robust to noise so the more noise you have in your input because of those derivatives it's easier to learn that particular inherent pattern so we have many Python libraries", "start_timestamp": "00:09:41", "end_timestamp": "00:10:19", "start_second": 581, "end_second": 619, "url": "https://www.youtube.com/watch?v=Alkm-PJu9To&t=581s", "title": "Anomaly Detection using Autoencoders", "thumbnail": "https://i.ytimg.com/vi/Alkm-PJu9To/maxresdefault.jpg"} {"video_id": "Alkm-PJu9To", "text": "available to us if you are interested in building your very own auto-encoders so the first being Keras which is basically just an abstraction level that sits on top of tensorflow then we have PyTorch and then we all know scikit-learn I'm sure so the very well known scikit-learn and then we have h2o but for our purposes for today we'll be showcasing h2o alright so now we've reached the stage of the Jupyter notebook but before we begin there I just want to ask you all a question so who of you have experienced any", "start_timestamp": "00:10:19", "end_timestamp": "00:10:51", "start_second": 619, "end_second": 651, "url": "https://www.youtube.com/watch?v=Alkm-PJu9To&t=619s", "title": "Anomaly Detection using Autoencoders", "thumbnail": "https://i.ytimg.com/vi/Alkm-PJu9To/maxresdefault.jpg"} {"video_id": "Alkm-PJu9To", "text": "fraudulent acts in your life just 
raise your hands cool so that seems like quite a few of you right now imagine like within industry as well they must be experiencing vast amounts of fraudulent activities that happen to them on a daily basis for instance we could look at within the banking sector we're all very familiar with the tap-and-go system right so now imagine if a card is being tapped 200 times on the same day isn't that a huge red flag like someone's clearly taking your money unless you really like shopping a lot", "start_timestamp": "00:10:51", "end_timestamp": "00:11:23", "start_second": 651, "end_second": 683, "url": "https://www.youtube.com/watch?v=Alkm-PJu9To&t=651s", "title": "Anomaly Detection using Autoencoders", "thumbnail": "https://i.ytimg.com/vi/Alkm-PJu9To/maxresdefault.jpg"} {"video_id": "Alkm-PJu9To", "text": "LMO sorry and then in telecoms we get fraud in cases like SIM swap fraud or delivery fraud so for instance right it's your customer information however the product that's being delivered to you is not sent to your address but it's sent to an address that's who knows 200 kilometers away from where you stay once again yet another red flag right and then in the retail space you can get fraudulent acts like stocktaking or online purchases yeah so an example of an actual fraud case that has happened was what's called the Japan ATM", "start_timestamp": "00:11:23", "end_timestamp": "00:12:01", "start_second": 683, "end_second": 721, "url": "https://www.youtube.com/watch?v=Alkm-PJu9To&t=683s", "title": "Anomaly Detection using Autoencoders", "thumbnail": "https://i.ytimg.com/vi/Alkm-PJu9To/maxresdefault.jpg"} {"video_id": "Alkm-PJu9To", "text": "scam this affected the Standard Bank that we know though it happened within Japan so what these fraudsters did it's like for real Ocean's Eleven it's suspected that around a hundred people according to the article went to various ATMs within Japan and started taking 
out cash one of the banks that were affected within South Africa was Standard Bank and Standard Bank lost 295 million rand from this particular activity they did this in under three hours now on to the solutions so we would", "start_timestamp": "00:12:01", "end_timestamp": "00:12:42", "start_second": 721, "end_second": 762, "url": "https://www.youtube.com/watch?v=Alkm-PJu9To&t=721s", "title": "Anomaly Detection using Autoencoders", "thumbnail": "https://i.ytimg.com/vi/Alkm-PJu9To/maxresdefault.jpg"} {"video_id": "Alkm-PJu9To", "text": "if I was the CEO of Standard Bank I'd definitely be like okay you fooled me once I definitely wouldn't want criminals stealing from me the exact same way again right so we find such an emergent type of fraud that occurs and a business gets scared so what we do to reduce that is we'd have either a supervised learning model or rules so that if let's say in the first month we get such a big spike of fraud in the next month we want to reduce that so we would combat that but definitely those guys who stole that", "start_timestamp": "00:12:42", "end_timestamp": "00:13:16", "start_second": 762, "end_second": 796, "url": "https://www.youtube.com/watch?v=Alkm-PJu9To&t=762s", "title": "Anomaly Detection using Autoencoders", "thumbnail": "https://i.ytimg.com/vi/Alkm-PJu9To/maxresdefault.jpg"} {"video_id": "Alkm-PJu9To", "text": "money I'm sure they have a new creative way of stealing from a different type of company or Standard Bank and so what you want to do within an organization is to try to combat that emergent type of fraud so you can have usual fraud cases but also new types of fraud and if we have an actual algorithm that does work well let's say it's 70% accurate maybe 50% of that money could have been saved cool so just like Melody was mentioning you have like the whole idea of emerging fraud and then like rule-based fraud", "start_timestamp": "00:13:16", "end_timestamp": "00:13:56", "start_second": 796, 
"end_second": 836, "url": "https://www.youtube.com/watch?v=Alkm-PJu9To&t=796s", "title": "Anomaly Detection using Autoencoders", "thumbnail": "https://i.ytimg.com/vi/Alkm-PJu9To/maxresdefault.jpg"} {"video_id": "Alkm-PJu9To", "text": "so banks if they really know the kind of fraud that is happening right now there's just these rule-based systems that'll combat it so I just want to explain the concept behind anomalies versus fraud so as you can see in this Venn diagram right something that is anomalous does not necessarily mean that it's fraudulent but something that's fraudulent may mean that it's anomalous cool so now I just want to ask you guys as well have a look at this table what stands out to you what is the anomaly yay you get a chocolate cool so", "start_timestamp": "00:13:56", "end_timestamp": "00:14:37", "start_second": 836, "end_second": 877, "url": "https://www.youtube.com/watch?v=Alkm-PJu9To&t=836s", "title": "Anomaly Detection using Autoencoders", "thumbnail": "https://i.ytimg.com/vi/Alkm-PJu9To/maxresdefault.jpg"} {"video_id": "Alkm-PJu9To", "text": "fantastic but now if we think about this right imagine in a real-life situation we're not only just looking at six rows now we're looking at ten million rows and we want to cater for real-time situations in real time are we able to identify the anomalies in the data set and not only will we just have password change occurrence as a variable we'll have millions more so cool that's where anomaly detection using autoencoders can play a role so now we move into the Kaggle data set apologies for spelling we are data scientists not", "start_timestamp": "00:14:37", "end_timestamp": "00:15:13", "start_second": 877, "end_second": 913, "url": "https://www.youtube.com/watch?v=Alkm-PJu9To&t=877s", "title": "Anomaly Detection using Autoencoders", "thumbnail": "https://i.ytimg.com/vi/Alkm-PJu9To/maxresdefault.jpg"} {"video_id": "Alkm-PJu9To", "text": "English teachers cool right so the Kaggle 
data set is just the data set which I'm sure you're all quite familiar with it's called the credit card data set and it's based on transactions of customers so as we begin you can see that this data set is highly imbalanced as you can see there are very few fraudulent cases which make up 0.17% of the data set now for a machine learning algorithm to learn such a thing it makes it really difficult so cool but we'll explain how to combat that later on and then we read in our normal imports so", "start_timestamp": "00:15:13", "end_timestamp": "00:15:52", "start_second": 913, "end_second": 952, "url": "https://www.youtube.com/watch?v=Alkm-PJu9To&t=913s", "title": "Anomaly Detection using Autoencoders", "thumbnail": "https://i.ytimg.com/vi/Alkm-PJu9To/maxresdefault.jpg"} {"video_id": "Alkm-PJu9To", "text": "because we were speaking to h2o we'll be using the h2o deep learning estimator library we read in our normal packages we then begin by initiating your Spark context and your h2o context followed by now this is where the fun begins right we read in our data set using Spark we transform that Spark data set into h2o because remember we're working with h2o models right now you can't pass a Spark data frame into an h2o library so hence you need to convert it then over here we defined our features list now because this is an online data set it's", "start_timestamp": "00:15:52", "end_timestamp": "00:16:32", "start_second": 952, "end_second": 992, "url": "https://www.youtube.com/watch?v=Alkm-PJu9To&t=952s", "title": "Anomaly Detection using Autoencoders", "thumbnail": "https://i.ytimg.com/vi/Alkm-PJu9To/maxresdefault.jpg"} {"video_id": "Alkm-PJu9To", "text": "anonymized but in real case situations these features could represent things like maybe the number of times you've withdrawn from an ATM is your card linked to the app how often are you in overdraft you know just like those kinds of features then you take your data set and you split it into a train and test set and then 
remember before we were showing you how the data itself was highly imbalanced so in order to combat that right you train the model on what looks like normal what is considered normal you train that so that", "start_timestamp": "00:16:32", "end_timestamp": "00:17:07", "start_second": 992, "end_second": 1027, "url": "https://www.youtube.com/watch?v=Alkm-PJu9To&t=992s", "title": "Anomaly Detection using Autoencoders", "thumbnail": "https://i.ytimg.com/vi/Alkm-PJu9To/maxresdefault.jpg"} {"video_id": "Alkm-PJu9To", "text": "the model learns so that when it's given unseen data and it picks up patterns that don't follow what it learned then it will flag that as an anomaly cool so now we begin with defining our h2o deep learning estimator we pass it a variety of parameters I'll just go through a few so the one being the model ID which is purely just the name of your autoencoder so when you do save it for reuse later on you can reference it an activation function of tanh and a few hidden layers then you train your model over here now you save your model cool", "start_timestamp": "00:17:07", "end_timestamp": "00:17:45", "start_second": 1027, "end_second": 1065, "url": "https://www.youtube.com/watch?v=Alkm-PJu9To&t=1027s", "title": "Anomaly Detection using Autoencoders", "thumbnail": "https://i.ytimg.com/vi/Alkm-PJu9To/maxresdefault.jpg"} {"video_id": "Alkm-PJu9To", "text": "so now that we saved our model we want to reload it now that we've reloaded the model this is where the fun begins this is where we actually identify anomalous behavior so we apply it to a testing set and we produce these reconstruction errors now if you remember these reconstruction errors are how different is the output from the input so as you can see cool this is the overall reconstruction error but now what if you are interested in identifying the reconstruction errors per feature we can view that over here", "start_timestamp": "00:17:45", "end_timestamp": "00:18:17", 
"start_second": 1065, "end_second": 1097, "url": "https://www.youtube.com/watch?v=Alkm-PJu9To&t=1065s", "title": "Anomaly Detection using Autoencoders", "thumbnail": "https://i.ytimg.com/vi/Alkm-PJu9To/maxresdefault.jpg"} {"video_id": "Alkm-PJu9To", "text": "so this over here will show you the reconstruction error per feature so it's just to show you and give you a sense of which feature contributed more to a particular observation for a customer yeah and then if you are interested you know after this presentation you can go home and you can build your own auto-encoders you can visit the kaggle.com website and you can get this data set and so just a recap right of reconstruction errors in terms of like a real-life situation with this image right the input data would be your", "start_timestamp": "00:18:17", "end_timestamp": "00:18:52", "start_second": 1097, "end_second": 1132, "url": "https://www.youtube.com/watch?v=Alkm-PJu9To&t=1097s", "title": "Anomaly Detection using Autoencoders", "thumbnail": "https://i.ytimg.com/vi/Alkm-PJu9To/maxresdefault.jpg"} {"video_id": "Alkm-PJu9To", "text": "pixels and then the output would be the reconstruction errors without the noise cool so I hope that clarifies reconstruction errors all right so now we're going to get into showing you the Qlik Sense dashboard that we've built that we as data scientists and business stakeholders may be interested in might be a bit tricky clicking and holding the mic so as you can see within this dashboard what you see over here is what's down in red is the normal pattern that the algorithm caught and what we have above in blue is the", "start_timestamp": "00:18:52", "end_timestamp": "00:19:37", "start_second": 1132, "end_second": 1177, "url": "https://www.youtube.com/watch?v=Alkm-PJu9To&t=1132s", "title": "Anomaly Detection using Autoencoders", "thumbnail": "https://i.ytimg.com/vi/Alkm-PJu9To/maxresdefault.jpg"} {"video_id": "Alkm-PJu9To", "text": "anomalies so how we picked 
the anomalies was we picked a threshold of 0.01 and we picked that threshold just based on what we saw from the particular diagram so the amount of anomalies we caught is a hundred and five anomalies right so let's say I want to check as a data scientist okay how much was actual fraud and how much my prediction got as you can see if I close predictions you can see that my anomaly detection model picked up 69 fraudulent cases out of 89 if I want to check as", "start_timestamp": "00:19:37", "end_timestamp": "00:20:29", "start_second": 1177, "end_second": 1229, "url": "https://www.youtube.com/watch?v=Alkm-PJu9To&t=1177s", "title": "Anomaly Detection using Autoencoders", "thumbnail": "https://i.ytimg.com/vi/Alkm-PJu9To/maxresdefault.jpg"} {"video_id": "Alkm-PJu9To", "text": "the data scientist how many fraud cases did my predictor not get so out of all the fraudulent cases the anomaly detection model didn't pick up 20 fraudulent cases so it's not a bad model it's pretty neat for a fraud project well as you can see here we have 0.16 percent of fraudulent cases and there's more anomalies that we found which goes to the results that we have what you see here is actually the reconstruction errors that Angela described for each and every variable so from a fraud analyst's point of view the", "start_timestamp": "00:20:29", "end_timestamp": "00:21:12", "start_second": 1229, "end_second": 1272, "url": "https://www.youtube.com/watch?v=Alkm-PJu9To&t=1229s", "title": "Anomaly Detection using Autoencoders", "thumbnail": "https://i.ytimg.com/vi/Alkm-PJu9To/maxresdefault.jpg"} {"video_id": "Alkm-PJu9To", "text": "first one might be number of password changes our initial example so the fraud analyst would see that okay these are the variables that are most impactful for different types of fraud if the fraud analyst wants to see for a particular customer customer one seven zero five they experienced fraud we 
picked up that fraud they would see that these are the variables that actually impacted that particular customer what we added to the dataset Angela added we added places we just added that randomly", "start_timestamp": "00:21:12", "end_timestamp": "00:21:51", "start_second": 1272, "end_second": 1311, "url": "https://www.youtube.com/watch?v=Alkm-PJu9To&t=1272s", "title": "Anomaly Detection using Autoencoders", "thumbnail": "https://i.ytimg.com/vi/Alkm-PJu9To/maxresdefault.jpg"} {"video_id": "Alkm-PJu9To", "text": "but usually with a project you want to know maybe a particular area that is experiencing fraud and by how much and see what variables are impacting there so that you're able to contact the customer and help them accordingly okay so just to share with you some key takeaways that we have from building these models at scale the first thing is the interpretability so we showed you how to interpret for this particular model and what does happen is that if you have quite a lot of features it can be difficult for a fraud", "start_timestamp": "00:21:51", "end_timestamp": "00:22:37", "start_second": 1311, "end_second": 1357, "url": "https://www.youtube.com/watch?v=Alkm-PJu9To&t=1311s", "title": "Anomaly Detection using Autoencoders", "thumbnail": "https://i.ytimg.com/vi/Alkm-PJu9To/maxresdefault.jpg"} {"video_id": "Alkm-PJu9To", "text": "analyst or whoever the business stakeholder is to be able to interpret why this particular case is fraud so that's a common problem that we have another problem and maybe this is a general machine learning problem is that if there isn't an underlying pattern in your data then the autoencoder won't do anything for it so if that's the case you could think about building more features that may assist you to get a particular pattern then when it comes to maintainability when we build it at scale a big reason why we chose h2o", "start_timestamp": "00:22:37", "end_timestamp": "00:23:15", "start_second": 1357, 
"end_second": 1395, "url": "https://www.youtube.com/watch?v=Alkm-PJu9To&t=1357s", "title": "Anomaly Detection using Autoencoders", "thumbnail": "https://i.ytimg.com/vi/Alkm-PJu9To/maxresdefault.jpg"} {"video_id": "Alkm-PJu9To", "text": "is because we build models at scale with that so if you want to build it you can use the Kaggle data set but we also chose it because of that then just the difference with k-means and an autoencoder for an anomaly detection problem so we have used k-means before and you'd find the distance between the cluster centroid and the observation does show anomalies but maintaining that code so when you have to retrain your model with new data your cluster centroids change your clusters change inevitably what you are trying to find", "start_timestamp": "00:23:15", "end_timestamp": "00:23:54", "start_second": 1395, "end_second": 1434, "url": "https://www.youtube.com/watch?v=Alkm-PJu9To&t=1395s", "title": "Anomaly Detection using Autoencoders", "thumbnail": "https://i.ytimg.com/vi/Alkm-PJu9To/maxresdefault.jpg"} {"video_id": "Alkm-PJu9To", "text": "in anomalies tends to change but with an autoencoder it is much more consistent then with a threshold this goes to capacity so you saw we detected 105 anomalies when you are working at scale with much more data it might be 10,000 anomalies now sending 10,000 anomalies to an actual business for them to work through might be a bit difficult so they might not have the capacity to do that so picking a threshold usually you have to work with the business area to understand what threshold is best suited for them and then lastly a", "start_timestamp": "00:23:54", "end_timestamp": "00:24:30", "start_second": 1434, "end_second": 1470, "url": "https://www.youtube.com/watch?v=Alkm-PJu9To&t=1434s", "title": "Anomaly Detection using Autoencoders", "thumbnail": "https://i.ytimg.com/vi/Alkm-PJu9To/maxresdefault.jpg"} {"video_id": "Alkm-PJu9To", "text": "feedback loop we would want over time to know that what we pick 
up as anomalies was actual fraudulent behavior and sometimes getting that feedback loop is difficult so just to really like sum it up right so concerned parent if all your friends jumped off a bridge would you follow them machine learning algorithm yes so basically all in all just want to sum it up to what we spoke about today just because a model may say that something is anomalous at the end of the day you also need to check does it make sense to the use case does that make", "start_timestamp": "00:24:30", "end_timestamp": "00:25:12", "start_second": 1470, "end_second": 1512, "url": "https://www.youtube.com/watch?v=Alkm-PJu9To&t=1470s", "title": "Anomaly Detection using Autoencoders", "thumbnail": "https://i.ytimg.com/vi/Alkm-PJu9To/maxresdefault.jpg"} {"video_id": "Alkm-PJu9To", "text": "sense to the stakeholders so don't just listen to the machine learning algorithm there's still more to it you still need to bring in the human side and to understand that it makes sense with the business side yeah I mean thank you so much for listening thanks ladies the presentation was great with regards to the reconstruction error have you experienced any cases where you've got like really high variances in the range of your reconstruction errors and then like if you have what approaches have you taken to like scale those or have", "start_timestamp": "00:25:12", "end_timestamp": "00:25:57", "start_second": 1512, "end_second": 1557, "url": "https://www.youtube.com/watch?v=Alkm-PJu9To&t=1512s", "title": "Anomaly Detection using Autoencoders", "thumbnail": "https://i.ytimg.com/vi/Alkm-PJu9To/maxresdefault.jpg"} {"video_id": "Alkm-PJu9To", "text": "you worked with them just as you know so let's say you've got a reconstruction error of 0.05 on one observation and then you got on another observation a reconstruction error of I don't know like a hundred right so in that case like have you experienced that and if you have like have you dealt with any sort of like normalization of
step or standardization of the reconstruction error the one that is higher is the one we are looking for it's noise it's the anomaly that we're trying to find but so yeah we haven't dealt with that yeah even with", "start_timestamp": "00:25:57", "end_timestamp": "00:26:48", "start_second": 1557, "end_second": 1608, "url": "https://www.youtube.com/watch?v=Alkm-PJu9To&t=1557s", "title": "Anomaly Detection using Autoencoders", "thumbnail": "https://i.ytimg.com/vi/Alkm-PJu9To/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "the following is a conversation with Jurgen Schmidhuber he's the co-director of the IDSIA lab and a co-creator of long short-term memory networks LSTMs are used in billions of devices today for speech recognition translation and much more over 30 years he has proposed a lot of interesting out-of-the-box ideas in meta learning adversarial networks computer vision and even a formal theory of quote creativity curiosity and fun this conversation is part of the MIT course on artificial general intelligence and the artificial", "start_timestamp": "00:00:00", "end_timestamp": "00:00:37", "start_second": 0, "end_second": 37, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=0s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "intelligence podcast if you enjoy it subscribe on youtube itunes or simply connect with me on twitter at Lex Fridman spelled F R I D and now here's my conversation with Jurgen Schmidhuber early on you dreamed of AI systems that self-improve recursively when was that dream born when I was a baby no it's not true I mean I was a teenager and what was the catalyst for that birth what was the thing that first inspired you when I was a boy I was thinking about what to do in my life and then I thought the most exciting thing", "start_timestamp": "00:00:37", "end_timestamp":
"00:01:23", "start_second": 37, "end_second": 83, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=37s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "is to solve the riddles of the universe and that means you have to become a physicist however then I realized that there's something even grander you can try to build a machine that isn't really a machine any longer that learns to become a much better physicist than I could ever hope to be and that's how I thought maybe I can multiply my tiny little bit of creativity into infinity but ultimately that creativity will be multiplied to understand the universe around us that's the curiosity for that mystery that", "start_timestamp": "00:01:23", "end_timestamp": "00:02:04", "start_second": 83, "end_second": 124, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=83s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "drove you yes so if you can build a machine that learns to solve more and more complex problems and more and more general problems then you basically have solved all the problems at least all the solvable problems so how do you think what does the mechanism for that kind of general solver look like obviously we don't quite yet have one or know how to build one but you have ideas and you have had throughout your career several ideas about it so how do you think about that mechanism so in the 80s I thought about how to build this", "start_timestamp": "00:02:04", "end_timestamp": "00:02:48", "start_second": 124, "end_second": 168, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=124s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman
Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "machine that learns to solve all these problems I cannot solve myself and I thought it is clear that has to be a machine that not only learns to solve this problem here and that problem there but it also has to learn to improve the learning algorithm itself so it has to have the learning algorithm in a representation that allows it to inspect it and modify it such that it can come up with a better learning algorithm so I call that meta learning learning to learn and recursive self-improvement that is really the pinnacle of that where you then not only", "start_timestamp": "00:02:48", "end_timestamp": "00:03:31", "start_second": 168, "end_second": 211, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=168s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "learn how to improve on that problem and on that but you also improve the way the machine improves and you also improve the way it improves the way it improves itself and that was my 1987 diploma thesis which was all about that hierarchy of meta-learners that has no computational limits except for the well known limits that Godel identified in 1931 and for the limits of physics in the recent years meta learning has gained popularity in a specific kind of form you've talked about how that's not really meta learning with", "start_timestamp": "00:03:31", "end_timestamp": "00:04:16", "start_second": 211, "end_second": 256, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=211s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "neural networks that's more like basic transfer learning can
you talk about the difference between the big general meta learning and a more narrow sense of meta learning the way it's used today the way it's talked about today let's take the example of a deep neural network that has learned to classify images and maybe you have trained that network on 100 different databases of images and now a new database comes along and you want to quickly learn the new thing as well so one simple way of doing that is you take the network which already knows 100", "start_timestamp": "00:04:16", "end_timestamp": "00:04:59", "start_second": 256, "end_second": 299, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=256s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "types of databases and then you would just take the top layer of that and you retrain that using the new labeled data that you have in the new image database and then it turns out that it really really quickly can learn that too one shot basically because from the first 100 data sets it already has learned so much about computer vision that it can reuse that and that is then almost good enough to solve the new task except you need a little bit of adjustment on the top so that is transfer learning and it has", "start_timestamp": "00:04:59", "end_timestamp": "00:05:41", "start_second": 299, "end_second": 341, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=299s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "been done in principle for many decades people have done similar things for decades meta-learning true meta-learning is about having the learning algorithm itself open to introspection by the system that is using it and also open to modification such
that the learning system has an opportunity to modify any part of the learning algorithm and then evaluate the consequences of that modification and then learn from that to create a better learning algorithm and so on recursively so that's a very different animal where", "start_timestamp": "00:05:41", "end_timestamp": "00:06:28", "start_second": 341, "end_second": 388, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=341s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "you are opening the space of possible learning algorithms to the learning system itself right so like in this 2004 paper you described Godel machines programs that rewrite themselves yeah right philosophically and even in your paper mathematically these are really compelling ideas but practically do you see these self referential programs being successful in the near term and having an impact where it sort of demonstrates to the world that this direction is a good one to pursue in the near term yes we had these", "start_timestamp": "00:06:28", "end_timestamp": "00:07:10", "start_second": 388, "end_second": 430, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=388s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "two different types of fundamental research how to build a universal problem solver one basically exploiting proof search and things like that that you need to come up with asymptotically optimal theoretically optimal self-improvers and problem solvers however one has to admit that with this proof search comes an additive constant an overhead an additive overhead that vanishes in comparison to what you have to do to solve large
problems however for many of the small problems that we want to solve in our", "start_timestamp": "00:07:10", "end_timestamp": "00:07:59", "start_second": 430, "end_second": 479, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=430s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "everyday life we cannot ignore this constant overhead and that's why we also have been doing other things non universal things such as recurrent neural networks which are trained by gradient descent and local search techniques which aren't universal at all which aren't provably optimal at all like the other stuff that we did but which are much more practical as long as we only want to solve the small problems that we are typically trying to solve in this environment here yes so the universal problem solvers like the Godel machine", "start_timestamp": "00:07:59", "end_timestamp": "00:08:38", "start_second": 479, "end_second": 518, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=479s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "but also Marcus Hutter's fastest way of solving all possible problems which he developed around 2002 in my lab they are associated with these constant overheads for proof search which guarantees that the thing that you're doing is optimal for example there is this fastest way of solving all problems with a computable solution which is due to Marcus Hutter and to explain what's going on there let's take traveling salesman problems with traveling salesman problems you have a number of cities n cities and you try", "start_timestamp": "00:08:38", "end_timestamp": "00:09:21", "start_second": 518, "end_second": 561, "url":
"https://www.youtube.com/watch?v=3FIo6evmweo&t=518s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "to find the shortest path through all these cities without visiting any city twice and nobody knows the fastest way of solving traveling salesman problems TSPs but let's assume there is a method of solving them within n to the 5 operations where n is the number of cities then the universal method of Marcus is going to solve the same traveling salesman problem also within n to the 5 steps plus o of 1 plus a constant number of steps that you need for the proof searcher which you need to show that this particular class", "start_timestamp": "00:09:21", "end_timestamp": "00:10:13", "start_second": 561, "end_second": 613, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=561s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "of problems the traveling salesman problems can be solved within a certain time bound within order n to the 5 steps basically and this additive constant doesn't depend on n which means as n is getting larger and larger as you have more and more cities the constant overhead pales in comparison and that means that almost all large problems are solved in the best possible way already today we already have a universal problem solver like that however it's not practical because the overhead the constant overhead is so large that for", "start_timestamp": "00:10:13", "end_timestamp": "00:10:58", "start_second": 613, "end_second": 658, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=613s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail":
"https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "the small kinds of problems that we want to solve in this little biosphere by the way when you say small you're talking about things that fall within the constraints of our computational systems so they can seem quite large to us mere humans right that's right yeah so they seem large and even unsolvable in a practical sense today but they are still small compared to almost all problems because almost all problems are large problems which are much larger than any constant do you find it useful as a person who has dreamed of creating a", "start_timestamp": "00:10:58", "end_timestamp": "00:11:36", "start_second": 658, "end_second": 696, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=658s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "general learning system has worked on creating one and has had a lot of interesting ideas there to think about P versus NP this formalization of how hard problems are how they scale this kind of worst-case analysis type of thinking do you find that useful or is it just a set of mathematical techniques to give you intuition about what's good and bad mm-hmm so P versus NP that's super interesting from a theoretical point of view and in fact as you are thinking about that problem you can also get", "start_timestamp": "00:11:36", "end_timestamp": "00:12:15", "start_second": 696, "end_second": 735, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=696s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "inspiration for better practical problem solvers on the other hand we have to admit that at the moment
the best practical problem solvers for all kinds of problems that we are now solving through what is called AI at the moment they are not of the kind that is inspired by these questions you know there we are using general-purpose computers such as recurrent neural networks but we have a search technique which is just local search gradient descent to try to find a program that is running on these recurrent networks such", "start_timestamp": "00:12:15", "end_timestamp": "00:12:54", "start_second": 735, "end_second": 774, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=735s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "that it can solve some interesting problems such as speech recognition machine translation and something like that and there is very little theory behind the best solutions that we have at the moment that can do that do you think that needs to change do you think that will change or can we create general intelligence systems without ever really proving that that system is intelligent in some kind of mathematical way solving machine translation perfectly or something like that within some kind of syntactic definition", "start_timestamp": "00:12:54", "end_timestamp": "00:13:28", "start_second": 774, "end_second": 808, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=774s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "of a language or can we just be super impressed by the thing working extremely well and that's sufficient there's an old saying and I don't know who brought it up first which says there's nothing more practical than a good theory and a good theory of problem-solving under limited resources like
here in this universe or on this little planet has to take into account these limited resources and so probably we are lacking a theory which is related to what we already have these asymptotically optimal", "start_timestamp": "00:13:28", "end_timestamp": "00:14:12", "start_second": 808, "end_second": 852, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=808s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "problem solvers which tells us what we need in addition to that to come up with a practically optimal problem solver so I believe we will have something like that and maybe just a few little tiny twists are necessary to change what we already have to come up with that as well as long as we don't have that we admit that we are taking suboptimal ways and we can use things like long short-term memory equipped with local search techniques and we are happy that it works better than any competing method", "start_timestamp": "00:14:12", "end_timestamp": "00:14:54", "start_second": 852, "end_second": 894, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=852s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "but that doesn't mean that we think we are done you've said that an AGI system will ultimately be a simple one a general intelligence system will ultimately be a simple one maybe a pseudocode of a few lines to be able to describe it can you talk through your intuition behind this idea why you feel that at its core intelligence is a simple algorithm experience tells us that the stuff that works best is really simple so these asymptotically optimal ways of solving problems if you look at them they are just a few lines of code it's
really", "start_timestamp": "00:14:54", "end_timestamp": "00:15:40", "start_second": 894, "end_second": 940, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=894s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "true although they have these amazing properties just a few lines of code then the most promising and most useful practical things maybe don't have this proof of optimality associated with them however they are also just a few lines of code the most successful recurrent neural networks you can write them down in five lines of pseudocode that's a beautiful almost poetic idea but what you're describing there is that the lines of pseudocode are sitting on top of layers and layers of abstractions in a sense hmm so you're", "start_timestamp": "00:15:40", "end_timestamp": "00:16:22", "start_second": 940, "end_second": 982, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=940s", "title": "Juergen Schmidhuber: Godel
Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "saying at the very top it'll be a beautifully written sort of algorithm but do you think that there's many layers of abstractions we have to first learn to construct yeah of course we are building on all these great abstractions that people have invented over the millennia such as matrix multiplications and real numbers and basic arithmetic and calculus and derivatives of error functions and stuff like that so without that language that greatly simplifies our way of thinking about", "start_timestamp": "00:16:22", "end_timestamp": "00:17:12", "start_second": 982, "end_second": 1032, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=982s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "these problems we couldn't do anything so in that sense as always we are standing on the shoulders of the giants who in the past simplified the problem of problem solving so much that now we have a chance to do the final step and the final step will be a simple one if you take a step back through all of human civilization and just the universe how do you think about evolution and what if creating a universe is required to achieve this final step what if going through the very painful and inefficient process of evolution is", "start_timestamp": "00:17:12", "end_timestamp": "00:17:52", "start_second": 1032, "end_second": 1072, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=1032s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "needed to come up with this set of abstractions that ultimately led to intelligence do you think there's a shortcut or do you think we have to create something like our universe in order to create something like human level intelligence hmm so far the only example we have is this one this universe and we are part of this whole process right so apparently so it might be the key is that the code that runs the universe is really really simple everything points to that possibility because gravity and", "start_timestamp": "00:17:52", "end_timestamp": "00:18:38", "start_second": 1072, "end_second": 1118, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=1072s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "other
basic forces are really simple laws that can be easily described also in just a few lines of code basically and then there are these other events the apparently random events in the history of the universe which as far as we know at the moment don't have a compact code but who knows maybe somebody in the near future is going to figure out the pseudo-random generator which is computing whether the measurement of that spin up or down thing here is going to be positive or negative underlying quantum mechanics", "start_timestamp": "00:18:38", "end_timestamp": "00:19:19", "start_second": 1118, "end_second": 1159, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=1118s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "yes so you ultimately think quantum mechanics is a pseudo-random number generator it's deterministic there's no randomness in our universe does God play dice so a couple of years ago a famous quantum physicist Anton Zeilinger he wrote an essay in nature and it started more or less like that one of the fundamental insights of the 20th century was that the universe is fundamentally random on the quantum level and that whenever you measure spin up or down or something like that a new bit of information enters the history of", "start_timestamp": "00:19:19", "end_timestamp": "00:20:11", "start_second": 1159, "end_second": 1211, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=1159s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "the universe and while I was reading that I was already typing the response and they had to publish it because I was right that there's no evidence no physical evidence for that so
there's an alternative explanation where everything that we consider random is actually pseudo-random such as the decimal expansion of pi pi is interesting because every sequence of three digits appears roughly one in a thousand times and every five digit sequence appears roughly one in ten thousand times which is what you really would", "start_timestamp": "00:20:11", "end_timestamp": "00:21:04", "start_second": 1211, "end_second": 1264, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=1211s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "expect if it was truly random but there's a very short algorithm a short program that computes all of that so it's extremely compressible and who knows maybe tomorrow some grad student at CERN goes back over all these data points beta decay and whatever and figures out oh it's the second billion digits of pi or something like that we don't have any fundamental reason at the moment to believe that this is truly random and not just a deterministic video game if it was a deterministic video game it would be", "start_timestamp": "00:21:04", "end_timestamp": "00:21:41", "start_second": 1264, "end_second": 1301, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=1264s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "much more beautiful because beauty is simplicity and many of the basic laws of the universe like gravity and the other basic forces are very simple so very short programs can explain what these are doing and it would be awful and ugly the universe would be ugly the history of the universe would be ugly if for the extra things the seemingly random data points
that we get all the time we really need a huge number of extra bits to describe all these extra bits of information so as long as we don't have evidence", "start_timestamp": "00:21:41", "end_timestamp": "00:22:26", "start_second": 1301, "end_second": 1346, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=1301s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "that there is no short program that computes the entire history of the entire universe we are as scientists compelled to look further for that short program your intuition says there exists a shortest program that can backtrack to the creation of the universe so the shortest path to the creation yes including all the entanglement things and all the spin up-and-down measurements that have taken place since 13.8 billion years ago and so yeah so we don't have a proof that it is random we don't have a proof", "start_timestamp": "00:22:26", "end_timestamp": "00:23:16", "start_second": 1346, "end_second": 1396, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=1346s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "of that it is compressible to a short program but as long as we don't have that proof we are obliged as scientists to keep looking for that simple explanation absolutely so you said simplicity is beautiful or beauty is simple either one works but you also work on curiosity discovery you know the romantic notion of randomness of serendipity of being surprised by things that are about you kind of in our poetic notion of reality we think as humans require randomness so you don't find randomness beautiful you find simple determinism", "start_timestamp":
"00:23:16", "end_timestamp": "00:24:03", "start_second": 1396, "end_second": 1443, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=1396s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "beautiful yeah okay so why because the explanation becomes shorter a universe that is compressible to a short program is much more elegant and much more beautiful than another one which needs an almost infinite number of bits to be described as far as we know many things that are happening in this universe are really simple in terms of short programs that compute gravity and the interaction between elementary particles and so on so all of that seems to be very very simple every electron seems to reuse the same subprogram all", "start_timestamp": "00:24:03", "end_timestamp": "00:24:50", "start_second": 1443, "end_second": 1490, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=1443s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "the time as it is interacting with other elementary particles if we now require an extra oracle injecting new bits of information all the time for these extra things which are currently not understood such as beta decay then the whole description length of the data that we can observe out of the history of the universe would become much longer and therefore uglier and uglier again simplicity is elegant and beautiful all the history of science is a history of compression progress yes so you've described sort of as we build up", "start_timestamp": "00:24:50", "end_timestamp": "00:25:47", "start_second": 1490, "end_second": 1547, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=1490s", "title": "Juergen Schmidhuber: 
Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "abstractions and you've talked about the idea of compression how do you see this the history of science the history of humanity our civilization and life on earth as some kind of path towards greater and greater compression what do you mean by that how do you think of that indeed the history of science is a history of compression progress what does that mean hundreds of years ago there was an astronomer whose name was Kepler and he looked at the data points that he got by watching planets move and then he had all these data points and", "start_timestamp": "00:25:47", "end_timestamp": "00:26:27", "start_second": 1547, "end_second": 1587, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=1547s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "suddenly it turned out that he could greatly compress the data by predicting it through an ellipse law so it turns out that all these data points are more or less on ellipses around the Sun and another guy came along whose name was Newton and before him Hooke and they said the same thing that is making these planets move like that is what makes the apples fall down and it also holds for stones and for all kinds of other objects and suddenly many many of these observations became much more compressible because as long", "start_timestamp": "00:26:27", "end_timestamp": "00:27:17", "start_second": 1587, "end_second": 1637, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=1587s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": 
"as you can predict the next thing given what you have seen so far you can compress it you don't have to store that data extra this is called predict coding and then there was still something wrong with that theory of the universe and you had deviations from these predictions of the theory and 300 years later another guy came along whose name was Einstein and he he was able to explain away all these deviations from the predictions of the old theory through a new theory which was called the general theory of relativity which", "start_timestamp": "00:27:17", "end_timestamp": "00:27:57", "start_second": 1637, "end_second": 1677, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=1637s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "at first glance looks a little bit more complicated and you have to warp space and time but you can't phrase it within one single sentence which is no matter how fast you accelerate and how fast are hard you decelerate and no matter what is the gravity in your local framework Lightspeed always looks the same and from from that you can calculate all the consequences so it's a very simple thing and it allows you to further compress all the observations because suddenly there are hardly any deviations any longer that you can measure from the", "start_timestamp": "00:27:57", "end_timestamp": "00:28:37", "start_second": 1677, "end_second": 1717, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=1677s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "predictions of this new theory so all of science is a history of compression progress you never arrive immediately at the shortest explanation of the data but you're making 
progress whenever you are making progress you have an insight you see at first I needed so many bits of information to describe the data to describe my falling apples my video of falling apples I need so much data so many pixels have to be stored but then suddenly I realize no there is a very simple way of predicting the third frame in the video from the first two and", "start_timestamp": "00:28:37", "end_timestamp": "00:29:17", "start_second": 1717, "end_second": 1757, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=1717s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "maybe not every little detail can be predicted but more or less most of these blobs that are coming down they accelerate in the same way which means that I can greatly compress the video and the amount of compression progress that is the depth of the insight that you have at that moment that's the fun that you have the scientific fun the fun in that discovery and we can build artificial systems that do the same thing they measure the depth of their insights as they are looking at the data which is", "start_timestamp": "00:29:17", "end_timestamp": "00:29:50", "start_second": 1757, "end_second": 1790, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=1757s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "coming in through their own experiments and we give them a reward an intrinsic reward in proportion to this depth of insight and since they are trying to maximize the rewards they get they are suddenly motivated to come up with new action sequences with new experiments that have the property that the data that is coming in as a consequence of these 
experiments has the property that they can learn something see a pattern in there which they hadn't seen before so there's this idea of PowerPlay you've described training a general", "start_timestamp": "00:29:50", "end_timestamp": "00:30:33", "start_second": 1790, "end_second": 1833, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=1790s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "problem solver in this kind of way of looking for the unsolved problems yeah can you describe that idea a little further it's another very simple idea so normally what you do in computer science you have some guy who gives you a problem and then there is a huge search space of potential solution candidates and you somehow try them out and you have more or less sophisticated ways of moving around in that search space until you finally find a solution which you consider satisfactory that's what most of computer science is about", "start_timestamp": "00:30:33", "end_timestamp": "00:31:15", "start_second": 1833, "end_second": 1875, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=1833s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "PowerPlay just goes one little step further and says let's not only search for solutions to a given problem but let's search through pairs of problems and their solutions where the system itself has the opportunity to phrase its own problem so we are suddenly looking at pairs of problems and modifications of the problem solver where the modification is supposed to generate a solution to that new problem and this additional degree of freedom allows us to build curious systems that are like scientists in the sense that 
they not only try to", "start_timestamp": "00:31:15", "end_timestamp": "00:32:04", "start_second": 1875, "end_second": 1924, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=1875s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "solve and try to find answers to existing questions no they are also free to pose their own questions so if you want to build an artificial scientist we have to give it that freedom and PowerPlay is exactly doing that so that's a dimension of freedom that's important to have but how hard do you think that is how multi-dimensional and difficult is the space of coming up with new questions yeah so it's one of the things that as human beings we consider to be the thing that makes us special the intelligence that makes us", "start_timestamp": "00:32:04", "end_timestamp": "00:32:41", "start_second": 1924, "end_second": 1961, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=1924s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "special is that brilliant insight yeah that can create something totally new yes so now let's look at the extreme case let's look at the set of all possible problems that you can formally describe which is infinite which should be the next problem that a scientist or PowerPlay is going to solve well it should be the easiest problem that goes beyond what you already know so it should be the simplest problem that the current problem solver that you have which can already solve 100 problems cannot solve yet by", "start_timestamp": "00:32:41", "end_timestamp": "00:33:28", "start_second": 1961, "end_second": 2008, "url": 
"https://www.youtube.com/watch?v=3FIo6evmweo&t=1961s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "just generalizing so it has to be new so it has to require a modification of the problem solver such that the new problem solver can solve this new thing but the old problem solver cannot do it and in addition to that we have to make sure that the problem solver doesn't forget any of the previous solutions right and so by definition power play is now trying always to search and this pair of in in the set of pairs of problems and problems over modifications for a combination that minimize the time to achieve these criteria so as always", "start_timestamp": "00:33:28", "end_timestamp": "00:34:08", "start_second": 2008, "end_second": 2048, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=2008s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "trying to find the problem which is easiest to add to the repertoire so just like grad students and academics and researchers can spend the whole career in a local minima hmm stuck trying to come up with interesting questions but ultimately doing very little do you think it's easy well in this approach of looking for the simplest unsolvable problem to get stuck in a local minima is not never really discovering new you know really jumping outside of the hundred problems the very solved in a genuine creative way no", "start_timestamp": "00:34:08", "end_timestamp": "00:34:47", "start_second": 2048, "end_second": 2087, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=2048s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": 
"https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "because that's the nature of power play that it's always trying to break its current generalization abilities by coming up with a new problem which is beyond the current horizon just shifting the horizon of knowledge a little bit out there breaking the existing rules search says the new thing becomes solvable but wasn't solvable by the old thing so like adding a new axiom like what Google did when he came up with these new sentences new theorems that didn't have a proof in the phone system which means you can add them to the", "start_timestamp": "00:34:47", "end_timestamp": "00:35:25", "start_second": 2087, "end_second": 2125, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=2087s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "repertoire hoping that that they are not going to damage the consistency of the whole thing so in the paper with the amazing title formal theory of creativity fun in intrinsic motivation you talk about discovery as intrinsic reward so if you view humans as intelligent agents what do you think is the purpose and meaning of life far as humans is you've talked about this discovery do you see humans as an instance of power play agents yeah so humans are curious and that means they behave like scientists not only the official scientists but", "start_timestamp": "00:35:25", "end_timestamp": "00:36:13", "start_second": 2125, "end_second": 2173, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=2125s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "even the babies behave like scientists and they play around with toys to figure out how 
the world works and how it is responding to their actions and that's how they learn about gravity and everything and yeah in 1990 we had the first systems like that which would just try to play around with the environment and come up with situations that go beyond what they knew at that time and then get a reward for creating these situations and then becoming more general problem solvers and being able to understand more of the world so yeah", "start_timestamp": "00:36:13", "end_timestamp": "00:36:49", "start_second": 2173, "end_second": 2209, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=2173s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "I think in principle that curiosity strategy or sophisticated versions of what I just described are what we have built in as well because evolution discovered that's a good way of exploring the unknown world and a guy who explores the unknown world has a higher chance of solving problems that he needs to survive in this world on the other hand those guys who were too curious they were weeded out as well so you have to find this trade-off evolution found a certain trade-off apparently in our society there is a", "start_timestamp": "00:36:49", "end_timestamp": "00:37:32", "start_second": 2209, "end_second": 2252, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=2209s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": 
form here in our brains so you're a bit of a musician and an artist so continuing on this topic of creativity what do you think is the role of creativity and intelligence so you've kind of implied that it's essential for intelligence if you think of", "start_timestamp": "00:37:32", "end_timestamp": "00:38:16", "start_second": 2252, "end_second": 2296, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=2252s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "intelligence as a problem-solving system as ability to solve problems but do you think it's essential this idea of creativity we never have a program a sub program that is called creativity or something it's just a side effect of when our problem solvers do they are searching a space of problems or a space of candidates of solution candidates until they hopefully find a solution to have given from them but then there are these two types of creativity and both of them are now present in our machines the first one has been around for a long", "start_timestamp": "00:38:16", "end_timestamp": "00:38:55", "start_second": 2296, "end_second": 2335, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=2296s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "time which is human gives problem to machine machine tries to find a solution to that and this has been happening for many decades and for many decades machines have found creative solutions to interesting problems where humans were not aware of these particularly in creative solutions but then appreciated that the machine found that the second is the pure creativity that I would call what I just mentioned I would call the applied creativity like applied 
art where somebody tells you now make a nice picture off of this Pope and you will", "start_timestamp": "00:38:55", "end_timestamp": "00:39:35", "start_second": 2335, "end_second": 2375, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=2335s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "get money for that okay so here is the artist and he makes a convincing picture of the Pope and the Pope likes it and gives him the money and then there is the pure creative creativity which is more like the power play and the artificial curiosity thing where you have the freedom to select your own problem like a scientist who defines his own question to study and so that is the pure creativity of UL and opposed to the applied creativity which serves another and in that distinction there's almost echoes of narrow AI", "start_timestamp": "00:39:35", "end_timestamp": "00:40:17", "start_second": 2375, "end_second": 2417, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=2375s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "versus general AI so this kind of constrained painting of a pope seems like the the approaches of what people are calling narrow AI and pure creativity seems to be maybe I'm just biased as a human but it seems to be an essential element of human level intelligence is that what you're implying to a degree if you zoom back a little bit and you just look at a general problem-solving machine which is trying to solve arbitrary problems then this machine will figure out in the course of solving problems that it's good to be", "start_timestamp": "00:40:17", "end_timestamp": "00:40:59", "start_second": 2417, "end_second": 2459, "url": 
"https://www.youtube.com/watch?v=3FIo6evmweo&t=2417s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "curious so all of what I said just now about this prewired curiosity and this will to invent new problems that the system doesn't know how to solve yet should be just a byproduct of the general search however apparently evolution has built it into us because it turned out to be so successful a pre-wiring a buyer's a very successful exploratory buyers that that we are born with and you've also said that consciousness in the same kind of way may be a byproduct of problem-solving you know do you think do you find it's", "start_timestamp": "00:40:59", "end_timestamp": "00:41:43", "start_second": 2459, "end_second": 2503, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=2459s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "an interesting by-product you think it's a useful by-product what are your thoughts on consciousness in general or is it simply a byproduct of greater and greater capabilities of problem-solving that's that's similar to creativity in that sense yeah we never have a procedure called consciousness in our machines however we get as side effects of what these machines are doing things that seem to be closely related to what people call consciousness so for example in 1990 we had simple systems which were basically recurrent networks and", "start_timestamp": "00:41:43", "end_timestamp": "00:42:24", "start_second": 2503, "end_second": 2544, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=2503s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": 
"https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "therefore universal computers trying to map incoming data into actions that lead to success maximizing reward in a given environment always finding the charging station in time whenever the battery's low and negative signals are coming from the battery always finds the charging station in time without bumping against painful obstacles on the way so complicated things but very easily motivated and then we give these little a separate we can all network which is just predicting what's happening if I do that in that what will happen as a", "start_timestamp": "00:42:24", "end_timestamp": "00:43:06", "start_second": 2544, "end_second": 2586, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=2544s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "consequence of these actions that I'm executing and it's just trained on the long and long history of interactions with the world so it becomes a predictive model loss of art basically and therefore also a compressor our theme observations after what because whatever you can predict you don't have to store extras or compression is a side effect of prediction and how does this record Network impress well it's inventing little sub programs little sub Network networks that stand for everything that frequently appears in", "start_timestamp": "00:43:06", "end_timestamp": "00:43:40", "start_second": 2586, "end_second": 2620, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=2586s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "the environment like bottles and microphones and faces maybe lots of faces in my environment so 
I'm learning to create something like a prototype face and a new face comes along and all I have to encode are the deviations from the prototype so it's compressing all the time the stuff that frequently appears there's one thing that appears all the time that is present all the time when the agent is interacting with its environment which is the agent itself so just for data compression reasons it is extremely natural for this recurrent", "start_timestamp": "00:43:40", "end_timestamp": "00:44:18", "start_second": 2620, "end_second": 2658, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=2620s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "network to come up with little subnetworks that stand for the properties of the agent the hands you know the other actuators and all the stuff that you need to better encode the data which is influenced by the actions of the agent so just as a side effect of data compression during problem-solving you have internal self-models now you can use this model of the world to plan your future and that's what we have done since 1990 so the recurrent network which is the controller which is trying to maximize reward can use this model as", "start_timestamp": "00:44:18", "end_timestamp": "00:45:00", "start_second": 2658, "end_second": 2700, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=2658s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "a model of the world this predictive model of the world to plan ahead and say let's not do this action sequence let's do this action sequence instead because it leads to more predicted reward and whenever it's waking up these 
layers of networks that stand for itself then it's thinking about itself and it's exploring mentally the consequences of its own actions and now you tell me what is still missing missing the gap to consciousness yeah there isn't", "start_timestamp": "00:45:00", "end_timestamp": "00:45:41", "start_second": 2700, "end_second": 2741, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=2700s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "that's a really beautiful idea that you know if life is a collection of data and life is a process of compressing that data to act efficiently in that data you yourself appear very often so it's useful to form compressions of yourself and it's a really beautiful formulation of what consciousness is a necessary side effect it's actually quite compelling to me you've developed LSTMs long short-term memory networks they're a type of recurrent neural network they have gotten a lot of success recently so", "start_timestamp": "00:45:41", "end_timestamp": "00:46:24", "start_second": 2741, "end_second": 2784, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=2741s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "these are networks that model the temporal aspects in the data temporal patterns in the data and you've called them the deepest of the neural networks right so what do you think is the value of depth in the models that we use to learn since you mentioned the long short-term memory the LSTM I have to mention the names of the brilliant students of course first of all my first student ever Sepp Hochreiter who had", "start_timestamp": "00:46:24", "end_timestamp": "00:47:04", "start_second": 2784, "end_second": 2824, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=2784s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": 
fundamental insights already in his diploma thesis then Felix Gers had additional important contributions Alex", "start_timestamp": "00:46:24", "end_timestamp": "00:47:04", "start_second": 2784, "end_second": 2824, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=2784s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "Graves is a guy from Scotland who is mostly responsible for this CTC algorithm which is now often used to train the LSTM to do speech recognition on all the Google Android phones and whatever and Siri and so on so without these guys I would be nothing it's a lot of incredible work now what is the importance of depth well most problems in the real world are deep in the sense that the current input doesn't tell you all you need to know about the environment so instead", "start_timestamp": "00:47:04", "end_timestamp": "00:47:46", "start_second": 2824, "end_second": 2866, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=2824s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "you have to have a memory of what happened in the past and often important parts of that memory are dated they are pretty old and so when you're doing speech recognition for example and somebody says eleven then that's about half a second or something like that which means it's already fifty time steps and another guy or the same guy says seven so the ending is the same even but now the system has to see the distinction between seven and eleven and the only way it can see the difference is it has to store that fifty steps ago", "start_timestamp": "00:47:46", "end_timestamp": "00:48:29", "start_second": 2866, 
"end_second": 2909, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=2866s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "there wasn't or a nerve eleven or seven so there you have already a problem of depth fifty because for each time step you have something like a virtual a layer and the expanded unrolled version of this Riccar network which is doing the speech recognition so these long time lags they translate into problem depth and most problems and this world Asajj that you really have to look far back in time to understand what is the problem and to solvent but just like with our CMS you don't necessarily need to when you look", "start_timestamp": "00:48:29", "end_timestamp": "00:49:10", "start_second": 2909, "end_second": 2950, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=2909s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "back in time remember every aspect you just need to remember the important aspects that's right the network has to learn to put the important stuff in into memory and to ignore the unimportant noise so but in that sense deeper and deeper is better or is there a limitation is is there I mean LCM is one of the great examples of architectures that do something beyond just deeper and deeper networks there's clever mechanisms for filtering data for remembering and forgetting so do you think that that kind of thinking", "start_timestamp": "00:49:10", "end_timestamp": "00:49:49", "start_second": 2950, "end_second": 2989, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=2950s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": 
"https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "is necessary if you think about LCM is a leap a big leap forward over traditional vanilla are nuns what do you think is the next leap hmm it within this context so LCM is a very clever improvement but LCM still don't have the same kind of ability to see far back in the future in the in the past as us humans do the credit assignment problem across way back not just 50 times steps or a hundred or a thousand but millions and billions it's not clear what are the practical limits of the lsdm when it comes to looking back already in 2006 I", "start_timestamp": "00:49:49", "end_timestamp": "00:50:33", "start_second": 2989, "end_second": 3033, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=2989s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "think we had examples where it not only looked back tens of thousands of steps but really millions of steps and who won Paris artists in my lab I think was the first author of a paper where we really was a 2006 or something had examples word learn to look back for more than 10 million steps so for most problems of speech recognition it's not necessary to look that far back but there are examples where it does now so looking back thing [Music] that's rather easy because there is only one past but there are many possible", "start_timestamp": "00:50:33", "end_timestamp": "00:51:15", "start_second": 3033, "end_second": 3075, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=3033s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "futures and so a reinforcement learning system which is trying to maximize its future expected 
rewards and doesn't know yet which of these many possible futures it should select given this one single past it's facing problems that the LSTM by itself cannot solve so the LSTM is good for coming up with a compact representation of the history so far of the history of observations and actions so far but now how do you plan in an efficient and good way among all these how do you select one of these many possible action sequences that a", "start_timestamp": "00:51:15", "end_timestamp": "00:51:58", "start_second": 3075, "end_second": 3118, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=3075s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "reinforcement learning system has to consider to maximize reward in this unknown future so again we have this basic setup where you have one recurrent network which gets in the video and the speech and whatever and it's executing actions and is trying to maximize reward so there is no teacher who tells it what to do at which point in time and then there's the other network which is just predicting what's going to happen if I do that and that and that could be an LSTM network and it is allowed to look back all the way to make better predictions", "start_timestamp": "00:51:58", "end_timestamp": "00:52:39", "start_second": 3118, "end_second": 3159, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=3118s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "of the next time step so essentially although it's only predicting the next time step it is motivated to learn to put into memory something that happened maybe a million steps ago because it's important to memorize that if you want to predict that at the next time 
step the next event you know how can a model of the world like that a predictive model of the world be used by the first guy let's call them the controller and the model the controller and the model how can the model be used by the controller to efficiently select", "start_timestamp": "00:52:39", "end_timestamp": "00:53:16", "start_second": 3159, "end_second": 3196, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=3159s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "among these many possible futures so the naive way we had about 30 years ago was let's just use the model of the world as a stand-in as a simulation of the world and millisecond by millisecond we plan the future and that means we have to roll it out really in detail and it will work only if the model is really good and it will still be inefficient because we have to look at all these possible futures and there are so many of them so instead what we do now since 2015 in our CM systems controller model systems we give the controller the", "start_timestamp": "00:53:16", "end_timestamp": "00:53:54", "start_second": 3196, "end_second": 3234, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=3196s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "opportunity to learn by itself how to use the potentially relevant parts of M of the model network to solve new problems more quickly and if it wants to it can learn to ignore the M and sometimes it's a good idea to ignore the M because it's really bad it's a bad predictor in this particular situation of life where the controller is currently trying to maximize reward however it can also learn to address and exploit some of the sub 
programs that came about in the model network through compressing the data by predicting it so it now has", "start_timestamp": "00:53:54", "end_timestamp": "00:54:36", "start_second": 3234, "end_second": 3276, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=3234s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "an opportunity to reuse that code the algorithmic information in the model network thereby trying to reduce its own search space such that it can solve a new problem more quickly than without the model so you're ultimately optimistic and excited about the power of RL of reinforcement learning in the context of real systems absolutely yeah so you see RL as potentially having a huge impact beyond just sort of the problems often solved with supervised learning methods you see RL as useful for problems of self-driving cars or any kind of applied", "start_timestamp": "00:54:36", "end_timestamp": "00:55:27", "start_second": 3276, "end_second": 3327, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=3276s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "robotics that's the correct interesting direction for research in your view I do think so we have a company called NNAISENSE which has applied reinforcement learning to little Audis which learn to park without a teacher the same principles were used of course so these little Audis they are small maybe like that so much smaller than the real Audis but they have all the sensors that you find in the real Audis you find the cameras the lidar sensors they go up to 120 kilometres an hour if they want", "start_timestamp": "00:55:27", "end_timestamp": 
"00:56:08", "start_second": 3327, "end_second": 3368, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=3327s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "to and they have pain sensors basically and they don't want to bump against obstacles and other Audis and so they must learn like little babies to park take the raw vision input and translate that into actions that lead to successful parking behavior which is a rewarding thing and yes they learn that by themselves we have examples like that and it's only the beginning this is just the tip of the iceberg and I believe the next wave of AI is going to be all about that so at the moment the current wave of AI is about passive", "start_timestamp": "00:56:08", "end_timestamp": "00:56:49", "start_second": 3368, "end_second": 3409, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=3368s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "pattern observation and prediction and that's what you have on your smartphone and what the major companies on the Pacific Rim are using to sell you ads to do marketing that's the current sort of profit in AI and that's only one or two percent of the world economy which is big enough to make these companies pretty much the most valuable companies in the world but there's a much much bigger fraction of the economy going to be affected by the next wave which is really about machines that shape the data through their own", "start_timestamp": "00:56:49", "end_timestamp": "00:57:26", "start_second": 3409, "end_second": 3446, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=3409s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, 
and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "actions and you think simulation is ultimately the biggest way that those methods will be successful in the next 10 20 years we're not talking about a hundred years from now we're talking about sort of the near-term impact of RL do you think really good simulation is required or are there other techniques like imitation learning you know observing other humans operating in the real world where do you think this success will come from so at the moment we have a tendency of using physics simulations to learn", "start_timestamp": "00:57:26", "end_timestamp": "00:58:04", "start_second": 3446, "end_second": 3484, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=3446s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "behavior for machines that learn to solve problems that humans also do not know how to solve however this is not the future because the future is in what little babies do they don't use a physics engine to simulate the world no they learn a predictive model of the world which maybe sometimes is wrong in many ways but captures all kinds of important abstract high-level predictions which are really important to be successful and that's what was the future thirty years ago when you started that type of research", "start_timestamp": "00:58:04", "end_timestamp": "00:58:44", "start_second": 3484, "end_second": 3524, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=3484s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "but it's still the future and now we know much 
better how to go there to move there to move forward and to really make working systems based on that where you have a learning model of the world a model of the world that learns to predict what's going to happen if I do that and that and then the controller uses that model to more quickly learn successful action sequences and then of course there is always this crazy thing in the beginning the model is stupid so the controller should be motivated to come up with experiments", "start_timestamp": "00:58:44", "end_timestamp": "00:59:19", "start_second": 3524, "end_second": 3559, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=3524s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "with action sequences that lead to data that improve the model do you think improving the model constructing an understanding of the world in this connection the now popular approaches have been successful you know grounded in ideas of neural networks but in the 80s with expert systems there were symbolic AI approaches which to us humans are more intuitive in the sense that it makes sense that you build up knowledge in this knowledge representation what kind of lessons can we draw into our current approaches", "start_timestamp": "00:59:19", "end_timestamp": "00:59:56", "start_second": 3559, "end_second": 3596, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=3559s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "from expert systems from symbolic AI yeah so I became aware of all of that in the 80s and back then logic programming was a huge thing was it inspiring to yourself did you find it compelling because a lot of your work was not so much in 
that realm it was more in learning systems yes and no but we did all of that so my first publication ever actually in 1987 was the implementation of a genetic algorithm of a genetic programming system in Prolog that's what you learned back then which is a logic programming", "start_timestamp": "00:59:56", "end_timestamp": "01:00:39", "start_second": 3596, "end_second": 3639, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=3596s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "language and the Japanese had this huge fifth-generation AI project which was mostly about logic programming back then although neural networks existed and were well known back then and deep learning has existed since 1965 since this guy in the Ukraine Ivakhnenko started it but the Japanese and many other people they focused really on this logic programming and I was influenced to the extent that I said okay let's take these biologically inspired algorithms like evolutionary programs and implement that in the language which I know which", "start_timestamp": "01:00:39", "end_timestamp": "01:01:22", "start_second": 3639, "end_second": 3682, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=3639s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "was Prolog for example back then and then in many ways this came back later because the Goedel machine for example has a proof search on board and without that it would not be optimal and Marcus Hutter's universal algorithm for solving all well-defined problems has a proof search on board so that's very much logic programming without that it would not be asymptotically optimal but then on the other hand because we have 
a very pragmatic attitude as well we focused on recurrent neural networks and suboptimal stuff such as gradient based", "start_timestamp": "01:01:22", "end_timestamp": "01:02:03", "start_second": 3682, "end_second": 3723, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=3682s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "search in program space rather than provably optimal things the logic programming does it certainly has a usefulness when you're trying to construct something provably optimal or provably good or something like that but is it useful for practical problems it's really useful for theorem proving the best theorem provers today are not neural networks right no they are logic programming systems and they are much better theorem provers than most math students in the first or second semester but for reasoning for", "start_timestamp": "01:02:03", "end_timestamp": "01:02:40", "start_second": 3723, "end_second": 3760, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=3723s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "playing games of go or chess or for robots autonomous vehicles that operate in the real world or object manipulation you know you think learning yeah as long as the problems have little to do with say theorem proving or provably improving themselves then as long as that is not the case you just want to have better pattern recognition so to build a self-driving car you want to have better pattern recognition and pedestrian recognition and all these things and at a minimum you want to minimize the number of false", "start_timestamp": "01:02:40", "end_timestamp": "01:03:18", "start_second": 3760, "end_second": 
3798, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=3760s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "positives which currently is slowing down self-driving cars in many ways and all that has very little to do with logic programming yeah what are you most excited about in terms of directions of artificial intelligence at this moment in the next few years in your own research and in the broader community so I think in the not so distant future we will have for the first time little robots that learn like kids and I will be able to say to the robot look here robot we are going to assemble a smartphone take this slab of plastic", "start_timestamp": "01:03:18", "end_timestamp": "01:04:03", "start_second": 3798, "end_second": 3843, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=3798s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "and the screwdriver and let's screw in the screw like that no no not like that like so not like that like that and I don't have a data glove or something he will see me and he will hear me and he will try to do something with his own actuators which will be really different from mine but he will understand the difference and will learn to imitate me but not in the supervised way where a teacher is giving target signals for all his muscles all the time no by doing this high level imitation where he first has to learn to imitate", "start_timestamp": "01:04:03", "end_timestamp": "01:04:45", "start_second": 3843, "end_second": 3885, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=3843s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", 
"thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "me and then to interpret these additional noises coming from my mouth as helpful signals to do that and then it will by itself come up with faster ways and more efficient ways of doing the same thing and finally I stop his learning algorithm and make a million copies and sell it and so at the moment this is not possible but we already see how we are going to get there and you can imagine to the extent that this works economically and cheaply it's going to change everything almost all of production is going to be", "start_timestamp": "01:04:45", "end_timestamp": "01:05:28", "start_second": 3885, "end_second": 3928, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=3885s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "affected by that and a much bigger wave a much bigger AI wave is coming than the one that we are currently witnessing which is mostly about passive pattern recognition on your smartphone this is about active machines that shape data through the actions they are executing and they learn to do that in a good way so many of the traditional industries are going to be affected by that all the companies that are building machines will equip these machines with cameras and other sensors and they are going to learn to solve all kinds of problems", "start_timestamp": "01:05:28", "end_timestamp": "01:06:10", "start_second": 3928, "end_second": 3970, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=3928s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "through interaction with humans but also a lot on their own to improve 
what they already can do and lots of the old economy is going to be affected by that and in recent years I have seen that the old economy is actually waking up and realizing those implications and are you optimistic about the future are you concerned there's a lot of people concerned in the near term about the transformation of the nature of work the kind of ideas that you just suggested would have a significant impact on what kind of things could be automated are", "start_timestamp": "01:06:10", "end_timestamp": "01:06:49", "start_second": 3970, "end_second": 4009, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=3970s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "you optimistic about that future are you nervous about that future and looking a little bit farther into the future there are people like Elon Musk and Stuart Russell concerned about the existential threats of that future so in the near term job loss in the long term existential threat are these concerns to you or are you ultimately optimistic so let's first address the near future we have had predictions of job losses for many decades for example when industrial robots came along many people predicted that lots of jobs are", "start_timestamp": "01:06:49", "end_timestamp": "01:07:35", "start_second": 4009, "end_second": 4055, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=4009s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "going to get lost and in a sense they were right because back then there were car factories and hundreds of people in these factories assembled cars and today the same car factories have hundreds of robots and maybe three guys watching the robots on the other 
hand those countries that have lots of robots per capita Japan Korea and Germany Switzerland and a couple of other countries they have really low unemployment rates somehow all kinds of new jobs were created back then nobody anticipated those jobs and decades ago I already", "start_timestamp": "01:07:35", "end_timestamp": "01:08:26", "start_second": 4055, "end_second": 4106, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=4055s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "said it's really easy to say which jobs are going to get lost but it's really hard to predict the new ones 30 years ago who would have predicted all these people making money as YouTube bloggers 200 years ago 60% of all people used to work in agriculture today maybe 1% but still only I don't know 5% unemployment lots of new jobs were created and Homo ludens the playing man is inventing new jobs all the time most of these jobs are not existentially necessary for the survival of our species there are only", "start_timestamp": "01:08:26", "end_timestamp": "01:09:19", "start_second": 4106, "end_second": 4159, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=4106s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "very few existentially necessary jobs such as farming and building houses and warming up the houses but less than 10% of the population is doing that and most of these newly invented jobs are about interacting with other people in new ways through new media and so on getting new types of kudos and forms of likes and whatever and even making money through that so Homo ludens the playing man doesn't want to be unemployed and that's why he is inventing new 
jobs all the time and he keeps considering these jobs as really", "start_timestamp": "01:09:19", "end_timestamp": "01:10:01", "start_second": 4159, "end_second": 4201, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=4159s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "important and is investing a lot of energy and hours of work into those new jobs it's quite beautifully put we're really nervous about the future because we can't predict what kind of new jobs will be created but you're ultimately optimistic that we humans are so restless that we create and give meaning to newer and newer jobs things that get likes on Facebook or whatever the social platform is so what about the long-term existential threat of AI where our whole civilization may be swallowed", "start_timestamp": "01:10:01", "end_timestamp": "01:10:40", "start_second": 4201, "end_second": 4240, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=4201s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "up by these ultra super intelligent systems maybe it's not going to be so bad but I'd be surprised if we humans were the last step in the evolution of the universe you've actually had this beautiful comment somewhere that I've seen quite insightful saying that artificial general intelligence systems just like us humans will likely not want to interact with humans they'll just interact amongst themselves just like ants interact amongst themselves and only tangentially interact with humans and it's quite", "start_timestamp": "01:10:40", "end_timestamp": "01:11:26", "start_second": 4240, "end_second": 4286, "url": 
"https://www.youtube.com/watch?v=3FIo6evmweo&t=4240s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "an interesting idea that once we create AGI it will lose interest in humans and compete for their own Facebook likes on their own social platforms so within that quite elegant idea how do we know in a hypothetical sense that there's not already intelligent systems out there how do you think broadly of general intelligence greater than us how do we know it's out there how would we know it's around us and could it already be I'd be surprised if within the next few decades or something like that we", "start_timestamp": "01:11:26", "end_timestamp": "01:12:10", "start_second": 4286, "end_second": 4330, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=4286s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "won't have AIs that are truly smart in every single way and better problem solvers in almost every single important way and I'd be surprised if they wouldn't realize what we have realized a long time ago which is that almost all physical resources are not here in this biosphere but out there the rest of the solar system gets 2 billion times more solar energy than our little planet there's lots of material out there that you can use to build robots and self-replicating robot factories and all this stuff and they", "start_timestamp": "01:12:10", "end_timestamp": "01:12:52", "start_second": 4330, "end_second": 4372, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=4330s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": 
"https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "are going to do that and there will be scientists and they will be curious and they will explore what they can do and in the beginning they will be fascinated by life and by their own origins and our civilization they will want to understand that completely just like people today would like to understand how life works and also the history of our own existence and civilization and also the physical laws that created all of that so in the beginning they will be fascinated by life but once they understand it they lose", "start_timestamp": "01:12:52", "end_timestamp": "01:13:32", "start_second": 4372, "end_second": 4412, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=4372s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "interest like anybody who loses interest in things he understands and then as you said the most interesting sources of information for them will be others of their own kind so at least in the long run there seems to be some sort of protection through lack of interest on the other side and now it seems also clear as far as we understand physics you need matter and energy to compute and to build more robots and infrastructure and more AI civilization an AI ecology consisting of trillions of different types of AIs and so it seems", "start_timestamp": "01:13:32", "end_timestamp": "01:14:32", "start_second": 4412, "end_second": 4472, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=4412s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "inconceivable to me that this thing is not going to expand some AI ecology not controlled by one AI but 
run by trillions of different types of AIs competing in all kinds of quickly evolving and disappearing ecological niches in ways that we cannot fathom at the moment but it's going to expand limited by light speed and physics it's going to expand and now we realize that the universe is still young it's only 13.8 billion years old and it's going to be a thousand times older than that so there's plenty of", "start_timestamp": "01:14:32", "end_timestamp": "01:15:12", "start_second": 4472, "end_second": 4512, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=4472s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "time to conquer the entire universe and to fill it with intelligence and senders and receivers such that AIs can travel the way they are traveling in our labs today which is by radio from sender to receiver and let's call the current age of the universe one eon now it will take just a few eons from now and the entire visible universe is going to be full of that stuff and let's look ahead to a time when the universe is going to be one thousand times older than it is now they will look back and they will say look almost", "start_timestamp": "01:15:12", "end_timestamp": "01:15:55", "start_second": 4512, "end_second": 4555, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=4512s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "immediately after the Big Bang only a few eons later the entire universe started to become intelligent now to your question how do we see whether anything like that has already happened or is already in a more advanced stage in some other part of the visible universe we are trying to look out 
there and nothing like that has happened so far or is it there do you think we'll recognize it or how do we know it's not among us how do we know planets aren't in themselves intelligent beings how do we know ants", "start_timestamp": "01:15:55", "end_timestamp": "01:16:33", "start_second": 4555, "end_second": 4593, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=4555s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "seen as a collective are not a much greater intelligence than our own these kinds of ideas no when I was a boy I was thinking about these things and I thought hmm maybe it has already happened because back then I learned from popular physics books that the large-scale structure of the universe is not homogeneous and you have these clusters of galaxies and then in between there are these huge empty spaces and I thought hmm maybe they aren't really empty it's just that in the middle of that some AI civilization already has", "start_timestamp": "01:16:33", "end_timestamp": "01:17:16", "start_second": 4593, "end_second": 4636, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=4593s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "expanded and then has covered a bubble of a billion light-years diameter and is using all the energy of all the stars within that bubble for its own unfathomable purposes and so it may have already happened and we just failed to interpret the signs but then I learned that gravity by itself explains the large-scale structure of the universe and that this is not a convincing explanation and then I thought maybe it's the dark matter because as far as we know today 80% of the measurable matter 
is invisible and we know that because otherwise our galaxy", "start_timestamp": "01:17:16", "end_timestamp": "01:18:03", "start_second": 4636, "end_second": 4683, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=4636s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "or other galaxies would fall apart they are rotating too quickly and then the idea was maybe all these AI civilizations are already out there they are just invisible because they are really efficient in using the energies of their own local systems and that's why they appear dark to us but this is also not a convincing explanation because then the question becomes why are there still any visible stars left in our own galaxy which also must have a lot of dark matter so that is also not a", "start_timestamp": "01:18:03", "end_timestamp": "01:18:45", "start_second": 4683, "end_second": 4725, "url": "https://www.youtube.com/watch?v=3FIo6evmweo&t=4683s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "3FIo6evmweo", "text": "convincing thing and today I like to think it's quite plausible that maybe we are the first at least in our local light cone within the few hundreds of millions of light years that we can reliably observe is that exciting to you that we might be the first and it would make us much more important because if we mess it up through a nuclear war then maybe this will have an effect on the development of the entire universe so let's not mess it up let's not mess it up Juergen thank you so much for talking", "start_timestamp": "01:18:45", "end_timestamp": "01:19:35", "start_second": 4725, "end_second": 4775, "url": 
"https://www.youtube.com/watch?v=3FIo6evmweo&t=4725s", "title": "Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11", "thumbnail": "https://i.ytimg.com/vi/3FIo6evmweo/maxresdefault.jpg"} {"video_id": "a0f07M2uj_A", "text": "hi there today we're looking at back propagation in the brain by Timothy Lilly corrupt Adam Santoro Luke Morris Colin Ackerman and Geoffrey Hinton so this is a bit of an unusual paper for the machine learning community but nevertheless it's interesting and let's be honest at least half of our interest comes from the fact that Geoffrey Hinton is one of the authors of this paper so this is a paper that basically proposes a hypothesis on how the algorithm of back propagation works in the brain because previously there has been a lot", "start_timestamp": "00:00:00", "end_timestamp": "00:00:42", "start_second": 0, "end_second": 42, "url": "https://www.youtube.com/watch?v=a0f07M2uj_A&t=0s", "title": "Backpropagation and the brain", "thumbnail": "https://i.ytimg.com/vi/a0f07M2uj_A/hqdefault.jpg"} {"video_id": "a0f07M2uj_A", "text": "of evidence against there being something like back propagation in the brain so the question is how do neural networks in the brain learn and they they say there there can be many different ways that neural networks learn and they list them up in in this kind of diagram where you have a network and it maps from input to output by having these weighted connections between neurons so the input is two-dimensional and then it maps using these weights to a three-dimensional hidden layer and usually there is a nonlinear function", "start_timestamp": "00:00:42", "end_timestamp": "00:01:25", "start_second": 42, "end_second": 85, "url": "https://www.youtube.com/watch?v=a0f07M2uj_A&t=42s", "title": "Backpropagation and the brain", "thumbnail": "https://i.ytimg.com/vi/a0f07M2uj_A/hqdefault.jpg"} {"video_id": "a0f07M2uj_A", "text": "somewhere at the output here of these so they they do a weighted 
sum of the inputs and then they apply a nonlinear function and then they propagate that signal to the next layer and then finally to the output all right so how do these networks learn one way of learning is called hebbian learning the interesting thing here is that it requires no feedback from the outside world basically what you want to do in hebbian learning is you want to update the connections such that they kind of match their own", "start_timestamp": "00:01:25", "end_timestamp": "00:02:04", "start_second": 85, "end_second": 124, "url": "https://www.youtube.com/watch?v=a0f07M2uj_A&t=85s", "title": "Backpropagation and the brain", "thumbnail": "https://i.ytimg.com/vi/a0f07M2uj_A/hqdefault.jpg"} {"video_id": "a0f07M2uj_A", "text": "previous outputs or even increase their own previous outputs so you propagate a signal and then maybe this neuron spikes really hard and this one spikes really low then if you propagate the signal again you want to match those activations if you propagate similar signals no feedback required so basically it's a self amplifying or self dampening process ultimately though you want to learn something about the world and that means you have to have some feedback from outside right so with", "start_timestamp": "00:02:04", "end_timestamp": "00:02:44", "start_second": 124, "end_second": 164, "url": "https://www.youtube.com/watch?v=a0f07M2uj_A&t=124s", "title": "Backpropagation and the brain", "thumbnail": "https://i.ytimg.com/vi/a0f07M2uj_A/hqdefault.jpg"} {"video_id": "a0f07M2uj_A", "text": "feedback what we mean is usually that the output here goes into the world let's say this is a motor neuron right you do something with your arm like you hammer on a nail and then you either hit the nail or you don't let's say you don't hit the nail so afterwards it looks crooked there you have feedback right so feedback usually comes in the form of some sort of 
error signal right so feedback can be like this was good or this was bad or it can be this was a bit too much to the left or so on the important part", "start_timestamp": "00:02:44", "end_timestamp": "00:03:33", "start_second": 164, "end_second": 213, "url": "https://www.youtube.com/watch?v=a0f07M2uj_A&t=164s", "title": "Backpropagation and the brain", "thumbnail": "https://i.ytimg.com/vi/a0f07M2uj_A/hqdefault.jpg"} {"video_id": "a0f07M2uj_A", "text": "is you get kind of one number as feedback right how bad you were and now your goal is to adjust all of the individual neurons or weights between neurons such that the error will be lower so in hebbian learning there is no feedback it's just simply a self reinforcing pattern activation machine in these kind of first instances of perturbation learning what you'll have is one single feedback and you can see this as a diffuse cloud here what you're basically saying is that every single neuron is", "start_timestamp": "00:03:33", "end_timestamp": "00:04:17", "start_second": 213, "end_second": 257, "url": "https://www.youtube.com/watch?v=a0f07M2uj_A&t=213s", "title": "Backpropagation and the brain", "thumbnail": "https://i.ytimg.com/vi/a0f07M2uj_A/hqdefault.jpg"} {"video_id": "a0f07M2uj_A", "text": "kind of punished let's say the feedback here was negative one that means every single neuron is punished for that so you can imagine something like this if you have your input X and you map it through your function f and the function f has a weight w1 and so on right so you map X through it and you get a feedback of negative 1 and then you map X with a little bit of noise added and you get a feedback of negative 2 that means that the direction of this noise was probably a bad direction", "start_timestamp": "00:04:17", "end_timestamp": "00:05:07", "start_second": 257, "end_second": 307, "url": 
"https://www.youtube.com/watch?v=a0f07M2uj_A&t=257s", "title": "Backpropagation and the brain", "thumbnail": "https://i.ytimg.com/vi/a0f07M2uj_A/hqdefault.jpg"} {"video_id": "a0f07M2uj_A", "text": "so ultimately you want to update X into the direction of negative that noise by modulated of course by by some some factor here that's that it kind of tells you how bad it was so this could be the negative 2 minus negative 1 now that makes big sense No yes that would be no it would be negative 1 minus negative nevermind so basically with a scalar feedback you simply tell each neuron what it did right or sorry if if the entire network right the entire network did right or wrong so the entire network will lead to", "start_timestamp": "00:05:07", "end_timestamp": "00:05:59", "start_second": 307, "end_second": 359, "url": "https://www.youtube.com/watch?v=a0f07M2uj_A&t=307s", "title": "Backpropagation and the brain", "thumbnail": "https://i.ytimg.com/vi/a0f07M2uj_A/hqdefault.jpg"} {"video_id": "a0f07M2uj_A", "text": "this feedback you don't have accountability of the individual neurons all you can say is that whatever I'm doing here is wrong and whatever I'm doing here is right so I'm gonna do more of the right things now in back propagation it is very different right in back propagation what you'll do is you'll have your feedback here let's say that's negative 1 and then you do a reverse computation so the forward computation in this case was this weighted sum of this layer now usually layer wise reverse computation which", "start_timestamp": "00:05:59", "end_timestamp": "00:06:36", "start_second": 359, "end_second": 396, "url": "https://www.youtube.com/watch?v=a0f07M2uj_A&t=359s", "title": "Backpropagation and the brain", "thumbnail": "https://i.ytimg.com/vi/a0f07M2uj_A/hqdefault.jpg"} {"video_id": "a0f07M2uj_A", "text": "means that you know how this function here this output came to be out of the out of the inputs and that means you can inverse and you can do an inverse 
propagation of the error signal which is of course the gradient so you would derive your error by the inputs to the layer right so in the back propagation algorithm you can exactly determine if you are this node how you have to adjust your input weights in order to make this number here go down right and", "start_timestamp": "00:06:36", "end_timestamp": "00:07:24", "start_second": 396, "end_second": 444, "url": "https://www.youtube.com/watch?v=a0f07M2uj_A&t=396s", "title": "Backpropagation and the brain", "thumbnail": "https://i.ytimg.com/vi/a0f07M2uj_A/hqdefault.jpg"} {"video_id": "a0f07M2uj_A", "text": "then because you always propagate the error according to that what you'll have in each layer is basically a vector target so it's no longer just one number but each layer now has a target vector and it says okay these are the outputs that would be beneficial this layer please change your outputs in the direction of negative two negative three plus four so the negative two would be this unit the negative three would be this unit and the plus four would be this unit so each unit is instructed", "start_timestamp": "00:07:24", "end_timestamp": "00:08:01", "start_second": 444, "end_second": 481, "url": "https://www.youtube.com/watch?v=a0f07M2uj_A&t=444s", "title": "Backpropagation and the brain", "thumbnail": "https://i.ytimg.com/vi/a0f07M2uj_A/hqdefault.jpg"} {"video_id": "a0f07M2uj_A", "text": "individually to say this is the direction that each unit should change in order to make this number go lower you see how this is much more information than in perturbation learning in perturbation learning all the units simply know well that was bad so let's you know change a bit and here you have detailed instructions for each unit because of the back propagation algorithm so ultimately people have kind of thought that since 
back propagation wasn't really possible with biological", "start_timestamp": "00:08:01", "end_timestamp": "00:08:39", "start_second": 481, "end_second": 519, "url": "https://www.youtube.com/watch?v=a0f07M2uj_A&t=481s", "title": "Backpropagation and the brain", "thumbnail": "https://i.ytimg.com/vi/a0f07M2uj_A/hqdefault.jpg"} {"video_id": "a0f07M2uj_A", "text": "neurons that the brain might be doing something like perturbation learning but this paper argues that something like back propagation is not only possible but likely in the brain and they propose this kind of backprop-like learning with a feedback network so they basically differentiate hard between these two regimes on this hand you have the scalar feedback which means that the entire network gets one number as a feedback and each neuron just gets that number and here you have", "start_timestamp": "00:08:39", "end_timestamp": "00:09:21", "start_second": 519, "end_second": 561, "url": "https://www.youtube.com/watch?v=a0f07M2uj_A&t=519s", "title": "Backpropagation and the brain", "thumbnail": "https://i.ytimg.com/vi/a0f07M2uj_A/hqdefault.jpg"} {"video_id": "a0f07M2uj_A", "text": "vector feedback where each neuron gets an individual instruction of how to update and they achieve this not by back propagation because the original formulation of back prop as we use it in neural networks is not biologically plausible but they achieve this with this backprop-like learning with a feedback network and we'll see how this does but in essence this feedback network is constructed such that it can give each neuron in the forward pass here detailed instructions on how to update itself right so yeah they have a", "start_timestamp": "00:09:21", "end_timestamp": "00:10:06", "start_second": 561, "end_second": 606, "url": "https://www.youtube.com/watch?v=a0f07M2uj_A&t=561s", "title": "Backpropagation and the brain", "thumbnail": "https://i.ytimg.com/vi/a0f07M2uj_A/hqdefault.jpg"} 
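The scalar-feedback versus vector-feedback distinction in the transcript above can be made concrete with a small toy experiment; this is my own illustrative sketch, not code from the paper, and the function names and the tiny linear "network" are assumptions of mine:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "network" y = W @ x, trained to hit a fixed target output.
W0 = rng.normal(size=(3, 2))
x = np.array([1.0, -1.0])
y_target = np.array([0.5, 0.2, -0.3])

def loss(W):
    return 0.5 * np.sum((W @ x - y_target) ** 2)

def perturbation_step(W, lr=0.02, sigma=0.01):
    # Scalar feedback: the whole network only learns "better or worse".
    noise = rng.normal(size=W.shape) * sigma
    delta = loss(W + noise) - loss(W)      # one number of feedback
    return W - lr * (delta / sigma ** 2) * noise

def backprop_step(W, lr=0.1):
    # Vector feedback: every weight gets its own signed, exact instruction.
    err = W @ x - y_target                 # error vector at the output
    return W - lr * np.outer(err, x)       # per-weight gradient of the loss

W_p, W_b = W0.copy(), W0.copy()
for _ in range(200):
    W_p = perturbation_step(W_p)
    W_b = backprop_step(W_b)

# Backprop typically reaches a far lower loss in the same number of steps.
print(loss(W0), loss(W_p), loss(W_b))
```

The perturbation update is an unbiased but very noisy estimate of the gradient, which is exactly the "slow, one number of feedback" regime the video describes, while the backprop step uses the full vector of per-weight instructions.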
{"video_id": "a0f07M2uj_A", "text": "little bit of a diagram here of if you do hebbian if this if this is an error landscape if you do have you in learning you basically you don't care about the error you're just reinforcing yourself if you do perturbation learning then you it's very slow because you don't have a detailed signal you just you just rely on this one number it's kind of if you were to update every single neuron in your neural network with reinforcement learning considering the output the of the neural networks or the error considering that the reward not using", "start_timestamp": "00:10:06", "end_timestamp": "00:10:42", "start_second": 606, "end_second": 642, "url": "https://www.youtube.com/watch?v=a0f07M2uj_A&t=606s", "title": "Backpropagation and the brain", "thumbnail": "https://i.ytimg.com/vi/a0f07M2uj_A/hqdefault.jpg"} {"video_id": "a0f07M2uj_A", "text": "back row and then with back probably have a much smoother much faster optimization trajectory so they looked at this and they they come to some some conclusions first of all so here's here's back prop basically saying back prop as we said you have the forward pass and there you simply compute these weighted averages and you you also pass them usually through some sort of nonlinear activation right and the cool thing about this is in artificial neural networks is that once the error comes in you can exactly reverse that so you can", "start_timestamp": "00:10:42", "end_timestamp": "00:11:30", "start_second": 642, "end_second": 690, "url": "https://www.youtube.com/watch?v=a0f07M2uj_A&t=642s", "title": "Backpropagation and the brain", "thumbnail": "https://i.ytimg.com/vi/a0f07M2uj_A/hqdefault.jpg"} {"video_id": "a0f07M2uj_A", "text": "do a backward pass of errors where you can propagate these errors through because you know it's kind of invertible the function doesn't have to be invertible but that the gradients will flow backwards if you know how the forward pass was computed so first of all they 
go into a discussion of back prop in the brain how can we even expect that and one cool piece of evidence is where I find is that they cite several examples where they use artificial neural networks to learn the same tasks as humans right and or as as animal", "start_timestamp": "00:11:30", "end_timestamp": "00:12:18", "start_second": 690, "end_second": 738, "url": "https://www.youtube.com/watch?v=a0f07M2uj_A&t=690s", "title": "Backpropagation and the brain", "thumbnail": "https://i.ytimg.com/vi/a0f07M2uj_A/hqdefault.jpg"} {"video_id": "a0f07M2uj_A", "text": "brains and then I have no clue how how they measure any of this but then they compare the hidden representations of the living neural networks and the artificial neural networks and it turns out that the these the networks that were trained with backpropagation x' then networks that were not trained with backdrop so basically that means if you train a network with backprop it matches the biological networks much closer in how they form their hidden representations and they they do a number they cite the number of", "start_timestamp": "00:12:18", "end_timestamp": "00:13:04", "start_second": 738, "end_second": 784, "url": "https://www.youtube.com/watch?v=a0f07M2uj_A&t=738s", "title": "Backpropagation and the brain", "thumbnail": "https://i.ytimg.com/vi/a0f07M2uj_A/hqdefault.jpg"} {"video_id": "a0f07M2uj_A", "text": "experiments here that show this so this gives you very good evidence that if the hidden representations they look as if they had been computed by backdrop and not by any of these scaler update algorithms so it is conceivable that we find backprop in the brain that's why they go here next they go into problems with backdrops so basically why why would we why so far have we believed that back prop isn't happening in the brain so now let's I want to highlight two factors here that that I find a thinker suffice state they have more but first", "start_timestamp": "00:13:04", "end_timestamp": "00:13:51", 
"start_second": 784, "end_second": 831, "url": "https://www.youtube.com/watch?v=a0f07M2uj_A&t=784s", "title": "Backpropagation and the brain", "thumbnail": "https://i.ytimg.com/vi/a0f07M2uj_A/hqdefault.jpg"} {"video_id": "a0f07M2uj_A", "text": "of all back prop demands synaptic symmetry in the forward and backward paths right so basically if you have a neuron and it has output to another neuron what you need to be able to do is to pass back information along that neuron so it kind of has to be a symmetric connection idea of the forward and the backward pass and these need to be exact right and this is just not if you know how neurons are structured they have kind of input dendrites and then there's this accent act action potential and along the axon the signal travels", "start_timestamp": "00:13:51", "end_timestamp": "00:14:32", "start_second": 831, "end_second": 872, "url": "https://www.youtube.com/watch?v=a0f07M2uj_A&t=831s", "title": "Backpropagation and the brain", "thumbnail": "https://i.ytimg.com/vi/a0f07M2uj_A/hqdefault.jpg"} {"video_id": "a0f07M2uj_A", "text": "and the back traveling of the signal just I think is very is very very very slow if even possible and so it's generally not invertible or inverse compute capable so this is one reason why that prop seems unlikely and then the second reason here is error signals are signed and potentially extreme valued and i want to add to that they also just talk about this somewhere that error signals are of a different type right that's a different type so first let's see what signed error signals are signed yes we need to be", "start_timestamp": "00:14:32", "end_timestamp": "00:15:18", "start_second": 872, "end_second": 918, "url": "https://www.youtube.com/watch?v=a0f07M2uj_A&t=872s", "title": "Backpropagation and the brain", "thumbnail": "https://i.ytimg.com/vi/a0f07M2uj_A/hqdefault.jpg"} {"video_id": "a0f07M2uj_A", "text": "able to adjust neurons in a specific directions right if you look at again what we've 
drawn before we said this is how these neurons must update so the first neuron must decrease by two this must decrease by three and this must increase by four now in backprop we need this but if we assume that there is something like a reverse computation or signaling happening here then we still have the problem that usually these output signals are in the form of spiking rates which means that over time right so if a neuron", "start_timestamp": "00:15:18", "end_timestamp": "00:16:07", "start_second": 918, "end_second": 967, "url": "https://www.youtube.com/watch?v=a0f07M2uj_A&t=918s", "title": "Backpropagation and the brain", "thumbnail": "https://i.ytimg.com/vi/a0f07M2uj_A/hqdefault.jpg"} {"video_id": "a0f07M2uj_A", "text": "has zero activation there's just no signal but if a neuron has a high activation it spikes a lot if it has a low activation it kind of spikes sometimes but what it can't do is spike negatively zero is as low as it goes so the thought that there is signed information in the backward pass is inconceivable even if you have something like a second network so you can imagine here instead of this backward connection because of the symmetry problem we have some kind of second neural network that goes in this", "start_timestamp": "00:16:07", "end_timestamp": "00:16:45", "start_second": 967, "end_second": 1005, "url": "https://www.youtube.com/watch?v=a0f07M2uj_A&t=967s", "title": "Backpropagation and the brain", "thumbnail": "https://i.ytimg.com/vi/a0f07M2uj_A/hqdefault.jpg"} {"video_id": "a0f07M2uj_A", "text": "direction still you'd have the problem that here you can only have a positive signal or a zero and they might be extreme valued which okay can't really be encoded with the spiking because neurons are limited in the range they can assume but the signals are also of a different type and what I mean by that is basically if you think of this as a programming problem then the forward passes 
here are our activations right and the backward passes here they are deltas so in the backward pass you either propagate deltas or you", "start_timestamp": "00:16:45", "end_timestamp": "00:17:27", "start_second": 1005, "end_second": 1047, "url": "https://www.youtube.com/watch?v=a0f07M2uj_A&t=1005s", "title": "Backpropagation and the brain", "thumbnail": "https://i.ytimg.com/vi/a0f07M2uj_A/hqdefault.jpg"} {"video_id": "a0f07M2uj_A", "text": "propagate kind of directions so the activations are sort of impulses whereas the backward signals are this is how you need to change they are gradients ultimately so it's fundamentally a different type of data that would be propagated along these directions and that makes it very unlikely because we are not aware as this paper says that neurons can kind of switch the data type that they're transmitting all right so then the paper goes into their", "start_timestamp": "00:17:27", "end_timestamp": "00:18:14", "start_second": 1047, "end_second": 1094, "url": "https://www.youtube.com/watch?v=a0f07M2uj_A&t=1047s", "title": "Backpropagation and the brain", "thumbnail": "https://i.ytimg.com/vi/a0f07M2uj_A/hqdefault.jpg"} {"video_id": "a0f07M2uj_A", "text": "NGRAD hypothesis and what this is the hypothesis basically states that the brain could implement something like neural networks by using an approximate backprop-like algorithm based on autoencoders and I want to jump straight into the algorithm no actually first they do talk about autoencoders which I find very interesting so if you think of autoencoders what is an autoencoder an autoencoder is a network that basically starts out with an input layer and then has a bunch of hidden layers and at the end it tries to", "start_timestamp": "00:18:14", "end_timestamp": "00:18:58", "start_second": 1094, "end_second": 1138, "url": "https://www.youtube.com/watch?v=a0f07M2uj_A&t=1094s", "title": 
"Backpropagation and the brain", "thumbnail": "https://i.ytimg.com/vi/a0f07M2uj_A/hqdefault.jpg"} {"video_id": "a0f07M2uj_A", "text": "reconstruct its own input right so you feed a data in here you get data out here and then your error the error signal it will be your difference to your original input now the usually when we train autoencoders in deep learning we also train this by back prop right we see then this error here and this goes back but if you just think of single layer autoencoders so um let's let's go over here single layer auto-encoder with let's say the the same number of the same number of units in this in this layer what you'll have is so this this", "start_timestamp": "00:18:58", "end_timestamp": "00:19:49", "start_second": 1138, "end_second": 1189, "url": "https://www.youtube.com/watch?v=a0f07M2uj_A&t=1138s", "title": "Backpropagation and the brain", "thumbnail": "https://i.ytimg.com/vi/a0f07M2uj_A/hqdefault.jpg"} {"video_id": "a0f07M2uj_A", "text": "is input this is output and this is the hidden layer right you'll have a weight matrix here and you'll probably have some sort of nonlinear function and then you have another weight matrix here and they call them W and B another way to draw this is I have weight matrix going up then I have a nonlinear function going transforming this into this signal and then I have the be going back right so I'm drawing I'm drawing it in two different ways up here or over here and with the second way you can see that it is kind of a forward backward algorithm", "start_timestamp": "00:19:49", "end_timestamp": "00:20:35", "start_second": 1189, "end_second": 1235, "url": "https://www.youtube.com/watch?v=a0f07M2uj_A&t=1189s", "title": "Backpropagation and the brain", "thumbnail": "https://i.ytimg.com/vi/a0f07M2uj_A/hqdefault.jpg"} {"video_id": "a0f07M2uj_A", "text": "where now the error if you look at what is the error here the error is the difference between this and this and the difference between this and this and 
the difference between this and this right and you can train an autoencoder simply by saying W please make sure that the input here gets mapped closer to the output and B the same thing this will become clear in a second but basically the idea is that you can train an autoencoder only by", "start_timestamp": "00:20:35", "end_timestamp": "00:21:32", "start_second": 1235, "end_second": 1292, "url": "https://www.youtube.com/watch?v=a0f07M2uj_A&t=1235s", "title": "Backpropagation and the brain", "thumbnail": "https://i.ytimg.com/vi/a0f07M2uj_A/hqdefault.jpg"} {"video_id": "a0f07M2uj_A", "text": "using local update rules you don't have to do back prop and that's what this algorithm is proposing namely if you think of a stack of autoencoders transforming one hidden representation into the next right this is the feed-forward function what you can do is first of all you can assume that for each of these functions here you have a perfect inverse right you can perfectly compute the inverse function that's this G here of course this doesn't exist but assume you have it what you then could do is", "start_timestamp": "00:21:32", "end_timestamp": "00:22:17", "start_second": 1292, "end_second": 1337, "url": "https://www.youtube.com/watch?v=a0f07M2uj_A&t=1292s", "title": "Backpropagation and the brain", "thumbnail": "https://i.ytimg.com/vi/a0f07M2uj_A/hqdefault.jpg"} {"video_id": "a0f07M2uj_A", "text": "if you knew on the top layer of course you know if you knew that okay I got this from my forward pass but I would like to have this this is my desired output right so in the output layer you get this this is your error signal you could compute an error right here this is what you do in the output right now in back prop we would back propagate this error along the layers but now we don't do this instead of 
what we do is we use this G function to invert the F function right and by that what we'll", "start_timestamp": "00:22:17", "end_timestamp": "00:23:03", "start_second": 1337, "end_second": 1383, "url": "https://www.youtube.com/watch?v=a0f07M2uj_A&t=1337s", "title": "Backpropagation and the brain", "thumbnail": "https://i.ytimg.com/vi/a0f07M2uj_A/hqdefault.jpg"} {"video_id": "a0f07M2uj_A", "text": "say is what should the hidden representation in layer two have been in order for us to obtain this thing right so the claim here is if in layer two we had had h2 as a hidden representation then we would have landed exactly where we want right that's what this G function does because had we had h2 here and used F on it we would be exactly where we want instead we had this h2 here and used F on it and then we landed here where we don't want so this is where we would want to", "start_timestamp": "00:23:03", "end_timestamp": "00:23:54", "start_second": 1383, "end_second": 1434, "url": "https://www.youtube.com/watch?v=a0f07M2uj_A&t=1383s", "title": "Backpropagation and the brain", "thumbnail": "https://i.ytimg.com/vi/a0f07M2uj_A/hqdefault.jpg"} {"video_id": "a0f07M2uj_A", "text": "be in layer two and this is where we were so again we can compute an error here again instead of back propagating that error what we'll do is we'll use the inverse of the forward function in order to back propagate our desired hidden representation and you can see there is of course a relationship to true back prop here but the important distinction is we are not trying to back propagate the error signal we're trying to invert the desired hidden states of the network and then in each layer we can compute from the forward pass we can", "start_timestamp": "00:23:54", "end_timestamp": "00:24:35", "start_second": 1434, "end_second": 1475, "url": "https://www.youtube.com/watch?v=a0f07M2uj_A&t=1434s", "title": "Backpropagation and the 
brain", "thumbnail": "https://i.ytimg.com/vi/a0f07M2uj_A/hqdefault.jpg"} {"video_id": "a0f07M2uj_A", "text": "compute the difference to the desired hidden state and thereby compute an error signal and now we have achieved what we wanted we want an algorithm that doesn't do back prop that only uses local information in order to compute the error signal that it needs to adjust and by local I mean information in the same layer and also the data type that is propagated by F is activations right of hidden representations and by G is also activations of hidden representations both of them are always positive can be encoded by spiking", "start_timestamp": "00:24:35", "end_timestamp": "00:25:17", "start_second": 1475, "end_second": 1517, "url": "https://www.youtube.com/watch?v=a0f07M2uj_A&t=1475s", "title": "Backpropagation and the brain", "thumbnail": "https://i.ytimg.com/vi/a0f07M2uj_A/hqdefault.jpg"} {"video_id": "a0f07M2uj_A", "text": "neurons and so on so this algorithm achieves what we want they go bit into detail how the actual error update here can be achieved and apparently neurons can achieve you know in the same layer to to adjust themselves to a given desired activation so this algorithm achieves it of course we don't have this G we don't have it and therefore we need to go a bit more complicated what they introduces the this following algorithm the goals are the same but now we assume we do not have a perfect inverse but we have something that is a bit like an", "start_timestamp": "00:25:17", "end_timestamp": "00:26:03", "start_second": 1517, "end_second": 1563, "url": "https://www.youtube.com/watch?v=a0f07M2uj_A&t=1517s", "title": "Backpropagation and the brain", "thumbnail": "https://i.ytimg.com/vi/a0f07M2uj_A/hqdefault.jpg"} {"video_id": "a0f07M2uj_A", "text": "inverse so we have an approximate inverse and they basically suggest if we have an approximate inverse we can do the phone so G G is now an approximate inverse to F what we can do is this is our 
input signal right we use F to map it forward to this and so on all the way up until we get our true error right here this is our error from the environment right this is the nail being wrong and then we do two applications of G right so this is an application of F then we apply G once and we apply G again to what we got", "start_timestamp": "00:26:03", "end_timestamp": "00:26:45", "start_second": 1563, "end_second": 1605, "url": "https://www.youtube.com/watch?v=a0f07M2uj_A&t=1563s", "title": "Backpropagation and the brain", "thumbnail": "https://i.ytimg.com/vi/a0f07M2uj_A/hqdefault.jpg"} {"video_id": "a0f07M2uj_A", "text": "in the forward pass right and this now gives us a measure of how bad our inverse is right so if G is now an approximate inverse we see here oh okay we had h2 in the forward pass and we basically forward passed and then went through our inverse and we didn't land quite exactly where we started but we know that okay this is basically the difference between our forward inverse H and our true H and then we also back project using G again the desired outcome so we invert the desired outcome", "start_timestamp": "00:26:45", "end_timestamp": "00:27:34", "start_second": 1605, "end_second": 1654, "url": "https://www.youtube.com/watch?v=a0f07M2uj_A&t=1605s", "title": "Backpropagation and the brain", "thumbnail": "https://i.ytimg.com/vi/a0f07M2uj_A/hqdefault.jpg"} {"video_id": "a0f07M2uj_A", "text": "here now before we had adjusted directly towards these two right because we said this is what we got this is what we want but now we account for the fact that G isn't a perfect inverse and our assumption is that G here probably makes about the same mistakes as G here so what we'll do is we'll take this vector right here and apply it here in order to achieve this thing and this thing is now the corrected thing our corrected desired hidden representation corrected for the fact that we don't have a perfect inverse and now 
again we have", "start_timestamp": "00:27:34", "end_timestamp": "00:28:16", "start_second": 1654, "end_second": 1696, "url": "https://www.youtube.com/watch?v=a0f07M2uj_A&t=1654s", "title": "Backpropagation and the brain", "thumbnail": "https://i.ytimg.com/vi/a0f07M2uj_A/hqdefault.jpg"} {"video_id": "a0f07M2uj_A", "text": "our error here that we can locally adjust again all the signals propagated here here and here are just neural activations and all the information required to update a layer of neurons is now contained within that layer of neurons right and this goes back through the network so this is how they achieve this this is a bit of a close-up look and here are the computations to do this so basically for the forward updates you want to adjust W into the direction of the H minus the H tilde and the H tilde in", "start_timestamp": "00:28:16", "end_timestamp": "00:29:04", "start_second": 1696, "end_second": 1744, "url": "https://www.youtube.com/watch?v=a0f07M2uj_A&t=1696s", "title": "Backpropagation and the brain", "thumbnail": "https://i.ytimg.com/vi/a0f07M2uj_A/hqdefault.jpg"} {"video_id": "a0f07M2uj_A", "text": "this case would be the hidden representation that you would like to have so you will update your forward weights into the direction such that your hidden representations are closer sorry that your forward hidden representation is closer to your backward hidden representation and for the backward updates now your goal is to make G a better inverse so W here are the weights of F and B are the weights of G so in the backward updates your goal is to make G a better inverse right so what you'll do is again you'll", "start_timestamp": "00:29:04", "end_timestamp": "00:29:48", "start_second": 1744, "end_second": 1788, "url": "https://www.youtube.com/watch?v=a0f07M2uj_A&t=1744s", "title": "Backpropagation and the brain", "thumbnail": "https://i.ytimg.com/vi/a0f07M2uj_A/hqdefault.jpg"} {"video_id": 
"a0f07M2uj_A", "text": "take the difference between now you see the difference here here here right not the same error so here in the W update you use what we labeled error here in the G update you use this error here so this is the error of G so when you update the function G you want to make these two closer together such that G becomes a better inverse right because you're dealing with an approximate inverse you still need to obtain that approximate inverse and this here is how you learn it this algorithm now achieves what we wanted right", "start_timestamp": "00:29:48", "end_timestamp": "00:30:35", "start_second": 1788, "end_second": 1835, "url": "https://www.youtube.com/watch?v=a0f07M2uj_A&t=1788s", "title": "Backpropagation and the brain", "thumbnail": "https://i.ytimg.com/vi/a0f07M2uj_A/hqdefault.jpg"} {"video_id": "a0f07M2uj_A", "text": "local updates check data types check signs check and so on I hope this was clear enough in essence it is pretty simple but it's pretty cool how they work around this they call this difference target propagation now with these kinds of papers I don't think they invented this maybe I'm not sure maybe they did maybe they didn't and this paper just kind of frames it in this hypothesis it is unclear to me I am not familiar with this kind of papers so sorry if I misattribute something here all right then they go into how could these", "start_timestamp": "00:30:35", "end_timestamp": "00:31:27", "start_second": 1835, "end_second": 1887, "url": "https://www.youtube.com/watch?v=a0f07M2uj_A&t=1835s", "title": "Backpropagation and the brain", "thumbnail": "https://i.ytimg.com/vi/a0f07M2uj_A/hqdefault.jpg"} {"video_id": "a0f07M2uj_A", "text": "things be implemented biologically and they go for some evidence and they also state that we used to look at neurons basically in this way where you had input and feedback here very simple simplistic view of neurons whereas nowadays even the computational community 
views neurons in a more differentiated way where you have for example different regions here on the soma that can be separated from each other and you have interneuron interference and so on I'm not qualified too much to comment on this stuff but I", "start_timestamp": "00:31:27", "end_timestamp": "00:32:11", "start_second": 1887, "end_second": 1931, "url": "https://www.youtube.com/watch?v=a0f07M2uj_A&t=1887s", "title": "Backpropagation and the brain", "thumbnail": "https://i.ytimg.com/vi/a0f07M2uj_A/hqdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "yeah so I think since I'm one of the organizers I'll actually take the opportunity to thank all the speakers and all of you for attending it's been a lot of fun to hear the wide range of perspectives and topics this week and of course also thanks to the Simons Institute for hosting us the date I noticed is wrong here actually obviously it's November 20th not October 20th I haven't been sleeping much lately so I'm gonna I think in the interest of time I'm gonna skip over some of the introductory material here we've heard a", "start_timestamp": "00:00:00", "end_timestamp": "00:00:29", "start_second": 0, "end_second": 29, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=0s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "lot about ride-sharing platforms already so let me just skip that I think at this point in the week between the talk that was given by Chris Nosko on Monday and then also the industrial visitors day we had last week we all are familiar with uber and lyft and sidecar and platforms like this so what's our goal in our work oh and I should actually preface this by saying this is joint work with Sid Banerjee who's an assistant professor at Cornell he was doing a postdoc with me and Carlos Riquelme who's a student of mine at", "start_timestamp": "00:00:29", "end_timestamp": 
"00:00:55", "start_second": 29, "end_second": 55, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=29s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "Stanford and also Sid did an internship with lyft and with their data science team and so some of the inspiration for this work came from talking and working with them so I guess what I'm interested in getting across to you and what I found exciting about this problem is that there was a combination of maybe three things that we needed to somehow include in one model and then use that to actually say something useful about the you know the strategy that the platform takes and that's basically that there's passengers and", "start_timestamp": "00:00:55", "end_timestamp": "00:01:26", "start_second": 55, "end_second": 86, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=55s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "drivers who are strategic the platform is setting you know a pricing rule a decision for how transaction prices are set on each interaction and then there's this kind of underlying queueing dynamic that governs you know the number of rides that are requested and the number of drivers that are available and in isolation we have a lot of you know different models and in various literatures that tell us about any one of these three problems one of the things that makes this really interesting to me is the fact that all three kind of
together in one place so even if this would just be I think you know a pure theoretical exercise it would be fun to build a model of something like that now of course you want to do that with some purpose in mind in our case the motivation for wanting to do this in the first place is that we sort of wanted to try to understand what the advantages were of using a dynamic pricing policy over a static pricing policy and I'll explain more what I mean by that as we go on so I'll skip this slide as well I just want to briefly", "start_timestamp": "00:01:54", "end_timestamp": "00:02:19", "start_second": 114, "end_second": 139, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=114s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "point out that there's a wide range of literature in very different communities actually that touches on this so at the same time that you know we've been thinking about matching markets whether in econ or in the EC community it's been interesting to see in the applied probability community there's been a huge surge of interest in models of queuing systems with matching behavior and so I think you know for those of you that are working on matching markets I would strongly recommend sort of looking into some of", "start_timestamp": "00:02:19", "end_timestamp": "00:02:45", "start_second": 139, "end_second": 165, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=139s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "some of the work that I think really starts with Adan and Weiss and goes from there there's a lot of work on strategic queuing models two-sided platforms revenue management so like I said you know there's a lot there and one of the things that made this fun is sitting at an 
intersection of those topics okay so let me tell you a bit about the model so the model is something where we need to capture three features so one is that I need to be able to say something about you know the platform's goals", "start_timestamp": "00:02:45", "end_timestamp": "00:03:13", "start_second": 165, "end_second": 193, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=165s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "right you know how is it setting a pricing policy I need to be able to tell you what the incentives are of passengers and what the incentives are of drivers and that's all sort of the strategic aspect of the model I also want to be able to tell you you know exactly even fixing all of this you know how does the system evolve what are the dynamics of drivers and passengers okay so one apology I want to interject here is that I think this is a talk I personally view as sort of at the tip of a very large iceberg and in many ways", "start_timestamp": "00:03:13", "end_timestamp": "00:03:43", "start_second": 193, "end_second": 223, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=193s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "I think the work that we were doing raises more questions than it answers so I'm happy with some of the answers we got but I also want to be very kind of explicit with you where I think there's important things missing okay so some of them are on this slide and some of them I'll mention as we go through so one of them is that we're gonna focus on just a single block of time and what do I mean by a block I mean something like let's say rush hour or you know maybe for those of you that were here for Chris Nosko's talk a", "start_timestamp": "00:03:43", "end_timestamp": 
"00:04:09", "start_second": 223, "end_second": 249, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=223s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "block of time might be you know a window of time around when bars close on a Friday or Saturday evening so why is that important in many ways what I want to focus on here is not I want to avoid talking about predictable changes in demand okay so everybody knows there's going to be more demand around rush hour than there is in the afternoon alright so I want to avoid that and indeed the platforms avoid that so even before surge pricing became sort of something which is changing on a minute-to-minute basis you", "start_timestamp": "00:04:09", "end_timestamp": "00:04:40", "start_second": 249, "end_second": 280, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=249s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "know it was the case that they would know in advance that there's going to be greater demand let's say around rush hour or bars closing and the surge multiplier would be higher in those intervals so when I talk about static pricing I don't mean a fixed price over the entire week I mean static over something like you know a few hours a block of time okay the next thing is in the talk I'm only going to focus on a single region now of course you know cities are not just a single region and I think even Chris on Monday mentioned", "start_timestamp": "00:04:40", "end_timestamp": "00:05:10", "start_second": 280, "end_second": 310, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=280s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "that you know 
the pricing involves multiple neighborhoods in the work that we've done in the paper the main insights that we have do generalize to networks there's some sort of exceptions to that but I'm not going to delve into that in the talk I think one important problem that my focusing on a single region completely ignores and even the work on networks completely ignores is I guess there's two issues that that sort of eliminates right so one is the notion of an estimated time of arrival or ETA and I", "start_timestamp": "00:05:10", "end_timestamp": "00:05:38", "start_second": 310, "end_second": 338, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=310s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "think as you heard from Chris and as anyone who actually uses uber or lyft would know you know in addition to whether or not there's surge pricing you're very sensitive to when you think you're actually going to get a ride you know so what the ETA actually is so sort of by fixing the network and you'll see more in the models sort of how this plays out now one of the things we don't actually have much to say about is how passengers are sensitive to etas and sort of in particular you know one thing that might mean is that the design of", "start_timestamp": "00:05:38", "end_timestamp": "00:06:07", "start_second": 338, "end_second": 367, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=338s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "the regions themselves like what you consider to be a region is not a topic that we address at all right so you may want to make your regions more granular so that you're able to you know better match supply and demand on a very local scale but as you do that you will also want 
to make sure that drivers who are nearby are able to move in or out okay so we're not really accounting for those effects in the kinds of network model that we build and I think so thinking about ETAs and how", "start_timestamp": "00:06:07", "end_timestamp": "00:06:35", "start_second": 367, "end_second": 395, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=367s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "the platform actually designs you know its topology I think that's a really interesting direction of work it's something Sid and I have talked about we haven't really done much with it yet and finally the last thing I'll mention is the objective function I'm going to focus on is throughput the rate of completed rides there's at least three objectives that you you know might care about there's throughput there's profit and then there's welfare so we have results for throughput that sort of have no", "start_timestamp": "00:06:35", "end_timestamp": "00:07:02", "start_second": 395, "end_second": 422, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=395s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "qualification whatsoever we have results for profit when the system is supply limited in a sense I'll make precise and then there's similar numerical results for welfare but the theory there is actually a bit more challenging and so I'm not going to claim anything for that in the talk okay so let me start by modeling sort of the strategic side of the problem and this is another point which we say you know very little about we don't say anything in our paper on it and I think it's an interesting issue is that we're going to just assert that", "start_timestamp": "00:07:02", 
"end_timestamp": "00:07:30", "start_second": 422, "end_second": 450, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=422s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "the platform takes a fixed fraction of every dollar that's spent and you know I think that's fairly consistent with how the platforms work today but it's kind of interesting I mean as a mechanism designer you might actually think this is one of the very first things you would want to design you'd want to optimize over so our work does not optimize over this it kind of holds that as an exogenous constant I think one of the most interesting things to do is to kind of a lot of questions during Chris's talk on Monday I think alluded", "start_timestamp": "00:07:30", "end_timestamp": "00:07:54", "start_second": 450, "end_second": 474, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=450s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "to this that it's interesting to think about whether the platform should be sort of varying the share that it takes perhaps you know based on the state of the system there are good reasons why most platforms don't really change this on a very fast timescale you know I mean this is the type of thing that would be updated you know over months or something like that if at all and it's really I think more of a sort of cultural issue right I mean I think drivers would not be", "start_timestamp": "00:07:54", "end_timestamp": "00:08:21", "start_second": 474, "end_second": 501, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=474s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": 
"lDLqrsye-rQ", "text": "very happy if they kind of were subject to a rapidly varying share of earnings it becomes very unpredictable from their perspective of course the platform needs both drivers and passengers and it uses pricing to align the two sides one note on terminology in the platforms literature you know in economics if you use the word pricing you have to be a little clearer about what you mean so pricing might mean the fee structure which is the gamma here for example or pricing might mean actually setting the transaction price and one of", "start_timestamp": "00:08:21", "end_timestamp": "00:08:50", "start_second": 501, "end_second": 530, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=501s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "the reasons that ride-sharing platforms were fun to think about in contrast to other kinds of platforms that I find interesting is because they actually directly set the transaction price so that like that's why this is an interesting question you know if you compare that with something like let's say Upwork or Airbnb they have a lot of influence on what the transaction prices might be like but they don't actually directly set them okay so here when I use the word pricing I actually mean the platform is setting the", "start_timestamp": "00:08:50", "end_timestamp": "00:09:12", "start_second": 530, "end_second": 552, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=530s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "transaction price and so they're able to use their mechanism for setting the transaction price to align the two sides so the first bit of notation is that the way I'm going to model the platform setting the transaction price is just as a function of the number 
of available drivers remember I'm focused on only one region there's a number of available drivers right now in that region I'll imagine there's some function that the platform uses to map the number of available drivers to the price that you're going to get charged if", "start_timestamp": "00:09:12", "end_timestamp": "00:09:38", "start_second": 552, "end_second": 578, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=552s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "you take a ride right now okay all right next qualifier so when I say price you know I'm saying you're setting the transaction price and again it's not directly the transaction price that's being set in the ride-sharing platforms it's a multiplier on a base price okay so the way these platforms work they'll have a published formula that's time and distance dependent that's on their website and if you use a fare estimator you can actually directly calculate this so that's what they call the base price", "start_timestamp": "00:09:38", "end_timestamp": "00:10:06", "start_second": 578, "end_second": 606, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=578s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "and any price manipulation that's happening on a faster time scale is happening through a multiplier lyft calls it prime time pricing uber calls it surge pricing and so you know what's happening is that they will tell you that there's some percentage that's going to get added on top of the base price because of the sort of current state of the market so when I use the word price in the talk what I'm really talking about is this multiplier okay that raises yet another interesting question which is of course", 
"start_timestamp": "00:10:06", "end_timestamp": "00:10:31", "start_second": 606, "end_second": 631, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=606s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "you know Los Angeles and San Francisco are very different markets and in Los Angeles in particular you know there may be a very large cost to pulling drivers you know that look available from further away to come pick up a ride and that ride you know that cost is in part distance dependent so these kinds of things you know it's often a question that comes up here is well why is this also not a matter of manipulation not just a multiplier on top of a formula but why not actually just directly be sort of manipulating prices", "start_timestamp": "00:10:31", "end_timestamp": "00:11:00", "start_second": 631, "end_second": 660, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=631s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "in a way that you're not just picking a single multiplier regardless of what the time and distance is going to be and again something I'm not touching I think this is another one of those things just thinking about from a regulatory perspective you know ride-sharing platforms are fighting against the taxi industry and that's you know the taxi sort of industry has you know published fare schedules like this it's already challenging enough to convince the public that you know surge pricing or primetime pricing is palatable I find", "start_timestamp": "00:11:00", "end_timestamp": "00:11:25", "start_second": 660, "end_second": 685, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=660s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": 
"https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "that shocking I have to confess but that's the way it is so given that that's hard enough I think varying this is also going to be you know politically even more difficult but that said from a you know market design standpoint I think it's a reasonable question to ask okay so that's the platform yeah multiplier number of available drivers yes I'm gonna have a model where the state of the system is the number of available drivers and this doesn't do that that's right yeah I think there's a lot of different things that are being", "start_timestamp": "00:11:25", "end_timestamp": "00:12:09", "start_second": 685, "end_second": 729, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=685s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "left out here so that's part of it and as you're gonna see in a second I'm taking in my talk an extreme view where drivers are making entry decisions over longer time scales now if you think about some of the tools that platforms are using and you know Chris talked about some of these on Monday another thing that's left out here is not just sort of the number of available drivers right now but let's say a forecast that I have of how many available drivers there will be all that kind of stuff is just left out of it I", "start_timestamp": "00:12:09", "end_timestamp": "00:12:35", "start_second": 729, "end_second": 755, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=729s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "think there's a lot of interesting things to do there in terms of the richness of the pricing policy okay so with the passengers what do passengers do it's a fairly simple model of passengers 
every passenger is one ride so passenger equals one ride request I don't model any sort of longitudinal behavior of the passenger it's just simple and basically I'm just gonna model the passengers as entering if their reservation value exceeds the current price in the system okay so to model that I think of every ride request", "start_timestamp": "00:12:35", "end_timestamp": "00:13:04", "start_second": 755, "end_second": 784, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=755s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "is being drawn iid from some valuation distribution and if the value that's drawn is bigger than the current price they enter there's some exogenous rate of what I'll call app opens Chris talked about the same thing and that's basically like I open the app and look do I want to request a ride and then look at the price and determine whether I actually request a ride so from that it's pretty easy to work out based on the pricing you know formula what is the rate of ride requests it's the exogenous rate times the tail probability that the", "start_timestamp": "00:13:04", "end_timestamp": "00:13:33", "start_second": 784, "end_second": 813, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=784s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "valuation exceeds the current price okay so this F bar is the tail CDF of the valuation distribution right so that's the passengers it's relatively simple the drivers I think is where it gets a little bit more interesting this is maybe the one place where our motivation for choosing this model kind of came from what seemed to us to be like a natural distinction between drivers and passengers I would say that I think of this
as kind of a stylized extreme point and there you know especially as the platforms changed the", "start_timestamp": "00:13:33", "end_timestamp": "00:14:02", "start_second": 813, "end_second": 842, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=813s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "technology that they use to induce drivers to enter or exit I think we can get a lot more interesting but let me tell you what this is basically the point that we make here is that we think of drivers as making decisions on just a substantively different time scale than passengers okay so if a driver is thinking about whether to drive for example in the early days of lyft you know you had like a booking calendar when you would essentially say like when you wanted to be on or off the platform", "start_timestamp": "00:14:02", "end_timestamp": "00:14:28", "start_second": 842, "end_second": 868, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=842s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "and the time interval over which you were choosing to drive or not is probably something on the order of hours and so essentially what we do is we say well kind of from a driver's perspective they're not responding to the instantaneous state of the system instead what they're thinking is if I enter what's the expected earnings that I'll receive if I'm part of the platform and what they do is they compare the expected earnings they'll make while they're in the system to essentially you know the", "start_timestamp": "00:14:28", "end_timestamp": "00:14:52", "start_second": 868, "end_second": 892, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=868s", "title": "Dynamic Pricing in Ride-Sharing 
Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "kind of reservation earnings rate that they want okay this is also really interesting so this is kind of a target earning model of the driver like basically they have some fixed mental model of how much they want to be able to make and they enter if it exceeds that you know I think Chris pointed to examples of a lot of different driver behavior in their data and you know you would certainly see that across all the platforms so I think that's a really interesting direction also first of all", "start_timestamp": "00:14:52", "end_timestamp": "00:15:16", "start_second": 892, "end_second": 916, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=892s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "just do you have the right kind of utility model for drivers I think this is when I say that we're taking an extreme point what I mean is this time scale separation between drivers and passengers is a fundamental part of the model it would be interesting to think about what starts happening if drivers are responding more directly to instantaneous state and in particular I think the way this comes together with the networks comment earlier is if I'm able to provide signals to drivers that say this is a place where I think", "start_timestamp": "00:15:16", "end_timestamp": "00:15:44", "start_second": 916, "end_second": 944, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=916s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "demand is locally higher than supply and what I'm doing is I want to induce drivers to move in that direction and the effects 
I'm trading off are that it takes drivers time to move to a new area it takes drivers away from the area that they were in you know so I sort of think that's really where the network modeling gets especially interesting so that's kind of one of the things that we want to keep doing with this work as with the passengers I'm going to model this reservation earnings", "start_timestamp": "00:15:44", "end_timestamp": "00:16:09", "start_second": 944, "end_second": 969, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=944s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "rate is iid across the drivers so similarly you can work out what is the actual rate at which drivers enter there's some exogenous rate drivers will enter if their you know desired kind of reservation earnings rate is lower than what they think they're going to make you know per unit time and so expected earnings divided by expected time okay you don't have to worry too much about the fact that the expectations are in both the numerator and denominator here there's sort of a Wald's identity argument that lets you do away with that", "start_timestamp": "00:16:09", "end_timestamp": "00:16:40", "start_second": 969, "end_second": 1000, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=969s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "passenger model yeah oh yeah sorry that's a great point so you're right this is not the number of rides that actually gets served this is the number of ride requests and so what will happen is that if there's no driver there your ride request will be blocked and it'll be dropped no but what I mean is my decision as to whether or not to even make the request given the drivers available yeah yeah so that's a really
good point so first of all when I said earlier that I wasn't dealing with ETA that's what I meant that I'm not", "start_timestamp": "00:16:40", "end_timestamp": "00:17:19", "start_second": 1000, "end_second": 1039, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=1000s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "modeling so in queueing models where you think about and you know this is not necessarily something specific to me I'd say that this type of trade-off exists everywhere let's just think about like so I'm using a model where essentially the passenger's cost is blocking right they may not get served well in a real queuing system like blocking and delay are not completely dissimilar from each other in the sense that if I get blocked what that really is saying is I just have to wait longer to get done with whatever I want", "start_timestamp": "00:17:19", "end_timestamp": "00:17:43", "start_second": 1039, "end_second": 1063, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=1039s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "to get done right and so I think like there's two ways you can think about blocking it's like actually what the cost is it's I tried to get a ride and I couldn't, say on New Year's Eve, or you could think about it as I didn't get the ride and that means I have to wait you know longer to be able to get a ride and I think this is a very very coarse sort of zeroth-order approximation to the right thing to do and the right thing to do would be to actually include ETAs in the model and model that yeah that's a good", "start_timestamp": "00:17:43", "end_timestamp": "00:18:08", "start_second": 1063, "end_second": 1088, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=1063s", "title": 
"Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "question okay so that's the drivers so let me just quickly run through what the queueing model is basically the queueing model is now when I be my queueing model isn't in describing this to you I'm not going to say anything about why drivers and passengers are coming in at the rates that they're coming in they just are and then let's so I've already told you that I have some model that's strategic for how I'll determine the rate at which rides are requested how I'll determine the rate at which drivers actually entered the system now let me", "start_timestamp": "00:18:08", "end_timestamp": "00:18:35", "start_second": 1088, "end_second": 1115, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=1088s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "tell you sort of what actually happens inside the model but those you know rates of arrival of drivers and passengers so basically drivers enter at some rate lambda and when there's a drivers available ride requests arrive at some rate which depends on the number of available drivers that's what we computed earlier um if a driver is available the ride is served otherwise it's blocked rides lasts an exponential time with mean tau in the network model this sort of involves a random walk around the network so it's a little more", "start_timestamp": "00:18:35", "end_timestamp": "00:19:01", "start_second": 1115, "end_second": 1141, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=1115s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "complicated than that and after right completions this is another point where I think someone could do something 
interesting with this so after ride completion there's some exogenous probability that the driver signs out or becomes available again okay so another thing that could be interesting that you can do here is make the exit probability dependent on their experience in the system for obvious reasons that leads to a much more complicated game so this is a you know far simplified version of that I would say that I'm actually less", "start_timestamp": "00:19:01", "end_timestamp": "00:19:27", "start_second": 1141, "end_second": 1167, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=1141s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "sort of concerned about this assumption that this is exogenous because drivers are still making entry decisions based on their kind of expected earnings in the platform so it's not as if they're completely ignoring how they're going to do but I think if you wanted to refine it a bit you could think about making this something which is endogenous okay so that's basically the model of what's going on and so if you look at the picture of what's happening here there's two reasons that there's available drivers there's either new", "start_timestamp": "00:19:27", "end_timestamp": "00:19:48", "start_second": 1167, "end_second": 1188, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=1167s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "entry from the outside world or there's a driver who was busy who became available and came back into the system who chose not to exit okay so there's some queue that's going up and down available drivers coming in and then when rides are requested this queue gets served down okay so a major sort of simplification for us and one of the reasons for exactly the lineup of the
assumptions we had is that this type of queueing model turns out to be what's called a Jackson Network and this is more generally true for the", "start_timestamp": "00:19:48", "end_timestamp": "00:20:14", "start_second": 1188, "end_second": 1214, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=1188s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "network model and Jackson networks are great because their steady-state distribution is a product form all right so despite the fact that there's all these dependencies in the queuing network the steady-state distribution has the property that in steady state the queue lengths in the different you know parts of the network look like they're independent okay so in particular here what it means is that the number of available drivers is something which we have like an exact expression for the steady-state distribution okay so I'll", "start_timestamp": "00:20:14", "end_timestamp": "00:20:41", "start_second": 1214, "end_second": 1241, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=1214s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "skip this slide, sorry, I won't go through everything in detail what I mean is I just want to tell you what an equilibrium of our system is so an equilibrium of our system involves basically saying I connect together the strategic behavior of the passengers and drivers the pricing policy and the queueing model and so what I do is I say okay let me take the queueing model compute a steady-state distribution from it using the steady-state distribution for every driver I can work out what is their expected earnings that they'll", "start_timestamp": "00:20:41", "end_timestamp": "00:21:06", "start_second": 1241, "end_second": 1266, "url": 
"https://www.youtube.com/watch?v=lDLqrsye-rQ&t=1241s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "make while they're in the system and what's the expected time they're gonna live I can use that to then work out what's the entry rate of drivers and then for every passenger I can work out you know sort of for the passenger kind of the steady-state condition is easy it's the same one we had before that passengers will enter as long as the price is going to be is going to be lower than their reservation value and that goes in the entry rate of ride requests so all this comes back together again and I need a consistency check", "start_timestamp": "00:21:06", "end_timestamp": "00:21:31", "start_second": 1266, "end_second": 1291, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=1266s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "that the steady-state distribution actually came from these things that I just computed alright so you can do that and that's kind of a definition of a system equilibrium in our model and you know what we show is that basically as long as you have very mild regularity conditions on the system namely among other things that the price increases when the number of available drivers decreases so this is a condition on the pricing function then equilibria always exist and their unique under reasonable sort of smoothness conditions so oh we", "start_timestamp": "00:21:31", "end_timestamp": "00:21:57", "start_second": 1291, "end_second": 1317, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=1291s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "go that that is so let me talk about sort of nicely smooth 
those conditions here I mean on the distributions so let me sort of move on to how we want to use this so I guess you know half the talk is like here's a model that sort of sits on a knife edge of tractability and still capturing a bunch of effects that are important I've already pointed out to you at least five or six different ways that I think you would want to do things to the model to capture effects that are important in practice that said at least", "start_timestamp": "00:21:57", "end_timestamp": "00:22:27", "start_second": 1317, "end_second": 1347, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=1317s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "in our experience thinking about it adding any one of those things made the model much harder to work with so in order to make progress beyond what we had I think part of the question is going to be you know where you get technical simplicity despite having added in these additional complications and I think one thing for us even with the model that we had that is very helpful is to use limiting arguments to simplify the analysis this is something which is by now very standard in analyzing", "start_timestamp": "00:22:27", "end_timestamp": "00:22:49", "start_second": 1347, "end_second": 1369, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=1347s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "these types of matching markets Eric Budish and Eduardo Azevedo have a really nice paper that sort of talks about some of the things that you can do using this type of approach so in our case the limiting approach that we use is that we have a sequence of systems we consider in the nth system I'm basically scaling up the
exogenous rates the arrival rates of passengers and of drivers and in the nth system I have some pricing policy that's actually indexed by n okay and I'll talk to you a little bit about how that", "start_timestamp": "00:22:49", "end_timestamp": "00:23:17", "start_second": 1369, "end_second": 1397, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=1369s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "works so in each system this gives rise to a system equilibrium and we basically analyze pricing by looking at the asymptotics of these equilibria ok so that's what I'm going to show you I'll show you a bunch of pictures that explain what's going on and there's corresponding theorems behind it so let me start with static pricing again static pricing doesn't mean there's a single multiplier the whole day it means that I have a predictable uncertainty on the order of hours or something like that that I use to set a multiplier but", "start_timestamp": "00:23:17", "end_timestamp": "00:23:43", "start_second": 1397, "end_second": 1423, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=1397s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "I don't change the multiplier on the basis of the exact number of available drivers that are in the system so what you should be thinking when I say static is that idiosyncratic stochastic fluctuations are not being captured by the dynamic pricing policy so in math this means that P of a is a constant for all a that no matter what the level of the queue is I'm setting the same multiplier so there's a theorem here that you know don't worry about reading the technical sort of expressions carefully but here's", "start_timestamp": "00:23:43", "end_timestamp": "00:24:11", "start_second": 1423, 
"end_second": 1451, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=1423s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "basically what's going on what it says is that you pick the multiplier that you're going to use at all eh okay once you've done that then if I scale and there's there's this should be scaled by and this should be our n over and its a scaled completed it's obviously the rate of completed rides is going to go to infinity if if n goes to infinity it's the scaled rate of completed rides that that's normalized so if I take our n over n the rate of completed rides divided by n that has a really natural sort of interpretation", "start_timestamp": "00:24:11", "end_timestamp": "00:24:37", "start_second": 1451, "end_second": 1477, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=1451s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "imagine sort of two unconstrained systems one in which as a passenger whenever I request a ride there's always a driver waiting for me all right and another in which as a driver whenever I finish driving there's always a passenger right there requesting a ride that I can get matched to okay so in one of them demand is not a canoe in the first system I just described supply is not a constraint the second one demand is not a constraint so each of these two sort of naturally give you a supply curve in a demand curve this expression", "start_timestamp": "00:24:37", "end_timestamp": "00:25:06", "start_second": 1477, "end_second": 1506, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=1477s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "is basically how many drivers would enter if 
they knew that whenever they were available there would be a passenger immediately there for them and this expression is how many passengers would enter if they knew that whenever they requested a ride there would immediately be a driver there to serve them that's the same entry rate we had before and so the throughput is basically the min of available supply and available demand so that's really nice because you can visualize it as sort of a demand and", "start_timestamp": "00:25:06", "end_timestamp": "00:25:30", "start_second": 1506, "end_second": 1530, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=1506s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "supply curve crossing each other this green curve here is the demand curve this green curve here is the supply curve they cross right at this point now what's the x-axis this is different than a usual plot in economics the x-axis here is price the y-axis is throughput so I'll tell you about the red curves in a second so what I'm saying here is that you pick a multiplier on the x-axis and the actual throughput that you're going to see in the system is the minimum of this green curve and this green curve okay so this sort of pyramid", "start_timestamp": "00:25:30", "end_timestamp": "00:26:00", "start_second": 1530, "end_second": 1560, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=1530s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "shape thing is what the overall throughput is as you vary the multiplier and you can see that if your only choice was what multiplier to pick and you didn't have any dynamic pricing available the multiplier you would pick is the one where they intersect right that's the peak and so that's what we're 
gonna call the balance price for the rest of the talk the red curves are just depicting what happens as n increases all right so these are simulations for finite n and the green is the theory in the limiting", "start_timestamp": "00:26:00", "end_timestamp": "00:26:23", "start_second": 1560, "end_second": 1583, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=1560s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "system all right everybody okay with that so basically what I'm saying is that there's like a natural notion of available supply and available demand and I can use that to say that if what I want to do is maximize throughput then the multiplier we'd pick is the one that balances the two of them it's pretty intuitive it's just nice to get that out of the primitives so I'll skip this slide I just said that so now let's talk about dynamic pricing and what we did is we decided to focus on a particular family of dynamic pricing", "start_timestamp": "00:26:23", "end_timestamp": "00:26:53", "start_second": 1583, "end_second": 1613, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=1583s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "policies there's a small part of the paper that says something on slightly more complex policies but it is actually a little bit harder to deal with sort of arbitrary dynamic pricing policies and come up with exactly the same conclusions so what we do is we focus on threshold policies and these are very natural because they match exactly with what's done in practice which is basically that I have a threshold and then if the number of available drivers drops below that threshold I set a multiplier that's a high", "start_timestamp": "00:26:53", 
"end_timestamp": "00:27:22", "start_second": 1613, "end_second": 1642, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=1613s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "multiplier if the number of driver available drivers goes above that threshold I set a multiplier that's a low multiplier okay so I'm kind of moving between these two multipliers the slightly more general thing we looked at is a finite number of thresholds okay and then you know the more general thing obviously would be that the the price curve can be anything across the number of Avila's and so this has kind of inspired the reason we specifically focused on this is because we wanted to try to say something about what search", "start_timestamp": "00:27:22", "end_timestamp": "00:27:46", "start_second": 1642, "end_second": 1666, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=1642s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "and prime-time policies are actually doing and they basically have this flavor that there's kind of thresholds that when they're crossed the the multiplier is going to go up or down as a result okay so I think I can convey kind of what our theorem is with pictures more easily than the results I'll show you in a second so let's let me just sort of parse this picture for you okay so on the x-axis sorry on the x-axis what I'm plotting again is the multiplier right so it's going from from 0 up to 5 here that's the price", "start_timestamp": "00:27:46", "end_timestamp": "00:28:16", "start_second": 1666, "end_second": 1696, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=1666s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": 
"multiplier that I'm going to use the blue curve is the throughput that I obtain if I just set that multiplier regardless of the number of available drivers same basically as the curves that were showing you in the previous graph which is pure static pricing okay the purple curve is what I get with a very particular dynamic pricing policy the dynamic pricing policy is one where one of the two multipliers is set at green at this at this point right here and the other multiplier is the one that's bearing on the x-axis okay now", "start_timestamp": "00:28:16", "end_timestamp": "00:28:48", "start_second": 1696, "end_second": 1728, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=1696s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "what does that mean if I'm on this side then it means is that the high multiplier is this value and the low multiplier is down here and if I'm on this side it means that the low multiplier is this value and high multiplier is the one on this axis that make sense okay so that's that's kind of how you can do a 1d visualization of a family of dynamic pricing policies in this graph and that's also why there's a kink right here okay so this then the purple curve is the throughput that I get with that particular dynamic pricing policy as I", "start_timestamp": "00:28:48", "end_timestamp": "00:29:16", "start_second": 1728, "end_second": 1756, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=1728s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "vary on the x-axis and what I'll do is I'll just make n larger and as n gets larger what you notice is what you should see is that the static curve and the dynamic curve the peaks are going to come together okay and so that's what happens in the limit here so this this is the 
limiting value with n going to infinity the purple curve is what I get from dynamic pricing again defined the way that I talked about it where this is one of the two prices and the other price is varying on the x-axis and the blue curve is the one I just showed you", "start_timestamp": "00:29:16", "end_timestamp": "00:29:47", "start_second": 1756, "end_second": 1787, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=1756s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "a few minutes ago which is what I get from static pricing and the important point to notice from this graph is that both of them coincide at their peak okay so what that's basically saying is that in the fluid limit, in this kind of hydrodynamic limit where we're scaling you know down by n, you're not getting anything by using a dynamic pricing policy over a static pricing policy you know there's a bunch of things suppressed in these graphs okay number one it's numeric so obviously I haven't convinced you about other", "start_timestamp": "00:29:47", "end_timestamp": "00:30:11", "start_second": 1787, "end_second": 1811, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=1787s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "potential values for prices that I could use number two I didn't say anything about the threshold right so like what's going on with the threshold and so I'll show you the theorem in a second one of the things we're doing in the background here is that as n is getting larger we're varying the threshold that we use and we're doing so in an optimal way given the two prices that are chosen right so we're kind of favorably biasing the dynamic pricing policy to pick the best threshold that we could given the two prices that were", 
"start_timestamp": "00:30:11", "end_timestamp": "00:30:35", "start_second": 1811, "end_second": 1835, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=1811s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "used and so really the result is saying here is feel free to pick the threshold and the two prices however you want and you're never gonna do better than what what static pricing gets you yeah okay now B I should be done before that already okay so this is the limit and what it basically says is let our n star be the rate of completed rides in the n system using the optimal static price and let our n double star be the rate of completed rise in the n system using the optimal threshold pricing strategy then if the valuation distribution has a", "start_timestamp": "00:30:35", "end_timestamp": "00:31:07", "start_second": 1835, "end_second": 1867, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=1835s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "monotone hazard rate I'll talk more about this in a second then the the difference between these two scaled by n goes to zero okay so some comments on this first of all I want to point out so there's this is a really important restriction in the result and the proof of the result what ends up happening is that we need to look at sort of how the level of demand changes as you vary the pricing policy and it turns out that some condition on the valuation distribution is is sort of necessary it's a little bit looser than this it", "start_timestamp": "00:31:07", "end_timestamp": "00:31:39", "start_second": 1867, "end_second": 1899, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=1867s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": 
"https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "doesn't quite have to be that strong that that's probably most interpretable definition what's interesting is that there's no condition on the on the Preferences of the drivers FC does not enter into this theorem so FC could be arbitrary it's the the restriction is only on that fee okay that's one point the second point is I guess what I find interesting about this result it the way I would state it I I think like the the more glib statement is there's no value to dynamic pricing I think the way I would stated that's a bit more precise", "start_timestamp": "00:31:39", "end_timestamp": "00:32:06", "start_second": 1899, "end_second": 1926, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=1899s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "is that there's only second-order benefit to dynamic pricing right so to the extent that you see benefits from dynamic pricing it's happening because it's happening because you're actually able to sort of correct things in the in the sort of a Gaussian term not in the not in the fluid term the third comment I want to make is that maybe one naive view you might have on this result as well of course because you took the limit you got rid of all stochasticity but the thing is that's not true so that the drivers went there is actually a", "start_timestamp": "00:32:06", "end_timestamp": "00:32:30", "start_second": 1926, "end_second": 1950, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=1926s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "well-defined steady state distribution that's seen sorry/not drivers by passengers when they enter even in the limiting system okay so when a passenger arrives it is the 
case that the number of available drivers that they see is a random variable and that remains true even when you pass to the limit okay so it is true that we're speeding up the system but from a passenger's perspective they're always reacting to instantaneous state so that stochasticity is still relevant and then I think that's part of what", "start_timestamp": "00:32:30", "end_timestamp": "00:32:56", "start_second": 1950, "end_second": 1976, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=1950s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "makes the fact that this happens a little bit surprising so one comment I want to make is that these types of results are similar to stuff that's kind of long-standing in the revenue management literature where in the early results in that literature you know it was established that dynamic pricing is not going to do better than fluid pricing policies of the sort that we were talking about and there's been a lot of work extending those types of results but one interesting thing in that literature is there's not really been any work that", "start_timestamp": "00:32:56", "end_timestamp": "00:33:22", "start_second": 1976, "end_second": 2002, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=1976s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "looks at two-sided markets so in that literature usually what happens is you're kind of given a fixed basket of stuff to sell and then you're allowed to use whatever mechanism you want to sell up to a deadline and like the canonical example is airline tickets so you're an airline you're selling seats and so you have like planes that are empty and you have a bunch of seats to sell up to a deadline when the plane actually takes off you're 
allowed to use whatever pricing policy you want to set prices for the seats okay in our case kind of", "start_timestamp": "00:33:22", "end_timestamp": "00:33:46", "start_second": 2002, "end_second": 2026, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=2002s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "one of the things I think that makes this really interesting is that we have this you know that the seats are strategic the seats are the drivers and they're entering because of what we're doing so you know that's one of the big differences and so again just to point out kind of why this is interesting to work on is I think there are if you look at the problems that are faced by many platforms in managing inventory internally so the analog in Airbnb would be sort of managing the inventory of hotel rooms", "start_timestamp": "00:33:46", "end_timestamp": "00:34:11", "start_second": 2026, "end_second": 2051, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=2026s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "they look like classic revenue management problems but with strategic supply and so that you know that perspective is something which opens up I think a range of algorithmic insight that isn't there yet in the literature so I'm going to skip the proof just because we're already well over time for this session I'm happy to talk to you about that offline if you want or if you play back the video and look at the slide slowly you should be able to make it out so last thing I'll tell you is and this result I think is less", "start_timestamp": "00:34:11", "end_timestamp": "00:34:40", "start_second": 2051, "end_second": 2080, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=2051s", "title": "Dynamic
Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "surprising it's just I just like it because of the nature of how we formulated it so dynamic pricing is obviously helpful all right and already there's a hint in this second-order effect that I mentioned and really like why do we use it well one reason we use it is because we don't want to have to sit and futz around with what we think the system parameters are actually going to be even when it's predictable uncertainty all right so one thing that's nice about dynamic pricing is it sort of naturally adjusts itself", "start_timestamp": "00:34:40", "end_timestamp": "00:35:02", "start_second": 2080, "end_second": 2102, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=2080s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "to wherever the system is living okay and somehow you would like to be able to make a statement that captures that robustness that if the system parameters are known yeah then maybe static pricing is at least you know nearly as good as dynamic pricing but when system parameters are unknown presumably you should gain something in robustness because you were using a dynamic pricing policy so there are a lot of different ways to give these kinds of robustness results and again in the revenue management", "start_timestamp": "00:35:02", "end_timestamp": "00:35:25", "start_second": 2102, "end_second": 2125, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=2102s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "literature it usually involves taking the second-order limit like looking at a diffusion limit of the system and those are nice but but
definitely technically complex what we want to do is take maybe more of a robust optimization viewpoint where we basically said let's ask can we get sort of dynamic pricing to be near optimal across a wide range of system parameters okay so again I'll just tell you the result in pictures so here what I'm doing is I'm picking one of the parameters let's say the exogenous", "start_timestamp": "00:35:25", "end_timestamp": "00:35:50", "start_second": 2125, "end_second": 2150, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=2125s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "arrival rate of passengers and of you know app opens and what I'm plotting here is the throughput I would get if you know for each value of the system parameter I set the optimal static price okay the blue curve is what I get if I set the optimal static price believing that the exogenous rate of customer arrivals mu zero was 4.0 but in fact it ended up being something different and so yeah right here obviously it's optimal but then it really degrades quickly as I move away", "start_timestamp": "00:35:50", "end_timestamp": "00:36:24", "start_second": 2150, "end_second": 2184, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=2150s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "right well what you really want like is the difference between these two things now suppose that what I tell myself is okay you know I don't know what mu zero is going to be exactly but I think it's gonna be between three point six and four point four so let me take each of those compute the corresponding static prices for each of those the optimal static prices and use these two as the two prices in a dynamic pricing scheme
okay again buried in the background is what threshold do I set to move between these", "start_timestamp": "00:36:24", "end_timestamp": "00:36:49", "start_second": 2184, "end_second": 2209, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=2184s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "two and I'll just tell you that we're using the same sort of optimality result that was in the proof to be able to set that threshold once you give me the two prices I can set the optimal threshold right and what you find is that if you now plot how dynamic pricing does using these two prices the two prices from the extremes then yeah it degrades badly outside of that uncertainty set but inside the uncertainty set what we can show is that dynamic pricing always gets you at least the linear interpolation between", "start_timestamp": "00:36:49", "end_timestamp": "00:37:16", "start_second": 2209, "end_second": 2236, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=2209s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "these two optimal values okay and at least in the numerical experiments we've done that tends to be quite good the green curve is concave in all the experiments we've looked at so far numerical experiments so basically the point is the result we have is something which characterizes the robustness of dynamic pricing through this notion of an uncertainty set over system parameters and says that you always do at least as well as the linear interpolation from the endpoints of your uncertainty set and this", "start_timestamp": "00:37:16", "end_timestamp": "00:37:42", "start_second": 2236, "end_second": 2262, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=2236s", "title": "Dynamic Pricing in Ride-Sharing
Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "kind of a technical statement of that okay so let me conclude so a bunch of different things that we were trying to do in this work and so some things I didn't you know manage to get to network modeling with multiple regions when I say our main insights generalize I mean they generalize sort of to the extent that we're you know able to characterize this difference between static and dynamic pricing we haven't accounted for ETA sort of in the way that David asked about right aggregate welfare is something like I", "start_timestamp": "00:37:42", "end_timestamp": "00:38:10", "start_second": 2262, "end_second": 2290, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=2262s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "said numerically we get very similar insights but we're trying to work the theory out there and I think it really gets interesting when you start changing you know when you start asking you know much more foundational design questions so one thing that I think Chris even pointed out is that you know you can imagine showing drivers these heat maps where we're saying like here is a place where there's more or less demand available we have some model inside the queueing model we've come up with some nice sort of tractable", "start_timestamp": "00:38:10", "end_timestamp": "00:38:37", "start_second": 2290, "end_second": 2317, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=2290s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "ways to deal with this but extending it then to add in the strategic component has not been so easy but I think that again this is a really
important direction to take the work there's fee structure changing the percentage we talked about already and finally I think that there's sort of changing the matching algorithm that's something which has really not been discussed a lot if you ask you know Uber or Lyft I'm sure they would say that they want it to be fairly transparent so you're matched to the nearest driver there's all", "start_timestamp": "00:38:37", "end_timestamp": "00:39:02", "start_second": 2317, "end_second": 2342, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=2317s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "kinds of really good reasons why you might want to manage inventory very differently across the network especially if you for those of you that were here for the industry the reverse field trip day you know they were talking about this sort of you know tessellation of the earth into smaller regions than what they've been using before and once you start doing that I absolutely think you would want your match algorithm to occasionally pick people from adjacent regions you know depending on you know or another example", "start_timestamp": "00:39:02", "end_timestamp": "00:39:26", "start_second": 2342, "end_second": 2366, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=2342s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "lDLqrsye-rQ", "text": "this would be Chris on Monday mentioned that Uber wants to be able to accommodate driver preferences it was either Chris or last week that if a driver says hey I want to be able to get back to the East Bay in an hour I'm gonna accommodate well that's gonna change the match algorithm you're not going to give it to the nearest driver necessarily and I think this is again one of these things with sort of geographic matching you would
have to have a good queueing model and you would have to be able to model the incentives", "start_timestamp": "00:39:26", "end_timestamp": "00:39:47", "start_second": 2366, "end_second": 2387, "url": "https://www.youtube.com/watch?v=lDLqrsye-rQ&t=2366s", "title": "Dynamic Pricing in Ride-Sharing Platforms", "thumbnail": "https://i.ytimg.com/vi/lDLqrsye-rQ/maxresdefault.jpg"} {"video_id": "Cs_j-oNwGgg", "text": "hi there what you're seeing here is an energy based model that learns the concept of a shape from a demonstration on the left so on the left you can see a demonstration of data points sampled from a shape in these cases circles or squares and then the corresponding energy function that the model infers from that and then it can replicate that shape on the right using that energy function so the paper we're going to analyze today is called concept learning with energy based models by Igor Mordatch of OpenAI and this is a very cool", "start_timestamp": "00:00:00", "end_timestamp": "00:00:40", "start_second": 0, "end_second": 40, "url": "https://www.youtube.com/watch?v=Cs_j-oNwGgg&t=0s", "title": "Concept Learning with Energy-Based Models (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/Cs_j-oNwGgg/maxresdefault.jpg"} {"video_id": "Cs_j-oNwGgg", "text": "paper or at least I think it's a very cool paper but it is also a very hard paper so therefore first I want to kind of make a bit of an introduction into the concepts that we are facing in this paper so the first thing you need to know are energy functions or energy based models what is an energy function an energy function sometimes called E is simply a function with one or multiple inputs let's call them X and if the energy function is happy with X it will output the value 0 and if the energy function is not happy with X it", "start_timestamp": "00:00:40", "end_timestamp": "00:01:22", "start_second": 40, "end_second": 82, "url": "https://www.youtube.com/watch?v=Cs_j-oNwGgg&t=40s",
"title": "Concept Learning with Energy-Based Models (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/Cs_j-oNwGgg/maxresdefault.jpg"} {"video_id": "Cs_j-oNwGgg", "text": "will be a high value like larger than zero so this is happy this is not happy so let's give some examples of this we can formulate almost any machine learning problem in terms of an energy function let's say we have a classifier the classifier takes as an input an image here maybe of a cat and the label so if the label is cat then the energy will be zero if the energy function is of course working correctly but if we give the energy function the same image but we give it a wrong label dog then it is very high in the case of the", "start_timestamp": "00:01:22", "end_timestamp": "00:02:12", "start_second": 82, "end_second": 132, "url": "https://www.youtube.com/watch?v=Cs_j-oNwGgg&t=82s", "title": "Concept Learning with Energy-Based Models (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/Cs_j-oNwGgg/maxresdefault.jpg"} {"video_id": "Cs_j-oNwGgg", "text": "classifier of course we can simply take the loss function as the energy function and we automatically get an energy based model so the loss function here would be something like the negative log probability of the correct class but in any case it is just going to be a high number let's call it 10 to the 9 so the energy function says ha this is very bad this thing here is very bad the entire thing you input it won't tell you yet what's bad about it so that also means you can", "start_timestamp": "00:02:12", "end_timestamp": "00:02:49", "start_second": 132, "end_second": 169, "url": "https://www.youtube.com/watch?v=Cs_j-oNwGgg&t=132s", "title": "Concept Learning with Energy-Based Models (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/Cs_j-oNwGgg/maxresdefault.jpg"} {"video_id": "Cs_j-oNwGgg", "text": "change any of the two things to make the classifier happy now usually
we're concerned with changing the label it's like tell me which other label do I need to input to make you happy and if we make the labels differentiable of course we never input the true label we actually input like a distribution a softmax distribution over labels and that's differentiable so we can use gradient descent to update the dog label we can use gradient descent to find the label that would make the energy function more happy so we could use", "start_timestamp": "00:02:49", "end_timestamp": "00:03:24", "start_second": 169, "end_second": 204, "url": "https://www.youtube.com/watch?v=Cs_j-oNwGgg&t=169s", "title": "Concept Learning with Energy-Based Models (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/Cs_j-oNwGgg/maxresdefault.jpg"} {"video_id": "Cs_j-oNwGgg", "text": "gradient descent to get the cat label if we had a good classifier but we can also optimize the image to make it compatible with the dog label right that's the thing that if you ever saw deep dream or something like this those models do exactly that they optimize the input image for a particular label and there you can view the entire neural network including the loss function as the energy function so what's another example another example is let's say you have a k-means model and the energy function is simply input a data point", "start_timestamp": "00:03:24", "end_timestamp": "00:04:09", "start_second": 204, "end_second": 249, "url": "https://www.youtube.com/watch?v=Cs_j-oNwGgg&t=204s", "title": "Concept Learning with Energy-Based Models (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/Cs_j-oNwGgg/maxresdefault.jpg"} {"video_id": "Cs_j-oNwGgg", "text": "and for the data point what you're going to do is you're going to find the min cluster index the min over k you know you have your multiple clusters here and your data point might be here so you're going to find the cluster that's closest and then the distance here this distance d will be the energy of that so the
model is very happy when your data point comes from one of the clusters but your model is not happy when the data point is far away and that would be the cost function of the k-means model so that's an energy based model", "start_timestamp": "00:04:09", "end_timestamp": "00:04:44", "start_second": 249, "end_second": 284, "url": "https://www.youtube.com/watch?v=Cs_j-oNwGgg&t=249s", "title": "Concept Learning with Energy-Based Models (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/Cs_j-oNwGgg/maxresdefault.jpg"} {"video_id": "Cs_j-oNwGgg", "text": "too now currently energy based models have come into fashion through things like GANs or noise contrastive estimation so in a GAN what you have is you have a discriminator and the discriminator will basically learn a function to differentiate data from non data so that by itself is an energy function so the discriminator will learn a function and that function will be low wherever the discriminator thinks there is data right so it will usually do this around the data point so the data points form", "start_timestamp": "00:04:44", "end_timestamp": "00:05:23", "start_second": 284, "end_second": 323, "url": "https://www.youtube.com/watch?v=Cs_j-oNwGgg&t=284s", "title": "Concept Learning with Energy-Based Models (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/Cs_j-oNwGgg/maxresdefault.jpg"} {"video_id": "Cs_j-oNwGgg", "text": "the valleys right here and then the generator will basically take that discriminator function and will try to infer points that are also in these valleys to produce points that are also in the valleys and then you basically have an energy learning competition the discriminator now tries to push down on the energy where the true data is and push up on the energy where the generated data is and that will give you basically a steeper energy based function in the future I hope so in this case the discriminator neural network is", "start_timestamp":
"00:05:23", "end_timestamp": "00:06:05", "start_second": 323, "end_second": 365, "url": "https://www.youtube.com/watch?v=Cs_j-oNwGgg&t=323s", "title": "Concept Learning with Energy-Based Models (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/Cs_j-oNwGgg/maxresdefault.jpg"} {"video_id": "Cs_j-oNwGgg", "text": "the energy function and the generator just tries to produce data that is compatible with that energy function so I hope that concept of what an energy function is is a bit clearer and again any machine learning problem can be formulated in terms of an energy function now what is not done so far is what we alluded to a little bit before in the classifier example and also here so right now when we want to train a GAN we simply take the generator to produce data now what's the generator's goal the generator's goal is to hit", "start_timestamp": "00:06:05", "end_timestamp": "00:06:44", "start_second": 365, "end_second": 404, "url": "https://www.youtube.com/watch?v=Cs_j-oNwGgg&t=365s", "title": "Concept Learning with Energy-Based Models (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/Cs_j-oNwGgg/maxresdefault.jpg"} {"video_id": "Cs_j-oNwGgg", "text": "those valleys in the energy function and we produce a generator to in one shot produce this data but what we could also do is of course we could just start somewhere let's say here we pick a random data point and then we use gradient descent because the energy function in this case is smooth we use gradient descent to just drop down this valley and then find ourselves in this valley so without ever training a generator we can use this method to produce points that are in the valley of the energy function right and this I don't know if", "start_timestamp": "00:06:44", "end_timestamp": "00:07:23", "start_second": 404, "end_second": 443, "url": "https://www.youtube.com/watch?v=Cs_j-oNwGgg&t=404s", "title": "Concept Learning with Energy-Based Models (Paper Explained)", "thumbnail":
"https://i.ytimg.com/vi/Cs_j-oNwGgg/maxresdefault.jpg"} {"video_id": "Cs_j-oNwGgg", "text": "people I can guess people have trained GANs like this the reason why it doesn't work let's say in the real world is because that procedure will just produce adversarial examples for the discriminator and those usually look like nothing like data because if you keep the discriminator just stable and gradient descent against it what you'll get isn't really qualitatively good but in principle if the discriminator was a good energy function for the data to describe the data we could use gradient descent the same up here in order to", "start_timestamp": "00:07:23", "end_timestamp": "00:07:59", "start_second": 443, "end_second": 479, "url": "https://www.youtube.com/watch?v=Cs_j-oNwGgg&t=443s", "title": "Concept Learning with Energy-Based Models (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/Cs_j-oNwGgg/maxresdefault.jpg"} {"video_id": "Cs_j-oNwGgg", "text": "find a good label for an image given that we have a good energy function right so this means that we could simply gradient descent on the label in order to find a better label so in this paper we're going to have a situation where we say we're given an energy function and we're given a bunch of inputs they are then called X a and W and if I have my energy function already if I have given my energy function and I have given two of those three things any two right I can infer the last thing simply by gradient", "start_timestamp": "00:07:59", "end_timestamp": "00:08:47", "start_second": 479, "end_second": 527, "url": "https://www.youtube.com/watch?v=Cs_j-oNwGgg&t=479s", "title": "Concept Learning with Energy-Based Models (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/Cs_j-oNwGgg/maxresdefault.jpg"} {"video_id": "Cs_j-oNwGgg", "text": "descent on my energy function because I know the energy function is zero when all of these when the energy function is happy with the input so when all of
these things agree basically the energy function is happy it will output zero otherwise it will output a high value therefore if I'm given any two of those three things I can find a compatible third thing by descending and then of course over here in these machine learning problems the task was always actually to learn an energy function right so usually in the", "start_timestamp": "00:08:47", "end_timestamp": "00:09:26", "start_second": 527, "end_second": 566, "url": "https://www.youtube.com/watch?v=Cs_j-oNwGgg&t=527s", "title": "Concept Learning with Energy-Based Models (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/Cs_j-oNwGgg/maxresdefault.jpg"} {"video_id": "Cs_j-oNwGgg", "text": "training data set we are given images and labels and we want to learn this energy function which would be parameterized so we want to learn the parameters and the same here in our general case if we are now given three things but we are not given the parameters of the energy function we don't know what those are as long as we're given all of the inputs and our training data set and our training data set guarantees these are actually you know these are inputs that are compatible with each other the energy function should be low we can", "start_timestamp": "00:09:26", "end_timestamp": "00:09:59", "start_second": 566, "end_second": 599, "url": "https://www.youtube.com/watch?v=Cs_j-oNwGgg&t=566s", "title": "Concept Learning with Energy-Based Models (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/Cs_j-oNwGgg/maxresdefault.jpg"} {"video_id": "Cs_j-oNwGgg", "text": "simply gradient descent on the parameters of the energy function so in a sense there are four things right there are these three inputs and then there are the parameters of the energy function if we're given any three of those four we can gradient descent on the rest and that's going to be the basis so the X here is going to be the so-called state and the state in this paper is going to be
images of entities the entities sorry it is not going to be images but the entities are these little circles that you're going to see and", "start_timestamp": "00:09:59", "end_timestamp": "00:10:38", "start_second": 599, "end_second": 638, "url": "https://www.youtube.com/watch?v=Cs_j-oNwGgg&t=599s", "title": "Concept Learning with Energy-Based Models (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/Cs_j-oNwGgg/maxresdefault.jpg"} {"video_id": "Cs_j-oNwGgg", "text": "each of those entities can have an x position a y position and I believe a color so R G and B so each of those can have that and then the concatenation of all of those attributes is a one big vector and that is your X that's your state so state is number of entities and their attributes a is going to be an attention mask over the state so a is going to be here you have four entities so a will have four entries telling you which of these entities you should pay attention to right now and W is going to be a concept vector so called so W is going", "start_timestamp": "00:10:38", "end_timestamp": "00:11:30", "start_second": 638, "end_second": 690, "url": "https://www.youtube.com/watch?v=Cs_j-oNwGgg&t=638s", "title": "Concept Learning with Energy-Based Models (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/Cs_j-oNwGgg/maxresdefault.jpg"} {"video_id": "Cs_j-oNwGgg", "text": "to be the embedding of a concept now what a concept is in this case is very general I can give you an example one a concept is do any of the entities that the a pays attention to are they close to each other so in this case you see we have two entities that a has a high value on and this is this ball up here and this ball down here now if the concept vector is the embedding for the concept of being close to each other then the energy function would be very happy if those two things are close to each other and it would be very unhappy", "start_timestamp": "00:11:30", "end_timestamp": "00:12:18", "start_second": 690,
"end_second": 738, "url": "https://www.youtube.com/watch?v=Cs_j-oNwGgg&t=690s", "title": "Concept Learning with Energy-Based Models (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/Cs_j-oNwGgg/maxresdefault.jpg"} {"video_id": "Cs_j-oNwGgg", "text": "if those two things aren't close to each other but in the very same situation so the same X the same attention mask but a different concept so a different W vector right here then the energy function would be maybe very happy if the two things are far apart and maybe unhappy if the two things are close so the question is always how are the three things that you put into the energy function compatible with each other and given all but one of these things you can infer the other so let's say you have a perfect energy function for this", "start_timestamp": "00:12:18", "end_timestamp": "00:12:59", "start_second": 738, "end_second": 779, "url": "https://www.youtube.com/watch?v=Cs_j-oNwGgg&t=738s", "title": "Concept Learning with Energy-Based Models (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/Cs_j-oNwGgg/maxresdefault.jpg"} {"video_id": "Cs_j-oNwGgg", "text": "this all of these for this situation you're just given the energy function you can trust it and you are given let's make an example you are given the X so you're given the state I'm gonna draw the state down here right okay this is the state and you were given the W and the W is the embedding it's a vector in the embedding space but the embedding is for a line right so the geometric unit of a line now your task is to find the attention mask that will make the energy function happy and as you can see right here", "start_timestamp": "00:12:59", "end_timestamp": "00:13:47", "start_second": 779, "end_second": 827, "url": "https://www.youtube.com/watch?v=Cs_j-oNwGgg&t=779s", "title": "Concept Learning with Energy-Based Models (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/Cs_j-oNwGgg/maxresdefault.jpg"} {"video_id":
"Cs_j-oNwGgg", "text": "what you would do is you would put a lot of weight on this this this and this ball and no weight on that ball because those make a line and since everything here is differentiable so the state is differentiable the attention is differentiable and the concepts are vectors that are differentiable you can use gradient descent to find that another example if you're given again the same W so line and you were given this following thing and you were given now you're given the attention on these three and you say please find the X", "start_timestamp": "00:13:47", "end_timestamp": "00:14:28", "start_second": 827, "end_second": 868, "url": "https://www.youtube.com/watch?v=Cs_j-oNwGgg&t=827s", "title": "Concept Learning with Energy-Based Models (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/Cs_j-oNwGgg/maxresdefault.jpg"} {"video_id": "Cs_j-oNwGgg", "text": "please find the X the state that makes this energy function happy now this here you would call the starting state the x0 your task is going to be find the x1 find the state how do you have to change this state such that the energy function is happy and of course the answer is going to be to push this ball here inward until it is in the middle of the two others so the three form a line right these three form a line you don't have to do anything to this ball up here because there is no attention on it and the attention it's", "start_timestamp": "00:14:28", "end_timestamp": "00:15:05", "start_second": 868, "end_second": 905, "url": "https://www.youtube.com/watch?v=Cs_j-oNwGgg&t=868s", "title": "Concept Learning with Energy-Based Models (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/Cs_j-oNwGgg/maxresdefault.jpg"} {"video_id": "Cs_j-oNwGgg", "text": "only is the concept for the things that you put attention on yeah and the state are those three in agreement and the energy function is happy okay we have covered the basics now let's dive into the paper I think this is
the longest introduction ever but I think it will pay off once you see it so this author I think it's a single author identifies two different things that you can do with an energy function here of course you can do more as we saw but they identify two so here is where you", "start_timestamp": "00:15:05", "end_timestamp": "00:15:50", "start_second": 905, "end_second": 950, "url": "https://www.youtube.com/watch?v=Cs_j-oNwGgg&t=905s", "title": "Concept Learning with Energy-Based Models (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/Cs_j-oNwGgg/maxresdefault.jpg"} {"video_id": "Cs_j-oNwGgg", "text": "have given the initial state and an attention mask and you want to find the x1 the state that satisfies the concept and the attention the most this the author calls generation as you can see here these four things that you have the attention on are pushed around until they make a square because the concept right now is square and in the other case where you are given this x0 and x1 just call this X right here just call this thing X if you're given those two and you are given the concept square and you're tasked with finding the", "start_timestamp": "00:15:50", "end_timestamp": "00:16:32", "start_second": 950, "end_second": 992, "url": "https://www.youtube.com/watch?v=Cs_j-oNwGgg&t=950s", "title": "Concept Learning with Energy-Based Models (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/Cs_j-oNwGgg/maxresdefault.jpg"} {"video_id": "Cs_j-oNwGgg", "text": "attention mask of course you're going to put the attention on these right here and that is going to happen through gradient descent again we're not learning a model to give you that attention like in a GAN where we're learning a generator to just one shot give it to you right now what we're going to do is we're going to gradient descent optimize on our smooth energy function to give us that perfect attention mask that satisfies the energy function all right so this is the difference
right here gradient descent is part of the output", "start_timestamp": "00:16:32", "end_timestamp": "00:17:05", "start_second": 992, "end_second": 1025, "url": "https://www.youtube.com/watch?v=Cs_j-oNwGgg&t=992s", "title": "Concept Learning with Energy-Based Models (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/Cs_j-oNwGgg/maxresdefault.jpg"} {"video_id": "Cs_j-oNwGgg", "text": "procedure of the model usually we just use it to learn and we learn a one-shot model but here gradient descent is part of the model so they introduce energy functions here and they say okay we can have a policy on X so if we're given a concept W and if we're given an A we can have a policy over X which basically means we can find X's that are compatible with that by running gradient descent here you see there is an XK minus 1 and we are running gradient descent on the energy function with respect to X to find a better X", "start_timestamp": "00:17:05", "end_timestamp": "00:17:48", "start_second": 1025, "end_second": 1068, "url": "https://www.youtube.com/watch?v=Cs_j-oNwGgg&t=1025s", "title": "Concept Learning with Energy-Based Models (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/Cs_j-oNwGgg/maxresdefault.jpg"} {"video_id": "Cs_j-oNwGgg", "text": "that satisfies the energy function given those inputs and the same if we want to find an attention mask we are running gradient descent on the attention mask again in order to satisfy the same energy function so you see the inputs are both times the same the concept here we can input square here we can input square but the difference is what we're running gradient descent on and what we keep constant and I would get I would add a third line here actually because we can also if we're given an X and an A we can also infer a W and that's going", "start_timestamp": "00:17:48", "end_timestamp": "00:18:34", "start_second": 1068, "end_second": 1114, "url": "https://www.youtube.com/watch?v=Cs_j-oNwGgg&t=1068s", "title": "Concept 
Learning with Energy-Based Models (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/Cs_j-oNwGgg/maxresdefault.jpg"} {"video_id": "Cs_j-oNwGgg", "text": "to be an integral part so if I have this right here and this situation and I have say I have attention on these four now I can ask the model so I'm given X and I'm given a I can ask the model to infer W and the model should ideally output aha this is a square now the model isn't going to output square the model is going to output a vector representation of square right so the model is going to output square but as a vector of numbers because that's how we've trained it W is the embedding but what we can then do later is we can say okay I'm", "start_timestamp": "00:18:34", "end_timestamp": "00:19:21", "start_second": 1114, "end_second": 1161, "url": "https://www.youtube.com/watch?v=Cs_j-oNwGgg&t=1114s", "title": "Concept Learning with Energy-Based Models (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/Cs_j-oNwGgg/maxresdefault.jpg"} {"video_id": "Cs_j-oNwGgg", "text": "not going to tell you it's a square you just come up with a vector W that describes this situation and now I'm going to take that vector W that you came up with mister or missus model and I'm going to tell you a new situation this situation right here and I'm going to now give you X and I'm going to give you the W that you yourself have output and now please tell me what's the a and then the model is of course supposed to tell you oh these four here are the a so without ever telling it that it should be a square what", "start_timestamp": "00:19:21", "end_timestamp": "00:20:04", "start_second": 1161, "end_second": 1204, "url": "https://www.youtube.com/watch?v=Cs_j-oNwGgg&t=1161s", "title": "Concept Learning with Energy-Based Models (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/Cs_j-oNwGgg/maxresdefault.jpg"} {"video_id": "Cs_j-oNwGgg", "text": "you can do is you can let the model infer a W from one example 
situation and then transfer that W to a new situation so it can identify you can just say whatever concept I have up here please apply that same concept which is the W down here and this is the entire paper now this is the concept learning through energy based models okay so that is kind of a third line I would add down here you can infer a concept vector if you're given the X and the a so in order to do all this their energy function is going to be a so called relational neural", "start_timestamp": "00:20:04", "end_timestamp": "00:20:47", "start_second": 1204, "end_second": 1247, "url": "https://www.youtube.com/watch?v=Cs_j-oNwGgg&t=1204s", "title": "Concept Learning with Energy-Based Models (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/Cs_j-oNwGgg/maxresdefault.jpg"} {"video_id": "Cs_j-oNwGgg", "text": "network so what you'll have is you'll have a simple neural network a multi-layer perceptron that always connects two entities to each other with the concept vector and then this is I believe a sigmoid that connects the attention masks of the two and then you simply sum over all pairs of two entities in your model and then you send that through an MLP sorry through an MLP again this I believe is not so important it's just important that they can feed this entire situation the X the a and the W they can basically feed into a", "start_timestamp": "00:20:47", "end_timestamp": "00:21:26", "start_second": 1247, "end_second": 1286, "url": "https://www.youtube.com/watch?v=Cs_j-oNwGgg&t=1247s", "title": "Concept Learning with Energy-Based Models (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/Cs_j-oNwGgg/maxresdefault.jpg"} {"video_id": "Cs_j-oNwGgg", "text": "neural network and the neural network comes up with a number of how well those three things fit together and then you can transfer these concepts that's pretty cool now the only question is of course and we've always said we're given an energy function or we just have it but of course this is a 
neural network and the neural network has parameters and the parameters we don't know what good parameters are at the beginning so we need to train this thing and again the reason why these are toy problems right here is I mean we'll get", "start_timestamp": "00:21:26", "end_timestamp": "00:22:03", "start_second": 1286, "end_second": 1323, "url": "https://www.youtube.com/watch?v=Cs_j-oNwGgg&t=1286s", "title": "Concept Learning with Energy-Based Models (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/Cs_j-oNwGgg/maxresdefault.jpg"} {"video_id": "Cs_j-oNwGgg", "text": "to why it's computational but this is kind of a new field I believe in machine learning at least I come from classical machine learning and we only ever have used like SGD to train and we have only ever produced models that one shot produce something and here this is I believe a new concept where you use gradient descent as part of the output and that makes a lot of trouble and so that's why we work in toy problems so this here is the situation I described you have a demo event where you're given the X and", "start_timestamp": "00:22:03", "end_timestamp": "00:22:44", "start_second": 1323, "end_second": 1364, "url": "https://www.youtube.com/watch?v=Cs_j-oNwGgg&t=1323s", "title": "Concept Learning with Energy-Based Models (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/Cs_j-oNwGgg/maxresdefault.jpg"} {"video_id": "Cs_j-oNwGgg", "text": "the a and you're supposed to infer the W so the question here is what's the W and the model will come up with a W and you're not gonna do anything right now you're simply gonna take that W and tell it oh well here is a so called test event so please apply the W you came up with in this test event and please find me the a in this case that satisfies the W and the X I give you here and of course the a right here is as you can see even you don't know that it's a square and the actual concept here is move the grey 
ball to", "start_timestamp": "00:22:44", "end_timestamp": "00:23:25", "start_second": 1364, "end_second": 1405, "url": "https://www.youtube.com/watch?v=Cs_j-oNwGgg&t=1364s", "title": "Concept Learning with Energy-Based Models (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/Cs_j-oNwGgg/maxresdefault.jpg"} {"video_id": "Cs_j-oNwGgg", "text": "the middle of the square right that is it here but no one has told me this I just looked at the picture so the correct answer here would be to place attention on those four things and then to take this thing and move it to the middle right here in this over here so that would be the correct answer now the question is how do you train something like this and they show that so this is the loss function right here the loss function is they give you a concept and an initial situation and you're supposed to infer", "start_timestamp": "00:23:25", "end_timestamp": "00:24:07", "start_second": 1405, "end_second": 1447, "url": "https://www.youtube.com/watch?v=Cs_j-oNwGgg&t=1405s", "title": "Concept Learning with Energy-Based Models (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/Cs_j-oNwGgg/maxresdefault.jpg"} {"video_id": "Cs_j-oNwGgg", "text": "the x1 and the a and the loss function is simply the negative log likelihood of that but what does that mean so we'll make it easier if you have this procedure right here where you have a demo event this up here this is demo and this is a test event how are you going through this entire procedure how are you going to learn the energy function well in this case this entire procedure this entire thing is one training sample but usually we have input and label and now here it's much more complicated because so we have input", "start_timestamp": "00:24:07", "end_timestamp": "00:24:58", "start_second": 1447, "end_second": 1498, "url": "https://www.youtube.com/watch?v=Cs_j-oNwGgg&t=1447s", "title": "Concept Learning with Energy-Based Models (Paper 
Explained)", "thumbnail": "https://i.ytimg.com/vi/Cs_j-oNwGgg/maxresdefault.jpg"} {"video_id": "Cs_j-oNwGgg", "text": "okay that's this X and this a cool but then we have SGD as an integral part of the procedure to determine the W and now what we could do is just apply a loss to the W but we don't because we don't know what the embedding space for the concepts is we could maybe train a classifier but in this case we want to train the ability to transfer these concepts so our training sample needs to be one time transferring a concept so SGD for one is part of our process here and not only that but then this X here of course is also part of our", "start_timestamp": "00:24:58", "end_timestamp": "00:25:38", "start_second": 1498, "end_second": 1538, "url": "https://www.youtube.com/watch?v=Cs_j-oNwGgg&t=1498s", "title": "Concept Learning with Energy-Based Models (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/Cs_j-oNwGgg/maxresdefault.jpg"} {"video_id": "Cs_j-oNwGgg", "text": "training sample we write this up here as X 0 and this here is X 1 and now we need to find this a this attention mask and that is an SGD again remember inferring anything through the energy function is a gradient descent process so ultimately our one training example consists of X 0 a at the beginning so let's call that a zero it consists of the SGD procedure to find W it consists of X 1 and it consists of the SGD procedure to find the a 1 the output a and then that will give us the output the a 1 so this here is our input in the classical", "start_timestamp": "00:25:38", "end_timestamp": "00:26:29", "start_second": 1538, "end_second": 1589, "url": "https://www.youtube.com/watch?v=Cs_j-oNwGgg&t=1538s", "title": "Concept Learning with Energy-Based Models (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/Cs_j-oNwGgg/maxresdefault.jpg"} {"video_id": "Cs_j-oNwGgg", "text": "machine learning and this would be our X and this here would be our label Y and that's what we trained on we trained so 
it such that the output right here this is of course sorry this is of course the Y hat this is what we predict and in the training sample we just write a little generator that will you know make this situation that knows what the concept is right it will say okay I'm gonna make an example for a square then it will make the attention mask for a square and then it will make the new situation again with a square", "start_timestamp": "00:26:29", "end_timestamp": "00:27:04", "start_second": 1589, "end_second": 1624, "url": "https://www.youtube.com/watch?v=Cs_j-oNwGgg&t=1589s", "title": "Concept Learning with Energy-Based Models (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/Cs_j-oNwGgg/maxresdefault.jpg"} {"video_id": "Cs_j-oNwGgg", "text": "but not tell us the attention mask there and it will make the attention mask into the true Y so at the end we can compare what our model output the attention mask we output here without ever knowing that it should be a square and we have the true label which comes out of the generator that at the beginning decided that it should be a square and then the loss is the distance between those two that's our loss this is an enormous procedure to get a loss and most crucially you have to back propagate through optimization", "start_timestamp": "00:27:04", "end_timestamp": "00:27:51", "start_second": 1624, "end_second": 1671, "url": "https://www.youtube.com/watch?v=Cs_j-oNwGgg&t=1624s", "title": "Concept Learning with Energy-Based Models (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/Cs_j-oNwGgg/maxresdefault.jpg"} {"video_id": "Cs_j-oNwGgg", "text": "procedures and this is something that we just can't do yet in our models if you take an image a ResNet-50 right right now we do one forward propagation to get a label in this procedure if you had to back propagate through the optimization procedure for each sample you would need to basically back propagate through 50 forward passes of the 
ResNet if your optimization procedure is 50 steps long and that is just not feasible right now so that's why we don't do it but I believe maybe once we find a smart way", "start_timestamp": "00:27:51", "end_timestamp": "00:28:30", "start_second": 1671, "end_second": 1710, "url": "https://www.youtube.com/watch?v=Cs_j-oNwGgg&t=1671s", "title": "Concept Learning with Energy-Based Models (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/Cs_j-oNwGgg/maxresdefault.jpg"} {"video_id": "Cs_j-oNwGgg", "text": "of back propping through optimization procedures a whole lot of these things will become the new wave in machine learning I really am excited - I'm pretty sure it doesn't work yet and this is very fiddly work but I'm excited by the prospect that we can do this so this is the training procedure right you're given X 0 x1 and a and you optimize in order to infer the concept behind it right the generator the label generator of your training data it knows the concept it has a concept in mind when it", "start_timestamp": "00:28:30", "end_timestamp": "00:29:08", "start_second": 1710, "end_second": 1748, "url": "https://www.youtube.com/watch?v=Cs_j-oNwGgg&t=1710s", "title": "Concept Learning with Energy-Based Models (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/Cs_j-oNwGgg/maxresdefault.jpg"} {"video_id": "Cs_j-oNwGgg", "text": "generated this but you're not telling your model what the concept is it needs to infer that and then using the thing that the model inferred you can either give it x0 and x1 and infer a or you can give it the X and the a and infer X you can do either of those right these are called identification or generation respectively and then you compare the output here to what the generator at the beginning thought again it's not telling you that's because that's the label and you compare this to that and that will be your loss to train", "start_timestamp": "00:29:08", "end_timestamp": "00:29:46", 
"start_second": 1748, "end_second": 1786, "url": "https://www.youtube.com/watch?v=Cs_j-oNwGgg&t=1748s", "title": "Concept Learning with Energy-Based Models (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/Cs_j-oNwGgg/maxresdefault.jpg"} {"video_id": "Cs_j-oNwGgg", "text": "your energy function parameters so your training sample if you think of this entire thing as one forward pass of the model then it's just classic machine learning right you have a training sample which is one forward pass and you have a corresponding label that you infer so let's jump to the experiments right here experiments are actually pretty cool so what they've done is for example have taken the concept of being far apart from something now being far apart so that the little X needs to be as far away as possible from the ball", "start_timestamp": "00:29:46", "end_timestamp": "00:30:27", "start_second": 1786, "end_second": 1827, "url": "https://www.youtube.com/watch?v=Cs_j-oNwGgg&t=1786s", "title": "Concept Learning with Energy-Based Models (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/Cs_j-oNwGgg/maxresdefault.jpg"} {"video_id": "Cs_j-oNwGgg", "text": "that has the attention on it so if you do generation and you start the little X right here and you ask the model please infer the next state of the world it will push that little X away right here and in color you can see the energy function values at the position of the X so it pushes it away from this thing but if you take the same concept embedding the concept embedding of being far away but you don't do generation you do identification which means you infer the a then it will simply tell you that this ball right here is the furthest", "start_timestamp": "00:30:27", "end_timestamp": "00:31:11", "start_second": 1827, "end_second": 1871, "url": "https://www.youtube.com/watch?v=Cs_j-oNwGgg&t=1827s", "title": "Concept Learning with Energy-Based Models (Paper Explained)", "thumbnail": 
"https://i.ytimg.com/vi/Cs_j-oNwGgg/maxresdefault.jpg"} {"video_id": "Cs_j-oNwGgg", "text": "away from the X right so you can do all sorts of things like this and transferring concepts I find this here pretty interesting so they have two different concepts one concept is red as an identification you need to identify the red ball but the other concept is you need to turn something red right you need to take a ball that is maybe now blue and of course the color you can gradient descent on the colors you'd need to make it red and since the energy function just takes three inputs X a and W you're not going to tell", "start_timestamp": "00:31:11", "end_timestamp": "00:31:53", "start_second": 1871, "end_second": 1913, "url": "https://www.youtube.com/watch?v=Cs_j-oNwGgg&t=1871s", "title": "Concept Learning with Energy-Based Models (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/Cs_j-oNwGgg/maxresdefault.jpg"} {"video_id": "Cs_j-oNwGgg", "text": "it right now in which situation you are it has to create this W embedding space through learning and if you do it with those two concepts then it will put the make something red concept and the is something red concept in the same places so this is a PCA and in blue I think are the attention codes for identify the red things and in red are the generation codes for make something red and they will be put in the same place which is pretty cool it means that the energy function really learns the feature of something being", "start_timestamp": "00:31:53", "end_timestamp": "00:32:34", "start_second": 1913, "end_second": 1954, "url": "https://www.youtube.com/watch?v=Cs_j-oNwGgg&t=1913s", "title": "Concept Learning with Energy-Based Models (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/Cs_j-oNwGgg/maxresdefault.jpg"} {"video_id": "Cs_j-oNwGgg", "text": "red I find this pretty neat and then here they have some experiments where they basically show we need that 
gradient descent optimization procedure because only after many steps will the energy function basically be aligned with the concept that you want so if you have a zero shot model like just one forward pass as we do here you'll see that the energy function that is supposed to make a circle from samples right this is the example concept right here if you just have a one shot model it cannot but in this case", "start_timestamp": "00:32:34", "end_timestamp": "00:33:17", "start_second": 1954, "end_second": 1997, "url": "https://www.youtube.com/watch?v=Cs_j-oNwGgg&t=1954s", "title": "Concept Learning with Energy-Based Models (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/Cs_j-oNwGgg/maxresdefault.jpg"} {"video_id": "Cs_j-oNwGgg", "text": "at least it doesn't learn to one shot produce it only if you optimize for a few steps will it get this so you optimize at inference time and that seems to be very important you can see again here demonstrations of this so the example is this and then the model as you can see after 20 steps optimizes the points to go to these locations whereas after only one step it didn't do that yet so there are complex things at work here and this column here is where you don't have a relational neural network so you can't basically", "start_timestamp": "00:33:17", "end_timestamp": "00:33:55", "start_second": 1997, "end_second": 2035, "url": "https://www.youtube.com/watch?v=Cs_j-oNwGgg&t=1997s", "title": "Concept Learning with Energy-Based Models (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/Cs_j-oNwGgg/maxresdefault.jpg"} {"video_id": "Cs_j-oNwGgg", "text": "capture dependencies between things so you have no chance of making a square because you don't know where the things are in relation to each other but that's more of an engineering question their point is basically that if you have models that do an optimization at inference time they are much more powerful than models that just do a one-shot forward 
pass it's sort of like an auto regressive model in NLP versus a non auto regressive model that produces all words at once if you produce all words of a sentence at once no word can", "start_timestamp": "00:33:55", "end_timestamp": "00:34:31", "start_second": 2035, "end_second": 2071, "url": "https://www.youtube.com/watch?v=Cs_j-oNwGgg&t=2035s", "title": "Concept Learning with Energy-Based Models (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/Cs_j-oNwGgg/maxresdefault.jpg"} {"video_id": "Cs_j-oNwGgg", "text": "depend on any other word and you just produce independent things which will often make the sentence not make any sense they also have this KL objective which is a regularizer which I believe they just built in by trial and error but it is a regularizer I don't want to really go into that and then they do a demonstration and they re-enact it on a robot the demonstration here is that there is a situation where two things have attention on them and you're supposed to move", "start_timestamp": "00:34:31", "end_timestamp": "00:35:09", "start_second": 2071, "end_second": 2109, "url": "https://www.youtube.com/watch?v=Cs_j-oNwGgg&t=2071s", "title": "Concept Learning with Energy-Based Models (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/Cs_j-oNwGgg/maxresdefault.jpg"} {"video_id": "Cs_j-oNwGgg", "text": "something into the middle of the two things so that's the concept you don't tell the robot the concept it needs to learn that from data and then infer that this is the concept that you want and then transfer that to the other environment now you know there's this robot environment but ultimately they still encode the positions of these things and the position of that and really all you have to do differently here is that instead of moving this actuator directly you need to calculate what you need to do to the", "start_timestamp": "00:35:09", "end_timestamp": 
"00:35:47", "start_second": 2109, "end_second": 2147, "url": "https://www.youtube.com/watch?v=Cs_j-oNwGgg&t=2109s", "title": "Concept Learning with Energy-Based Models (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/Cs_j-oNwGgg/maxresdefault.jpg"} {"video_id": "Cs_j-oNwGgg", "text": "individual joints in the robot so I think this is maybe because it's OpenAI and it needs to you know look robot-y and stuff but the problem here is not really different it's not even real-world transfer or anything so yeah let's go through some of the things they can learn with this so you can see here they can learn these regional geometric shapes and so on the left is the example event that the model needs to take the concept from now this is I believe very much identification so what", "start_timestamp": "00:35:47", "end_timestamp": "00:36:23", "start_second": 2147, "end_second": 2183, "url": "https://www.youtube.com/watch?v=Cs_j-oNwGgg&t=2147s", "title": "Concept Learning with Energy-Based Models (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/Cs_j-oNwGgg/maxresdefault.jpg"} {"video_id": "Cs_j-oNwGgg", "text": "they did is they trained with a data set where all of these appear right so there are squares there are lines there are circles so this is maybe my criticism here that it is not so much to generally infer a concept it is more like identify the concept so the model basically just needs to decide is this a line is this a circle or is this a square because those things were in the training data set it would be nice to see how this generalizes to general concepts or if we can even make that if we can have a zero shot concept", "start_timestamp": "00:36:23", "end_timestamp": "00:37:00", "start_second": 2183, "end_second": 2220, "url": "https://www.youtube.com/watch?v=Cs_j-oNwGgg&t=2183s", "title": "Concept Learning with Energy-Based Models (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/Cs_j-oNwGgg/maxresdefault.jpg"} 
{"video_id": "Cs_j-oNwGgg", "text": "inference and then transfer those concepts to other things maybe that's already happening I don't know so here the spatial arrangement is to either be close to something or to be between two things so if the attention is on two things you want it in between so you see the top ones are the demonstrations it needs to recognize the concept and it needs to basically optimize to fulfill that concept shapes so to make shapes oh yeah there's a triangle right again this I believe just very much", "start_timestamp": "00:37:00", "end_timestamp": "00:37:46", "start_second": 2220, "end_second": 2266, "url": "https://www.youtube.com/watch?v=Cs_j-oNwGgg&t=2220s", "title": "Concept Learning with Energy-Based Models (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/Cs_j-oNwGgg/maxresdefault.jpg"} {"video_id": "Cs_j-oNwGgg", "text": "relies on recognition and not actual understanding of what a triangle is here you have proximity being closer being far apart what else is cool oh yeah here the recognition for the same task right you need to identify the ball that is closer and here you really also see the optimization procedure in action where for example at the beginning of each flicker you kind of see the attention being everywhere and then stabilizing to one or two points so if two points are equally close or far apart you'll see the attention being on", "start_timestamp": "00:37:46", "end_timestamp": "00:38:22", "start_second": 2266, "end_second": 2302, "url": "https://www.youtube.com/watch?v=Cs_j-oNwGgg&t=2266s", "title": "Concept Learning with Energy-Based Models (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/Cs_j-oNwGgg/maxresdefault.jpg"} {"video_id": "Cs_j-oNwGgg", "text": "multiple points which is pretty cool right so that means the model really learns this concept here's the count quantity so you can either have one two or larger than three or something yeah that seems like they tried three 
and four and didn't work so they just said we'll just do larger than three and here is this robot thing where it also always needs to move in between now this is the part that I'm not really impressed with but you know whatever you want okay I hope this was a good introduction to", "start_timestamp": "00:38:22", "end_timestamp": "00:39:01", "start_second": 2302, "end_second": 2341, "url": "https://www.youtube.com/watch?v=Cs_j-oNwGgg&t=2302s", "title": "Concept Learning with Energy-Based Models (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/Cs_j-oNwGgg/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "hi there people so a lot of you have asked me how I read papers and honestly I don't think there is any super special method to it but you know because people have asked me to make a video on it I'll make a video on it and I'll try to share my method of reading papers and hopefully this is going to be somewhat of a miniseries or a series where I every now and then discuss how I read one of the papers that I make videos about and I'll try to select them such that different things are highlighted now I've selected this", "start_timestamp": "00:00:00", "end_timestamp": "00:00:38", "start_second": 0, "end_second": 38, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=0s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "one right here really for no particular reason other than I sort of remembered it and I'm going to try to go with you through how I read this and how I encountered this and kind of try to honestly share what I thought the first time when I read it and I hope this helps some of you if it does help you and if you like content like this of course feel free to share this out and subscribe if you haven't seen my original video on this paper it might be worth going to watch it I'll link it and 
with that let's dive in so again this", "start_timestamp": "00:00:38", "end_timestamp": "00:01:21", "start_second": 38, "end_second": 81, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=38s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "might be not really something new but I'll just go through it okay so first thing I do is of course read the title so the title has three parts end to end object detection with transformers so what I notice that I do myself is you know they say read the paper with an open mind I don't do that I almost immediately form an opinion and a hypothesis of what's going on like so I see transformers so I know what transformers are if you don't I've made lots of videos on transformers attention is all you need", "start_timestamp": "00:01:21", "end_timestamp": "00:01:58", "start_second": 81, "end_second": 118, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=81s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "is the base paper for that so I know what a transformer is okay and I know that transformers are usually used in NLP though there are things like you know other things with transformers but usually it's an NLP model then I read object detection and I know object detection is a computer vision task so immediately this here is sort of a difference and I immediately try to assess what's the new idea in this paper and in this case it might be okay it might be applying transformers to object detection but", "start_timestamp": "00:01:58", "end_timestamp": "00:02:35", "start_second": 118, "end_second": 155, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=118s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", 
"thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "then I also see end to end and the only reason to put that in a title is because that's the novelty because usually in deep learning we're sort of used to systems being end to end and even if they aren't if most systems aren't end-to-end a lot of people say like end to end image classification on ImageNet like thanks so I was guessing that the reason they put end-to-end into the title was because that's actually something that's special about the model so now I have like two competing hypotheses of why this paper matters", "start_timestamp": "00:02:35", "end_timestamp": "00:03:10", "start_second": 155, "end_second": 190, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=155s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "first of all because it does it with transformers and second because it does it end to end and of course the truth is that the combination of end to end transformers all of that is what makes this model and I already form like a hypothesis of whether I like this or not I have to be honest I have very quick judgment of papers of whether I like them or not and then I sort of catch myself each time and I still try to so there are for most papers actually that I have sort of a negative opinion at the beginning where I will-", "start_timestamp": "00:03:10", "end_timestamp": "00:03:48", "start_second": 190, "end_second": 228, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=190s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "
throughout the paper so for most papers that I read I'm trying to find the positive things in there but I do form an opinion pretty quickly usually alright so the second thing this part right here I don't even see it's like advertisements on like Twitter you just skip over them I have always had issues with author names like people would come to me and be like oh", "start_timestamp": "00:03:48", "end_timestamp": "00:04:27", "start_second": 228, "end_second": 267, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=228s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "have you seen the new Vinyals paper and I have no clue and then when they say like oh that's where they use this character level model to do that and I'm like oh that paper so I do not care who the authors are of a paper like I can't remember papers by their author names I've gotten better at it I have to say but I've always had trouble with this now that's not to say that a name doesn't pop out to me like if this would be like Yoshua Bengio or someone really famous then of course that would catch", "start_timestamp": "00:04:27", "end_timestamp": "00:05:02", "start_second": 267, "end_second": 302, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=267s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "my eye but I also know that you know Yoshua Bengio's lab is huge so just because a big name is on the paper doesn't mean that the paper is going to be of any good or bad quality sometimes the authors give you an indication of what kind of research is there like if you see a Jeff Clune or Kenneth O. Stanley you know that there's going to be a certain type of you
know learning to explore and kind of a bit more out-of-the-box thinking in their papers which I really", "start_timestamp": "00:05:02", "end_timestamp": "00:05:39", "start_second": 302, "end_second": 339, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=302s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "like but it doesn't immediately give you a clue maybe if you go by first authors it's much more indicative if you have already read some of their papers but most often I just ignore authors and go on the affiliation sometimes matters in that it's a bit of a vicious cycle if there's a big name affiliation like Facebook AI Google AI and so on these papers also get more exposure in like the press and so on so whenever Google AI publishes a paper all these pop-sci magazines like the Verge and Lifehacker and Hacker News", "start_timestamp": "00:05:39", "end_timestamp": "00:06:20", "start_second": 339, "end_second": 380, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=339s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "and whatnot they like write a blurb about it so these papers get much more public attention but they also get much more scrutiny which in turn means that there is a bit more pressure on them to do good experiments so that biases me like a little bit into the direction of believing their experimental evidence more now usually this is also backed up by the fact that I am actually convinced by their experiments so these big-name papers often I find", "start_timestamp": "00:06:20", "end_timestamp": "00:07:01", "start_second": 380, "end_second": 421, "url":
"https://www.youtube.com/watch?v=Uumd2zOOz60&t=380s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "myself even without or disregarding the affiliation to be convinced more than of like regular papers my most often issue with papers is that I don't believe the experiments and I make no difference like even if it's Facebook I still my prior is the experiments or crap and I don't believe them and they have to convince me of the opposite but some like I can't say that it doesn't affect me that it's like a big name affiliation okay so then the second thing is I sometimes I see the paper on archive and I skim the abstract", "start_timestamp": "00:07:01", "end_timestamp": "00:07:41", "start_second": 421, "end_second": 461, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=421s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "sometimes the abstract is informative and sometimes not so here it's like blah blah blah a new method that views object detection as a direct set prediction problem I'm like oh yeah okay so streamlines the detection effectively removing the need for many hand design components like non maximum suppression yada yada yada the main ingredients called detection transformer asset based global loss that forces unique prediction via bipartite matching and the transformer encode or decode or architecture so they make it clear here", "start_timestamp": "00:07:41", "end_timestamp": "00:08:14", "start_second": 461, "end_second": 494, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=461s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "why it matters and that's that's what I what I want 
to get at is sort of what's the new thing in this paper most papers even though they're all very long and have lots of math and so on often have like one or maybe two new core things that they really tell you sometimes zero but a lot of times it's like one thing that they really do and you sort of have to find it they're trying to cloak it often because they need to make their research as impactful as possible right but you need to sort of figure out what it is they're doing", "start_timestamp": "00:08:14", "end_timestamp": "00:08:53", "start_second": 494, "end_second": 533, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=494s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "they make it fairly easy for us in that they say okay they remove the need for many hand-designed components like non-maximum suppression which tells me that they're building something that's easier than what came before them and that already tells me it's not necessarily going to be better their argument is more that it's going to be easier right there are sort of two kinds of experimental results the ones where you try to beat what came before you and the ones where you're trying to say look our thing works just as well as this other", "start_timestamp": "00:08:53", "end_timestamp": "00:09:27", "start_second": 533, "end_second": 567, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=533s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "thing while being more advantageous in some other metric so I would place this already in the sort of second category and then they say what are the actual ingredients it's a set-based global loss that forces unique predictions via bipartite matching now at this point I know what these
terms mean but at that point I actually didn't have to know what the terms mean what I need to recognize is that I simply have to go later and figure out what that is and a transformer-based encoder/decoder architecture okay so there are two", "start_timestamp": "00:09:27", "end_timestamp": "00:10:04", "start_second": 567, "end_second": 604, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=567s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "things right here that I remember I need to pay attention to later there is this loss which seems to be special and there is the transformer architecture which they say okay that's the model it basically consists of those two things and then they have a short description of what it does given a fixed small set of learned object queries DETR reasons about the relations of the objects and the global image context to directly output the final set of predictions in parallel that almost tells me nothing I'm like yeah okay the", "start_timestamp": "00:10:04", "end_timestamp": "00:10:38", "start_second": 604, "end_second": 638, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=604s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "model reasons maybe this in parallel is something but the model is conceptually simple and does not require a specialized library unlike many other modern detectors this sort of repeats and enforces my hypothesis that they're going with the hey this is a much easier way of doing things approach DETR demonstrates accuracy and runtime performance on par with well-established baselines which further confirms my hypothesis that this is on par right runtime performance on par with the current state of the art and at the end they say moreover DETR",
"start_timestamp": "00:10:38", "end_timestamp": "00:11:15", "start_second": 638, "end_second": 675, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=638s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "can easily be generalized to proceed to produce Panoptix segmentation in a unified manner we show that it significantly outperforms competitive baselines training code and praetor models are available as part when I first read it is like ok can easily be generalized to produce this Panoptix segmentation this is I didn't know yet whether this is like a central claim of their paper that it can do this segmentation or whether this is like an added benefit to their paper because you can read it in both ways and I'm just", "start_timestamp": "00:11:15", "end_timestamp": "00:11:48", "start_second": 675, "end_second": 708, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=675s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "ready to find this out in the paper now after I've read the abstract and sort of already form the hypothesis of what's going on so here I already in my mind I already sort of have a model of how would I do that right how would I how would I do that and then what would I do so right now what I might be thinking is if I have a transformer over images that directly outputs the the predictions in parallel I'm imagining like an image and the image somehow needs to go into a transformer so maybe there's like an encoder like a CNN encoder that gives me", "start_timestamp": "00:11:48", "end_timestamp": "00:12:30", "start_second": 708, "end_second": 750, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=708s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": 
"https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "image features and then it's it's so maybe you sample this down this image this is just me hypothesizing what could be going on right and then I might be unrolling that right this image into a vector of these lower pixels and then so in my mind what I would do right here if without knowing anything more would be to do something like Bert's pan predictions so I would have Bert right here and I so for I would input this sequence right here and then to detect an object I would sort of think that maybe the Bert you know Bert has an", "start_timestamp": "00:12:30", "end_timestamp": "00:13:12", "start_second": 750, "end_second": 792, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=750s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "output that is the same length as the input right so it's it's very good at sequence tagging and things like this so maybe how it detects an object is going to be that it sort of like tags the tags the center location in the pixel of an object right here or a tag somehow the corners of the of the bounding box but then I don't know how this is going to be in parallel maybe Bert outputs like a score for each location and then you do some kind of matching right here so this is my initial hypothesis of what's going", "start_timestamp": "00:13:12", "end_timestamp": "00:13:47", "start_second": 792, "end_second": 827, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=792s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "on and then I scroll through and honestly the first thing I do is I go and find the pictures and know no different in all like since since you first book you read that's what you do I go 
and find the pictures because usually if someone proposes anything new they're gonna try to make a picture of it luckily I don't do like super theoretical whatnot Bayesian generalization bounds and I don't know so most often papers I read have some sort of picture and that's very helpful to me I know I know but yeah so I find", "start_timestamp": "00:13:47", "end_timestamp": "00:14:26", "start_second": 827, "end_second": 866, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=827s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "this picture and here I see okay you have an image you have a CNN okay that gives you a set of image features okay so far so good then transformer encoder-decoder then set of box predictions so all of them come out here and I already read they're in parallel and then bipartite matching loss so here I can see they color these in different ways and these colors appear to match with these colors right here right in the green here and they also this is a very good graphic but from this I can already read that these here go to the no object a", "start_timestamp": "00:14:26", "end_timestamp": "00:15:02", "start_second": 866, "end_second": 902, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=866s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "lot of times the graphics aren't very good so this is where I'm not saying in every paper you can learn by looking at the graphics like sometimes the graphics are terrible and you're like what's going on here this makes no sense this happens a lot in this paper right here these happen to be very very good explanatory graphics so I'll take advantage of that and I do the same thing in the other papers right but then
later when it doesn't match what I read in the text I'll have to you know update my belief", "start_timestamp": "00:15:02", "end_timestamp": "00:15:37", "start_second": 902, "end_second": 937, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=902s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "and so on but here I see that these go to no object and this goes to no object so I don't know yet what this is at the point where I read this I was sort of confused by this but I recognized that each of these boxes right here is going to be either resulting in a bounding box or in the no object prediction so from that I could conclude that these things here are maybe some sort of a fixed set right but I still thought that you know this would actually be the output of these image features so that in this", "start_timestamp": "00:15:37", "end_timestamp": "00:16:18", "start_second": 937, "end_second": 978, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=937s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "case you'd have like six sets of image features and then you'd have like BERT here even though that's not an encoder-decoder this was still my running hypothesis that somehow you'd map these image features to these boxes right here and I didn't know what to make of this thing right here so then I went through some more and looked for more pictures and there aren't many sometimes I also kind of glance at the formulas but okay whenever I see this this is just I mean this is kind of useless like okay cool you", "start_timestamp": "00:16:18", "end_timestamp": "00:16:55", "start_second": 978, "end_second": 1015, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=978s",
"title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "minimize the loss thanks this okay didn't really pay attention to that ah new picture cool so this picture is much more informative than the other picture yeah I believe with the other picture they were trying to show case this loss how they do the matching and even though I could read a lot from that picture I did not get that part and that therefore I felt when I saw this and I just glanced at it I'm like wait what what's different then up here it seems like the same but okay let's look at this so again we see okay you have set", "start_timestamp": "00:16:55", "end_timestamp": "00:17:32", "start_second": 1015, "end_second": 1052, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=1015s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "of image features that comes out of the CNN so that conforms with my belief but then this here goes into a transformer encoder and this comes out so immediately I see oh this is not the same as these boxes here right that was my hypothesis that these things here would be the colored boxes so I I say okay obviously that's not what happens this thing here seems to be sort of the encoded image information then that's somehow fed into here and that then there are these object query things and they seem to correspond to this so I'm a", "start_timestamp": "00:17:32", "end_timestamp": "00:18:19", "start_second": 1052, "end_second": 1099, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=1052s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "bit more confused right now what I can see is that these then will result in these 
boxes okay so being confused by that I look for more pictures so I go look for more pictures and this here seems to be like a visualization a lot of these papers have some sort of ablation experiments and so on this I just find a really cool picture for now I don't know yet what it means this one I don't know yet what it means either and I go down skip over this and then back here in the appendix I find this here which I immediately map to the", "start_timestamp": "00:18:19", "end_timestamp": "00:19:01", "start_second": 1099, "end_second": 1141, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=1099s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "previous where this is the encoder and this is the decoder and I've already read the attention is all you need paper and at that point it clicked and I'm like this is not a BERT transformer this is one of these transformers that has an encoder and a decoder even though they told me like 50 billion times already I was too stupid until this point so now I know okay I see what's going on so the image goes through here and then this goes as a side input like as an attention from the decoder to the encoder like I know in NLP right so in", "start_timestamp": "00:19:01", "end_timestamp": "00:19:32", "start_second": 1141, "end_second": 1172, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=1141s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "NLP this here would be a source sequence like maybe if you do translation and this here would be a target sequence so now whenever I see a transformer like this and it outputs something I look at it as okay this here is sort of the input that goes as like a side input over here and usually here you have the target
sequence but that's not the case right here right you have these object queries so this is how far I get from the pictures now I go up so I have sort of questions now and that's when I start", "start_timestamp": "00:19:32", "end_timestamp": "00:20:16", "start_second": 1172, "end_second": 1216, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=1172s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "reading the paper only now do I start reading the paper after I've looked through all the images formed the hypothesis and sort of have questions on how this works and we'll go a bit faster from now on to just not bore you with all the things so the introduction is often very important even though it's called introduction and maybe you know if you read a book like if there's an introduction or a prologue or something like this it's often kinda pointless the introduction in these research papers is one of the most important parts because", "start_timestamp": "00:20:16", "end_timestamp": "00:20:50", "start_second": 1216, "end_second": 1250, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=1216s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "basically all of these papers try to convince a reviewer to accept them and in order to do that they will set up their main points and their main story immediately in the introduction so what you'll usually have is a problem statement which is here like what's wrong right now and then you have like a story of how their paper addresses the issue okay and that's here we streamline the training pipeline by viewing object prediction yada yada yada this often formulates in words what", "start_timestamp": "00:20:50",
"end_timestamp": "00:21:31", "start_second": 1250, "end_second": 1291, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=1250s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "the paper is about and what contribution the paper makes right this is like a this is like a longer abstract the abstract is often very very cryptic very dense this here is often much more informative of what the paper does so for understanding the paper and a high level the introduction is the best place but given that I've already looked at the images and so on I don't actually draw many new much new information from this thing then is related work and honestly I I skip it like unless I'm the actual reviewer of a paper like when I'm", "start_timestamp": "00:21:31", "end_timestamp": "00:22:11", "start_second": 1291, "end_second": 1331, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=1291s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "the reviewer of a paper I read the related work but often related work is just like you first of all you site a bunch of your friends and then you cite the mandatory papers and then you cite every single person that you think could be a reviewer because or you've actually been rejected from a conference with a reviewer claiming that your you haven't compared or you haven't cited data or that paper you can pretty much be sure that that's the if if it's not a glaring of may omission if it's like a niche paper and you haven't cited it then", "start_timestamp": "00:22:11", "end_timestamp": "00:22:43", "start_second": 1331, "end_second": 1363, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=1331s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": 
"https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "you're like okay I'm gonna cite it just because the next conference you could be Mary viewer again so I'm not I'm not sure that these related work sections they're necessary like if someone wants to write their theses and they go and read this paper and they want references oftentimes this is a good place but a lot of it is just blah blah blah blah blah okay I know disagree with me if you want oh yeah - maybe - reading quality so I tend to at this point I tend to not skim so at first I skim but at this point I tend", "start_timestamp": "00:22:43", "end_timestamp": "00:23:22", "start_second": 1363, "end_second": 1402, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=1363s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "to read every sentence and read it closely and understand it and when I realize like I'm tired or something I don't just skim the paper I've tried to skim papers and it doesn't doesn't work try to read every sentence understand every sentence and okay if you don't understand it don't stop reading because of that but try to not skim and be like oh yeah yeah yeah okay I gotta go to go to get away that is not helpful except related work skip completely cool then a lot of times in this paper now is the the model and this is the section I'm", "start_timestamp": "00:23:22", "end_timestamp": "00:24:02", "start_second": 1402, "end_second": 1442, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=1402s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "actually interested in right so I read very very closely here and then I find out what their their loss is all about and again I stress read these things and understand 
them right sometimes it's hard but if you're confused that means either they've done a bad job or they made a mistake or you haven't understood something if you can't understand the sentence try to read on maybe it's clarified later and then you know go back but again do not like just start skimming a lot of times when I read papers previously I wouldn't", "start_timestamp": "00:24:02", "end_timestamp": "00:24:46", "start_second": 1442, "end_second": 1486, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=1442s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "understand something quite well yet and then I would be like oh yeah yeah yeah and then I noticed that I start skipping and skimming more and more because that would you know pop up again and again and I wouldn't understand it again and again and then at the end I would just be kind of glancing at the paper and I don't want to do that right here so I want to read every sentence and understand it okay so here then I find out about the loss and if I don't know something here then I'll go and look it up maybe on Wikipedia or", "start_timestamp": "00:24:46", "end_timestamp": "00:25:21", "start_second": 1486, "end_second": 1521, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=1486s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "something like this now I don't need to understand every single part of it right maybe I should correct myself so for example this bounding box loss here they talk about the second part of the matching cost and the Hungarian loss is this box loss that scores bounding boxes unlike many detectors that do box predictions as a delta with respect to some initial guesses yada yada they say the most commonly used l1 loss will
have different scales for small so here they basically talk about how they mix the losses they say overall our box loss", "start_timestamp": "00:25:21", "end_timestamp": "00:25:53", "start_second": 1521, "end_second": 1553, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=1521s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "is then defined as this and this now I don't know what these losses are I just assume there are some bounding box losses so it's not quite true when I say understand everything understand the things that are integral to the story of the paper right how exactly they compute bounding box losses at this point I don't care I just assume that there is some loss that I can backpropagate right what is important is that they do this Hungarian matching thing right as soon as I get that I'm like ah that was this you know this um", "start_timestamp": "00:25:53", "end_timestamp": "00:26:31", "start_second": 1553, "end_second": 1591, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=1553s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "this thing no this thing up here this thing with the matching now I get it now I know uh there are always the same amount of boxes here there are always the same amount of labels here and all we need to do is somehow match them and I immediately think why is that relevant oh because when something is already matched to an object some other thing cannot be matched to the same object and that's how we you know prevent the fact that all the things predict the same thing right and so that immediately becomes clear and as I said there is", "start_timestamp": "00:26:31", "end_timestamp": "00:27:09", "start_second": 1591, "end_second": 1629, "url":
"https://www.youtube.com/watch?v=Uumd2zOOz60&t=1591s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "usually like one or two ideas in a paper I don't assume or I don't care what their exact loss function is because I've sort of gotten the idea up here of what the loss is about alright so I hope that's clear under very closely read the things and understand the things that are necessary for the story if you find if you think something's not necessary for the story and then later end up not understanding that maybe come back and you know read it again in any case I would I would rather I would rather skip something and assume it's not necessary", "start_timestamp": "00:27:09", "end_timestamp": "00:27:48", "start_second": 1629, "end_second": 1668, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=1629s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "if I think so and then come back then trying to understand every everything but the things I do read I try to understand thoroughly okay then there's the architecture okay and that again I read closely and get backbone ok transformer encoder ok and now I understand much more closely a decoder ok and here I get now finally I get what this is about because n objects in parallel yada yada yada these input embeddings are learned positional encodings that we refer to as object queries and similarly to the encode we", "start_timestamp": "00:27:48", "end_timestamp": "00:28:31", "start_second": 1668, "end_second": 1711, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=1668s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "add them to the input at each 
attention layer so now they're named I've already seen these object queries here and the only word I actually need from this sentence is learnt the fact that they're positional encodings I just kind of ignore as soon as they say learnt I know aha these things here are learned they're actually always the same for each of the images they're just learned overall okay so now I feel I understand the entire model and yeah so then they say auxiliary decoding losses and this sometimes you have to pay", "start_timestamp": "00:28:31", "end_timestamp": "00:29:10", "start_second": 1711, "end_second": 1750, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=1711s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "attention to things like auxiliaries because those are the things that here they say explicitly we found helpful to use auxiliary losses sometimes they won't say why they did it they will just say our loss consists of three things and you know if you look at the three things only one of the things is really a part of their story so far and then you should immediately conclude that they've put in the other things because they tried it and it didn't work right so you can also kind of get an estimate of the brittleness and so on of the", "start_timestamp": "00:29:10", "end_timestamp": "00:29:47", "start_second": 1750, "end_second": 1787, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=1750s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "system in that you see how many unnecessary things are there or how many things are not straightforward how many things aren't the easiest thing that you would do when you would go about and do what they did okay so then this concludes this model or method usually this
section is called like method or model or something like this and you go to experiments now the main question I have so far or maybe I have some more questions about the model itself that I haven't been able to pick up from this section which is not the", "start_timestamp": "00:29:47", "end_timestamp": "00:30:23", "start_second": 1787, "end_second": 1823, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=1787s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "case here but I simply keep those questions in mind and see whether they are resolved later right so I keep an awareness of what I don't understand but from here on my main issue is are they demonstrating that their story works right so here they're proposing a loss and a model and in my mind they now need to convince me that that works and it's not as easy as simply showing me some numbers saying they are good at some benchmark they need to show me that they get those numbers because of what they claim so", "start_timestamp": "00:30:23", "end_timestamp": "00:31:09", "start_second": 1823, "end_second": 1869, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=1823s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "here they claim well okay they propose a new architecture so what they need to convince me of is that the architecture itself makes sense right but in other papers when you propose something and say for example in an LSTM when you build in an attention mechanism and you claim you know the attention mechanism can look back at the source sequence in one step then you need to convince me that that actually happens right so you not only need to perform well you
need to convince me", "start_timestamp": "00:31:09", "end_timestamp": "00:31:47", "start_second": 1869, "end_second": 1907, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=1869s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "that you perform well because of what you claim your model does right and that's often difficult and I specifically look out for this in the experiments usually the question is like where are they trying to fool me are they trying to fool me are they trying to cover up the fact that something doesn't work now all the experiments are always in the best light possible of course and you have to keep that in mind but a lot of times you can also already see from the experiments that okay are they doing", "start_timestamp": "00:31:47", "end_timestamp": "00:32:27", "start_second": 1907, "end_second": 1947, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=1907s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "something weird are they not showing me some obvious experiment and a lot of the time the question is is there an easier explanation for why they get the results that they get other than their explanation right and it is their job to convince you that their explanation is the correct one for these numbers and especially if there is an easier one that they haven't excluded then I don't believe the experiments if that's the case right if there is an easier explanation for the effect I'm very skeptical but some papers have", "start_timestamp": "00:32:27", "end_timestamp": "00:33:07", "start_second": 1947, "end_second": 1987, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=1947s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", 
"thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "an easier job here than other papers so in this paper they basically show results on a task and since their paper is about hey our pipeline is just easier than other pipelines what they first of all need to do is they need to like match the numbers of other pipelines and here I see that okay in these results often there's maybe a table or something here where you see like this their model other models and their model is the best model in a lot of cases now the best thing is of course if their model throughout", "start_timestamp": "00:33:07", "end_timestamp": "00:33:44", "start_second": 1987, "end_second": 2024, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=1987s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "is the best the worst thing is if it's like scattered like this even if their model is the best but in every single benchmark a different configuration of their model is the best that's sort of a bad sign unless they can explicitly explain why that is and it's also not that good of a sign if these things are spread out like this like sometimes this baseline is good sometimes their model is better and so on so pay attention to that now in this paper it doesn't matter so much that's actually fine because what they're", "start_timestamp": "00:33:44", "end_timestamp": "00:34:19", "start_second": 2024, "end_second": 2059, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=2024s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "trying to show is that their model is on par and way easier and they've already made the case in what way it is easier it's easier in terms of
architecture if they were to say it's much faster then after that I would expect you know an experiment in speed while these numbers are matched but since they say it's easier I've already seen the architecture I'm convinced of that now that they show okay our numbers match actually I'm surprised they even outperform a lot of times then I'm quite happy with these experiments so also", "start_timestamp": "00:34:19", "end_timestamp": "00:34:55", "start_second": 2059, "end_second": 2095, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=2059s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "look for differences between numbers and the spread of numbers now it's not easy to say whether like 0.1 is a big or a small difference that depends on the task but you know pay attention to these things pay attention to the fact that these results are noisy and oftentimes there is a lot more hyperparameter tuning going into the model of the paper than into the baseline model because you want to make your stuff look as good as possible and here is a little bit where the institutional credibility of someone", "start_timestamp": "00:34:55", "end_timestamp": "00:35:29", "start_second": 2095, "end_second": 2129, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=2095s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "like Facebook comes in in that I tend to believe their results a bit more than other results maybe only a bit more yeah also look at patterns that they don't point out in the text so if there is like a pattern if you see like an interaction between the number of parameters and the score or something like this just try to be on the lookout for that and see if you can spot something that you think or
think about whether that makes sense or not and what your hypothesis would be so here we go on and okay then they go into", "start_timestamp": "00:35:29", "end_timestamp": "00:36:11", "start_second": 2129, "end_second": 2171, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=2129s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "ablations and a lot of these papers do ablations and I generally appreciate that so here they visualize that the attention mechanism in their model actually refers to different instances right encoder self attention for a set of reference points the encoder is able to separate individual instances and you can see that pretty clearly right here and even here with the overlapping cows and this is the sort of experiment that I would expect that actually convinces me that their architecture does what it says", "start_timestamp": "00:36:11", "end_timestamp": "00:36:47", "start_second": 2171, "end_second": 2207, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=2171s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "that it does right and something like this where you see like totally overlapping things with the attention of the individual things visualized so telling me like especially this one right here the foot of the back elephant actually being focused on by the attention of the bounding box of the back elephant that's the sort of experiment that convinces me that their numbers really come from what they claim they come from okay so at the end of the experimental section you should always ask yourself", "start_timestamp": "00:36:47", "end_timestamp": "00:37:24", "start_second": 2207, "end_second": 2244, "url": 
"https://www.youtube.com/watch?v=Uumd2zOOz60&t=2207s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "have they really convinced me that their story is true right that the improvement whenever they get an improvement or whatever they get is due to the story that they want to sell me or could there be an easier explanation or does something not fit like are the experiments different from what you would expect here okay so these are my main questions are they convincing me of their story it's not do they have state of the art numbers I don't care even though like", "start_timestamp": "00:37:24", "end_timestamp": "00:38:05", "start_second": 2244, "end_second": 2285, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=2244s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "sometimes so there is a bit of a catch I don't care about state of the art numbers now let's say you have a table like this and you have a computer vision model and one of the models is like on the CIFAR-10 dataset now if your baseline model has like a ninety-one ninety-two percent accuracy on CIFAR-10 when I know the state of the art is 96 I don't care right I know I've done CIFAR-10 I know with like I don't know a five or six layer CNN you can reach these 91 92 93 % accuracies and to get to the 96 97 you would actually be like in the", "start_timestamp": "00:38:05", "end_timestamp": "00:38:46", "start_second": 2285, "end_second": 2326, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=2285s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "region of a wide 
ResNet and whatnot so you know even though you're a few points behind state of the art I know this is valid still so I don't care but if you were to be like at 80 percent accuracy on CIFAR-10 then I get a bit suspicious like it's pretty easy to get to 90 percent plus with like a standard CNN so there I immediately start to wonder why is there an explanation now this could be like a theoretical paper that says oh we investigate MLPs and that's why we only get that number so that would be", "start_timestamp": "00:38:46", "end_timestamp": "00:39:31", "start_second": 2326, "end_second": 2371, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=2326s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "fine but if something is out of the ordinary like this then I pay attention but never because something isn't like the latest and greatest state of the art that's just dumb ok and also only evaluate what the paper claims it does right if the paper says we want to show that we are on par with current models then don't be mad if the paper doesn't outperform these models they didn't claim that right so yeah after these ablations I'm actually pretty happy right here with the results and this right here when I saw this", "start_timestamp": "00:39:31", "end_timestamp": "00:40:13", "start_second": 2371, "end_second": 2413, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=2371s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "I didn't expect that but I read the experiment description that these are the different learned object queries and what they do and that gave me an increased understanding of how these object queries actually work right so at that point I still had like a
vague idea I knew that these are learned but reading this and sort of looking at it studying it a bit I was like oh okay then I understood even better what they are so again when I say understand everything in the method section you can still have questions but you just have to keep", "start_timestamp": "00:40:13", "end_timestamp": "00:40:50", "start_second": 2413, "end_second": 2450, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=2413s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "it in mind for later and then here I go on and there's this DETR for panoptic segmentation and here they propose like a new model so I first look at it and I'm like okay they proposed a new model they can do stuff like this now this is not object detection and again I'm not sure is this like an add-on to the method or was this up here just an intermediate step to this and honestly after reading that I still wasn't sure it seems like something in between of course the paper is also a bit longer than other papers", "start_timestamp": "00:40:50", "end_timestamp": "00:41:29", "start_second": 2450, "end_second": 2489, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=2450s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "it just seems too long for just being a side note but too short for being its own thing so that was just a bit weird and I treated it as just like a oh we can also do this with our model but I didn't pay like too much attention to that okay so at the end I you know look at conclusions now the conclusions of a paper are often not nearly as informative as the introduction the conclusions often tend to be very generic and kind of hedge a bit against
criticisms saying what would be up for future work", "start_timestamp": "00:41:29", "end_timestamp": "00:42:14", "start_second": 2489, "end_second": 2534, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=2489s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "which is again hedging against criticism because you could simply say well we didn't do this that's future work yes so again I read it but I don't really pay attention to it and then I gloss over the abstract I would just kind of scroll through the abstract if there's something that catches my eye I would look at it and if not then not and then I basically go to the start and whenever I didn't understand something I go back I look at it again and I try to think are all my questions answered and have they sufficiently convinced me that", "start_timestamp": "00:42:14", "end_timestamp": "00:42:54", "start_second": 2534, "end_second": 2574, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=2534s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "their story is the thing that really has the effect right here and then if I now were to make a video of this I've often found it useful to just put the paper away for a while I usually get the best result when I read the paper the day before and then make a video the day after or if not I'll just you know put it away do something else do some email responding programming going outside eating lunch just some kind of a break between the first read or between your first couple of reads and I don't even think about the paper I just it's", "start_timestamp": "00:42:54", "end_timestamp": "00:43:37", "start_second": 2574, "end_second": 2617, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=2574s", "title": "How I 
Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "kind of in the subconscious it kind of brews right and I happen to think about the paper every now and then but I don't make a conscious effort to be like oh how am I gonna explain this and so on but I've just found the worst videos are the ones where I immediately make the video after reading a paper and I've discovered that it helps if I kind of take a break and then I look at it again right I don't read it fully again if I have the feeling I've understood it I don't read it fully", "start_timestamp": "00:43:37", "end_timestamp": "00:44:08", "start_second": 2617, "end_second": 2648, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=2617s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "Uumd2zOOz60", "text": "again but I just kind of look at it and go again through this story and I think even if you want to talk about a paper in a reading group or you know explain it to your friends or whatnot this is often very useful just put it away for a while let it mellow and I find that helps a lot okay that was my process of reading this particular paper now again this is a high quality paper so I find it's a pretty easy read in that I simply need to understand what they did and I'm", "start_timestamp": "00:44:08", "end_timestamp": "00:44:46", "start_second": 2648, "end_second": 2686, "url": "https://www.youtube.com/watch?v=Uumd2zOOz60&t=2648s", "title": "How I Read a Paper: Facebook's DETR (Video Tutorial)", "thumbnail": "https://i.ytimg.com/vi/Uumd2zOOz60/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "you know basically my aspiration is to kind of get to the next step in AI machine learning etc and what we see today
is a huge amount of success in machine learning but the sample efficiency of all of the techniques that we use today is much much worse than everything we observe in humans and animals in other words it takes many more samples or many more trials in the case of reinforcement learning for a machine to learn anything compared to humans and animals so a lot of people are very quick to draw", "start_timestamp": "00:00:00", "end_timestamp": "00:00:45", "start_second": 0, "end_second": 45, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=0s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "conclusions from this but you know humans and animals draw on evolution and innate behavior but I think it's just more efficient learning and another kind of reaction to this is we draw on our background knowledge about the world and that's true the big question I'm asking here is where does that come from how do we acquire all the background knowledge we have about the world that allows us to learn a new task very quickly so all the success that you see in practical machine learning today almost all of it", "start_timestamp": "00:00:45", "end_timestamp": "00:01:19", "start_second": 45, "end_second": 79, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=45s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "is due to supervised learning and we all know what that means right let's say you want to do image recognition you give an image to the machine and if the machine doesn't give you the right answer you tell it what the right answer is and you adjust its internal parameters using stochastic gradient descent or something like that a gradient based method to get the output closer to the one you want the
amount of information you give to the machine at every trial is relatively small even in the case of something like", "start_timestamp": "00:01:19", "end_timestamp": "00:01:47", "start_second": 79, "end_second": 107, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=79s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "ImageNet which has 1,000 categories you tell it the correct category and that's less than 10 bits of information so you're asking the machine to predict a very small amount of information every time as a result you need a lot of samples to learn anything reinforcement learning is even worse reinforcement learning is a situation where you don't tell the machine the correct answer you only tell it whether the answer it produced was good or bad ok now there is like a harder form of reinforcement learning where what the", "start_timestamp": "00:01:47", "end_timestamp": "00:02:12", "start_second": 107, "end_second": 132, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=107s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "machine sees next depends on the answer it got and then there is a problem of exploration exploitation etc but even without talking about this if you look at how long it takes for a machine that just learns by pure reinforcement learning to learn to play an Atari game a very simple Atari game from the 1980s it takes the equivalent on average of 80 hours of training to reach the performance that any human can reach in about 15 minutes those machines actually get to superhuman performance but it takes them", "start_timestamp": "00:02:12", "end_timestamp": "00:02:40", "start_second": 132, "end_second": 160, "url": 
"https://www.youtube.com/watch?v=A7AnCvYDQrU&t=132s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "a long time the Go system that was produced by DeepMind and the one that was produced by Facebook a little bit later I know the numbers for Facebook because they published them and also they're my friends this takes about 20 million self-play games to reach superhuman performance running on 2,000 GPUs for two weeks this is a lot of games more games than any human can play Go yes Go is complicated and Starcraft this is recent the paper actually just appeared last week but the results have", "start_timestamp": "00:02:40", "end_timestamp": "00:03:24", "start_second": 160, "end_second": 204, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=160s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "been known for a while from DeepMind the AlphaStar system takes about 200 years of equivalent real time to learn to play on a single map for a single player a single type of player if you want and that's an enormous amount of computation there are rumors that just to train this for a week or two that team took more computational resources at Google than all the rest of research okay and you know similarly there is a recent demo by OpenAI and of course an accompanying paper on in-hand manipulation learned in simulation and then", "start_timestamp": "00:03:24", "end_timestamp": "00:04:03", "start_second": 204, "end_second": 243, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=204s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "you can sort of transfer this to a
real robot and it takes the equivalent of ten thousand years of real time in simulation so you can run simulation fast or you can run it in parallel it just costs money or power or CO2 emissions but it doesn't work in the real world so if you want to train a car to drive itself and you don't have an accurate enough simulation to train this in simulation it's not gonna work you'll need a car to drive itself for you know millions of hours cause thousands", "start_timestamp": "00:04:03", "end_timestamp": "00:04:40", "start_second": 243, "end_second": 280, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=243s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "of accidents destroy itself multiple times it will have to run off a cliff multiple times before it realizes it's a bad idea to run off a cliff when it starts it doesn't know anything about gravity or anything like that and so it's not practical for the real world although it may be practical in simulation if you can do an accurate enough simulation but it's gonna cost you a lot in terms of computation so how is it that humans can learn to drive a car in about 20 hours of training for most of us without causing any accidents also for most of", "start_timestamp": "00:04:40", "end_timestamp": "00:05:07", "start_second": 280, "end_second": 307, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=280s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "us right that's a big question how do humans and animals learn so quickly and what happens there is that it's not supervised learning it's not reinforcement learning it's something else and so when you look at babies you talk to a cognitive scientist a developmental cognitive scientist and you
ask them you know when do babies learn basic things like gravity you know when do they learn that objects are supposed to fall they'll tell you around nine months so before nine months old you show them the scenario here where there's a little car", "start_timestamp": "00:05:07", "end_timestamp": "00:05:43", "start_second": 307, "end_second": 343, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=307s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "on the platform you push it off and the car appears to float in the air it's a trick of course young babies barely pay attention that's just another thing they see like all the kinds of stuff they see every day they learn from every single one of them but it doesn't surprise them after nine months old they've learned about gravity and they look at this like the little girl here really really surprised because in the meantime they've learned that objects are not supposed to you know kind of float in the air they're supposed to fall if they are", "start_timestamp": "00:05:43", "end_timestamp": "00:06:13", "start_second": 343, "end_second": 373, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=343s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "not supported and so that's actually a trick a method that I would say psychologists use to identify when babies learn new concepts so you know babies learn face tracking very quickly and you know there are computational models that learn kind of face detection based on motion you know self-supervised learning on motion so that could be learned really quickly the notion of object permanence the fact that when an object is hidden behind another one", "start_timestamp": "00:06:13", 
"end_timestamp": "00:06:44", "start_second": 373, "end_second": 404, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=373s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "is still there we don't seem to be born with this but we learn this quite quickly as well whereas many animals are born with this the distinction between animate and inanimate objects that's learned around three months there are objects whose trajectories are completely predictable and others that are not animate objects and then gravity inertia conservation of momentum basically what we call intuitive physics that comes much later around nine months and it looks as if or maybe that's our hypothesis but you know babies kind of", "start_timestamp": "00:06:44", "end_timestamp": "00:07:19", "start_second": 404, "end_second": 439, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=404s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": 
"https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "makes this favorable, in the sense that, you know, connections from the left eye and the right eye actually go to the same place in the cortex, so if the cortex wants to compute disparity it's easy for it, the wires are there; okay, but the function, not really. And so here is how you could learn that the world is three-dimensional: if you train your visual system to predict what the world is going to look like when you move your head, the best explanation for how the world changes is the fact that", "start_timestamp": "00:07:54", "end_timestamp": "00:08:24", "start_second": 474, "end_second": 504, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=474s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "every pixel, every location in the world, has a depth, right, because then you get parallax motion. So implicitly, if you want to predict what the world is going to look like when you move your head, you're going to have to learn that, implicitly, even if you have no idea that the world is three-dimensional; that's the best way to explain how the world changes. Okay, so that's an idea that suggests how we can learn very simple concepts just by learning to predict, essentially, and that's going to be the general theme of this talk, which", "start_timestamp": "00:08:24", "end_timestamp": "00:08:49", "start_second": 504, "end_second": 529, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=504s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "is learning to predict; prediction is the essence of intelligence, in my opinion. And so we build models of the world that allow us to learn to drive in 20 hours, to
do all kinds of stuff, but animals do that too. So I really love this video of this little baby orangutan here who is being shown a magic trick where you put a cherry in a cup, and then the cherry is removed but he doesn't see that, and then the cup is empty, and he rolls on the floor laughing. Okay, so his model of the world is obviously being violated and he finds that funny. I", "start_timestamp": "00:08:49", "end_timestamp": "00:09:31", "start_second": 529, "end_second": 571, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=529s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "mean, there are these two things that happen when your model of the world is violated: either you find it funny, or you find it scary, because here is something you didn't predict and it could kill you; in both cases you pay attention. Okay, so that brings us to this idea of self-supervised learning, this idea of learning by prediction. So not learning a task, not learning to classify objects into categories that, you know, come to you from a deus ex machina, but learning the structure of the world by just observing", "start_timestamp": "00:09:31", "end_timestamp": "00:10:05", "start_second": 571, "end_second": 605, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=571s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "the world, essentially. So the basic hypothesis of this, or the principle that you can base this on, is: predict everything from everything else. What do I mean by this? Let's say you have a piece of data; for the sake of concreteness, let's think about a video clip, for example. There's going to be a piece of that data that you're going to tell the machine it can look at, and there's another piece that the
machine pretends it doesn't know, it doesn't see; here it's the future frames of the video. Okay, so it", "start_timestamp": "00:10:05", "end_timestamp": "00:10:37", "start_second": 605, "end_second": 637, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=605s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "looks at the video up to a point, and then it tries to predict the rest of the video, but it pretends it doesn't know it yet, and then it trains itself to predict it. Of course it can just wait and observe what's going to happen in the world, and it trains to predict it by just observing what happened. Another form of this is what's called masked self-supervised learning: you give a piece of data (it's very popular in the context of natural language processing these days), take a window of text, a bunch of words, you remove some of the words, and you ask", "start_timestamp": "00:10:37", "end_timestamp": "00:11:08", "start_second": 637, "end_second": 668, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=637s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "the machine to predict the words that are missing. In the process of doing so, the machine has to basically develop a representation of language that allows it to make those predictions, and basically in the process of doing this it kind of understands language: not completely, not deeply, but still. But really, more generally, it is the idea of taking a piece of data and asking the machine to predict a piece of it from the piece that it sees. So as I just mentioned, this type of learning, in the last year,", "start_timestamp": "00:11:08", "end_timestamp": "00:11:42", "start_second": 668, "end_second": 702, "url":
"https://www.youtube.com/watch?v=A7AnCvYDQrU&t=668s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "has become extremely popular in natural language processing, and has actually brought about a huge improvement in performance of all natural language processing systems, including translation and search at Google. There's been a series of ideas, you know, going back to the 90s on this, but really the paper that convinced everyone that this was the thing to do came up on arXiv in October last year from Google, Google AI, or Google Brain actually, and they use a particular type of neural net, a gigantic", "start_timestamp": "00:11:42", "end_timestamp": "00:12:16", "start_second": 702, "end_second": 736, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=702s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "one, called the transformer architecture. So the transformer architecture is kind of a funny kind of neural net where groups of neurons basically implement some sort of memory module, a differentiable memory module: they don't just compute weighted sums; they compute weighted sums, but then they compare those weighted sums with vectors called keys, and that gives them scores that you normalize to one, and then you compute a linear combination of other vectors. It is sort of complicated, but it's kind of an
associative memory; every module in there is an associative memory, and you put 40 layers of those, with hundreds of millions of parameters, and you train this on billions of words of text. And you train it in the following way: you take a window of a few hundred words, you take out 15% of the words, and you train the machine to just predict the missing words. Now the machine cannot do a perfect job at this, so what it outputs, for each word that is missing, is a probability vector whose size is the size of the", "start_timestamp": "00:12:43", "end_timestamp": "00:13:11", "start_second": 763, "end_second": 791, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=763s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "dictionary, and it gives you a probability for every word, right; so that's the way it handles uncertainty in the prediction: it produces a large probability vector. This has completely revolutionized NLP; everybody does this now. It works so well that Google deployed this, like, in the last few weeks, basically as a way of, for example, if you ask a question to Google it will produce an answer, and the answer is computed by something like this. Facebook has developed and deployed things like this for translation and content", "start_timestamp": "00:13:11", "end_timestamp": "00:13:44", "start_second": 791, "end_second": 824, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=791s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "filtering, hate speech detection, all kinds of stuff. Yes? Right, well, yeah, here you know what those words are, right, but it's not quite that; it's supervised learning with two differences. One is you don't have an extraneous piece of data you ask the machine to predict, so basically you're not asking
it to perform any task other than understanding the input data, the internal structure of the input data. The second thing is that the prediction cannot be known exactly, because, you know, you can't predict exactly which word is going to", "start_timestamp": "00:13:44", "end_timestamp": "00:14:26", "start_second": 824, "end_second": 866, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=824s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "go here, and so you need to deal with uncertainty, and those are the crucial points that distinguish this from sort of regular supervised learning. Okay, it doesn't work so well for... yes? Yes, it produces a probability vector over all words; it's a separate one for every word, by the way, so there's no consistency: if you pick one word you can pick another word independently from that distribution vector. Yeah. So of course people tried to do this for images; the equivalent of this would be take an", "start_timestamp": "00:14:26", "end_timestamp": "00:15:05", "start_second": 866, "end_second": 905, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=866s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "image, you know, blank out some of the areas of this image, and then train a neural net, a convolutional net or something, to predict the missing parts. The problem with this is that now the distribution of outputs is over a high-dimensional continuous space, and we don't know how to parameterize good distributions over those, so those so far have not been very successful, not to the extent that the text models have been successful. So the way you use those things is you train this network and then you take the internal representation of language that", "start_timestamp": "00:15:05",
"end_timestamp": "00:15:35", "start_second": 905, "end_second": 935, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=905s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "those things learn and use it as input to a supervised task: hate speech detection, answering questions, you know, whatever. There's a group of students at Facebook in Paris who have used this for training a translator: you give a sentence in English and a sentence in French, you remove different words randomly from the two sentences, and then you ask the system to translate, and the magic thing is that because some of the words that are removed from the French version are present in the English version, it learns to produce a", "start_timestamp": "00:15:35", "end_timestamp": "00:16:08", "start_second": 935, "end_second": 968, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=935s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "representation that is independent of language, so what you get in the end is a meaning representation that works for English and French, and you have two encoders, one for English and one for French. Google has a version of this with, you know, a hundred languages; Facebook now has a version of this that handles many languages. Those are massive networks; the latest, biggest ones are tens of billions of parameters, it's just ridiculously large. Yeah, it's embedding on steroids, exactly, yeah. So because you can't, you know, you can't", "start_timestamp": "00:16:08", "end_timestamp": "00:16:54", "start_second": 968, "end_second": 1014, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=968s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail":
"https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "pick pixels independently of each other. So this is a trick that DeepMind has proposed, by a former postdoc of mine, Karol Gregor, which is: you make the prediction of pixels sequential, and you turn it into a classification problem over the gray scale, where each pixel is, you know, one among 256 values. It just strikes me as wrong; it kind of works surprisingly well, but, you know, it can't be the ultimate answer, I think we'll find something better. So there are actually, yeah, there are studies about this, people who", "start_timestamp": "00:16:54", "end_timestamp": "00:17:33", "start_second": 1014, "end_second": 1053, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1014s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "have tried to study what the representation inside is; so Chris Manning at Stanford has done some work on this with his various groups, and it seems that those things actually represent meaning to some extent, right. It's not a deep understanding of text, you know, it's shallow, because those are words that are not connected with the real world, right; I mean, the thing only sees text. It's a big question, and it has the linguistics community up in arms, because it basically, you know, breaks their entire universe, okay, of, like, you know, what", "start_timestamp": "00:17:33", "end_timestamp": "00:18:04", "start_second": 1053, "end_second": 1084, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1053s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "about grammar, what about, you know, semantics, what about all those things; it's all statistics, you know. What about symbol manipulation, right? Those things basically
just represent everything by vectors, they embed everything in vector spaces, and so the Chomskyan linguists say oh my god, and they write books against this. Okay, so in self-supervised learning you train a system with sort of a pretext task, which is not really a task, it's just reconstruction or prediction, and as I said it works really well for text", "start_timestamp": "00:18:04", "end_timestamp": "00:18:40", "start_second": 1084, "end_second": 1120, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1084s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "and symbols, and people use this now for, like, you know, DNA sequences, all kinds of stuff; it's very new. Images? Not so much. Video? Not so much either. Signals, audio? Not so much either. There are some results, so it improves things a little bit, but they're not as successful as in NLP; in NLP they're incredibly successful. Okay, there's another reason why we might want to use self-supervised learning, and it goes back to this idea of training a car to drive itself: the reason why we can learn to drive a car in 20 hours with", "start_timestamp": "00:18:40", "end_timestamp": "00:19:11", "start_second": 1120, "end_second": 1151, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1120s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "out crashing most of the time is that we have this model of the world that allows us to predict the consequences of our actions. So we know that if we drive next to a cliff and we turn the wheel to the right, you know, the car will veer off to the right, it will run off the cliff and crash at the bottom, because we know about gravity, and nothing good is going to come out of it, so we don't
even try, right, because we have this predictive model; we can predict the consequences of some of our actions, at least. So the way", "start_timestamp": "00:19:11", "end_timestamp": "00:19:38", "start_second": 1151, "end_second": 1178, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1151s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "this works is actually a very standard thing in optimal control theory, which is: if you have a predictive forward model of the world that gives you the state of the world at time t plus 1 as a function of the state of the world at time t, your action, and perhaps some latent variable that represents all the stuff you don't know about the world, then you can sort of run this in your head; you can, you know, run your world model in your head with a proposed sequence of actions and see what the result will be, and you can measure the cost of it. You", "start_timestamp": "00:19:38", "end_timestamp": "00:20:04", "start_second": 1178, "end_second": 1204, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1178s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "know, you can have an internal cost for how good things are, you know, 'I don't want to crash', right, and so you can sort of run this model forward and perhaps infer an action sequence that will minimize your cost. Right, and that model will have to be learned with self-supervised learning, basically: here is a state of the world, let me take an action and see what the result is, or not take an action, just because the world is being the world. And that's the same problem we need to solve here, the self-supervised learning", "start_timestamp": "00:20:04", "end_timestamp": "00:20:33", "start_second": 1204, "end_second": 1233, "url":
"https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1204s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "problem, and the main issue is that the world is not deterministic. Okay, so that leads to this picture of the three paradigms of learning, if you want: reinforcement learning, supervised learning, and self-supervised learning. The difference is how much feedback information you give to the system at every trial or every sample: here it's just one scalar, here it's just a few bits for example, and here it's basically a whole video, right; it's a huge amount of information you give to the machine. So the hope is that you can train", "start_timestamp": "00:20:33", "end_timestamp": "00:21:06", "start_second": 1233, "end_second": 1266, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1233s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "gigantic networks without them being, you know, ridiculously overfitted, and they will learn a lot of the structure of the world just by observation, without actually taking any risk and without you spending money collecting labels. That's probably how humans and animals learn so much; that might be how common sense emerges, right: the accumulation of all the background knowledge we have about the world, that we accumulate by observation; that's the basis of common sense, essentially. So we need to get machines to do this, and, I", "start_timestamp": "00:21:06", "end_timestamp": "00:21:38", "start_second": 1266, "end_second": 1298, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1266s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "mean, I've been sort of
advocating for this for a while, and, you know, I made these obnoxious slides, with reinforcement learning being the cherry on the cake of machine learning and self-supervised learning being the dark matter of AI: we don't know what it is; it's actually the dark energy, you know, it's most of the energy. Okay, so: the next revolution will not be supervised, and it will not be reinforced either. I got this from Alyosha, of course, my colleague who is from Berkeley and also at Facebook", "start_timestamp": "00:21:38", "end_timestamp": "00:22:09", "start_second": 1298, "end_second": 1329, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1298s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "here, who says labels are the opium of the machine learning researcher; so, you know, it's all like revolutionary statements, and some dude actually produced a t-shirt that you can buy. Okay, so that brings me to energy-based models, which really is a proposal for how we approach this problem. So the main problem is: how do we predict with uncertainty? If I do an experiment, and I'll come back to this, if I do an experiment which is I put a pen on the table and let it go, and film it, and if I repeat the experiment", "start_timestamp": "00:22:09", "end_timestamp": "00:22:49", "start_second": 1329, "end_second": 1369, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1329s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "multiple times, it's not the same video clip every time; the pen will fall in a different direction. And if I ask you to predict what is going to be the state of the world in two seconds, you can tell that the pen is going to fall, but you can't really tell in which
direction, right, most of the time. So if you train a deterministic function to make one prediction, the best thing it can do is predict the average of all the possible futures, which would be a transparent pen in all possible configurations. And if you actually do", "start_timestamp": "00:22:49", "end_timestamp": "00:23:14", "start_second": 1369, "end_second": 1394, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1369s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "this, you train a system to predict upcoming video frames, where the first four frames of the video are observed and the last two are predicted, you get blurry predictions; they're basically the average of all the stuff that could happen, and the machine can't decide which one: it has to make one prediction. Yes? Okay, so adversarial networks try to solve the same problem in a different way from the one I'm going to explain, but I'll come back to that analogy later. Okay, so the point is, if you have an", "start_timestamp": "00:23:14", "end_timestamp": "00:23:58", "start_second": 1394, "end_second": 1438, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1394s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "input, a deterministic function that produces one output, and some cost function here that measures the discrepancy, and if this cost function is only zero when the prediction and the observation are the same, then this guy can only predict the average. Now, to come back to your point, if you make this cost function complicated in such a way that it doesn't compare points but it compares, you know, distributions, for example, then yes, but then that becomes complicated; that's what adversarial training is about: you",
"start_timestamp": "00:23:58", "end_timestamp": "00:24:26", "start_second": 1438, "end_second": 1466, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1438s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "have to train this thing to compare distributions, which is hard. Okay, so we're not going to use a deterministic function. So here is the crux of energy-based models, and it's very connected to things like the factor graphs that people were talking about in the context of graphical models and Bayesian networks and stuff like that. So basically you have an input, an observation, you have an observed or hypothesized prediction, and you have an energy function here that measures the compatibility between the two: if the", "start_timestamp": "00:24:26", "end_timestamp": "00:25:00", "start_second": 1466, "end_second": 1500, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1466s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail":
"https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "minimization and how to produce multiple answers but that's the inference mechanism so the energy here is not used for training is used for inference it's very different so you could say well alright you know that's an energy function but you know you can take the exponential and normalize with a gives distribution and it gives you a probability yes except that I don't actually want the log of a probability I don't want the energy to be the log of a probability here the probability is a set of measure zero its peak like if I", "start_timestamp": "00:25:33", "end_timestamp": "00:26:05", "start_second": 1533, "end_second": 1565, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1533s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "know anything about math right you know it's a thin plate and so the the distribution here would be you know you know infinity on that point on that on that manifold and 0 just epsilon that side of it which means this energy function or ever you parameterize it will have to have infinite parameters infinite weights something you can set your own net and it's not very useful because you can't do inference with this it becomes a golf course you can do inference what you want is a function that is smooth so that at any point here", "start_timestamp": "00:26:05", "end_timestamp": "00:26:38", "start_second": 1565, "end_second": 1598, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1565s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "the gradient of that function might tell you where to go to get a point from the manifold so this is very emphatic you do not want to learn distributions they're bad 
for you right maximum rank liquid sucks it just doesn't do the right thing this is big mistake that actually gets to the original formulation insists that you know this should be you know one here zero outside and that's just it's just a bad idea so that's an example where you know applying probability theory blindly actually is bad for you and I've been", "start_timestamp": "00:26:38", "end_timestamp": "00:27:12", "start_second": 1598, "end_second": 1632, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1598s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "trying to snap people out of it for twenty years without success so far okay so this is the what I just showed you is the conditional version and there's an unconditional version where you don't actually have an observation the only thing you want to know is like model the internal consistency of why and really those two problems are not that different from each other in the first case you know a priori which set of variables are observable in the second case you don't know which part of why it's going to be observed and so here", "start_timestamp": "00:27:12", "end_timestamp": "00:27:42", "start_second": 1632, "end_second": 1662, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1632s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "the what the model gives you is kind of a dependency you know a function that gives you the dependency between y1 and y2 in this case so things like auto encoders or this type yes okay it's akin to negate you look like a hood but you don't want to train you to with maximum maximum okay good or not at least without heavily regularizing it and you don't need the normalization because it's not like you're gonna sample from it 
anyway so in the end it's just an energy that's the more elementary concept that you can derive from and you", "start_timestamp": "00:27:42", "end_timestamp": "00:28:15", "start_second": 1662, "end_second": 1695, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1662s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "know physicists in the audience know that energy is more fundamental than probabilities probability is going to derive from from it or Hamiltonian if you're a quantum physicist but it's not probability this amplitude well whatever okay so how do we trade on energy base model so of course we're gonna primate Francis in some way right it's going to be some sort of neural net with a particular architecture and it's gonna have parameters in it and we need to train it in such a way that it takes to shape so that the data the training data", "start_timestamp": "00:28:15", "end_timestamp": "00:28:45", "start_second": 1695, "end_second": 1725, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1695s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "we observe take low energy and everything else has higher energy and not specifying that it should be the you know that the difference of energy should be akin to wish you know difference of love of our abilities insist on that and there are two classes of methods for doing this without contrastive methods and architectural methods so quadratic methods basically consists in pushing down on the energy of points of data points right so give a give a pair XY to the model and twist the parameters so that the energy coming", "start_timestamp": "00:28:45", "end_timestamp": "00:29:19", "start_second": 1725, "end_second": 1759, "url": 
"https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1725s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "out of it goes down. And then there's the contrastive term, which prevents this thing from just collapsing to zero everywhere: it picks points intelligently outside and pushes their energy up. And the problem is how intelligent you have to be in how you pick those points. And by the way, GANs are an example of this: a GAN is an example where the discriminator is this energy function and the generator is the smart system that picks out the points whose energy it's going to push up; that's called an energy-based GAN, there's a paper on this", "start_timestamp": "00:29:19", "end_timestamp": "00:29:55", "start_second": 1759, "end_second": 1795, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1759s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "from a few years ago. And then there are architectural methods, and those consist in building the energy function in such a way that the volume of stuff that can take low energy is limited or minimized, either by construction or through some regularization term, and I'll come to how you do this in a minute. But let's start with... okay, so there are all kinds of traditional unsupervised learning methods that you can cast in that language. As I said, basically the basic idea of contrastive", "start_timestamp": "00:29:55", "end_timestamp": "00:30:30", "start_second": 1795, "end_second": 1830, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1795s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "methods is
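The push-down / push-up recipe described here can be sketched in a few lines. This is a hypothetical toy, not the talk's formulation: a linear energy E(y) = ||Wy||^2, made-up data on a line, and a hinge margin of 1.0 on random contrastive points. Pushing down on data alone would collapse W toward zero (a flat, useless energy), which is exactly what the contrastive term prevents.

```python
import numpy as np

rng = np.random.default_rng(0)

# Data lives on a 1-D line in 2-D: y = t * (1, 2).  A good energy should be
# ~0 on this line and positive off it.
t = rng.normal(size=(512, 1))
data = t * np.array([1.0, 2.0])

W = rng.normal(scale=0.5, size=(1, 2))       # energy E(y) = ||W y||^2

def energy(W, Y):
    return np.sum((Y @ W.T) ** 2, axis=1)

lr, margin = 0.01, 1.0
for _ in range(500):
    pos = data[rng.integers(0, len(data), 64)]
    neg = rng.uniform(-3, 3, size=(64, 2))   # contrastive samples
    active = energy(W, neg) < margin         # hinge: push up only while E < margin
    # d||Wy||^2/dW = 2 (W y) y^T ; push DOWN on pos, UP on active neg
    g_pos = 2 * (pos @ W.T).T @ pos / len(pos)
    g_neg = 2 * (neg[active] @ W.T).T @ neg[active] / max(active.sum(), 1)
    W -= lr * (g_pos - g_neg)

on = energy(W, data).mean()
off = energy(W, rng.uniform(-3, 3, size=(512, 2))).mean()
print(on, off)   # energy should end up low on the data manifold, higher elsewhere
```

The hinge prevents collapse: if W shrinks, all negatives fall under the margin and the push-up term grows W back, so the only stable shape is low energy along the data line and higher energy off it.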
push down the energy of data points and push up everywhere else, which is what maximum likelihood does if you have a tractable partition function; or push down the energy of data points and push up on chosen locations: maximum likelihood with Markov chain Monte Carlo, contrastive divergence, metric learning, score matching, all this stuff, basically, are different versions of this, including GANs. And the third one is training a function that maps points off the", "start_timestamp": "00:30:30", "end_timestamp": "00:31:00", "start_second": 1830, "end_second": 1860, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1830s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "data manifold to points on the data manifold; that's called a denoising autoencoder, and that's what those large NLP models I was telling you about do: that's called a masked autoencoder, a particular case of denoising autoencoder. I'm gonna mention metric learning a little bit because it's one of the few cases where it works; it's in fact the only case we know that works in the context of images, so it's kind of important, and those results are recent, like last week. And then there are architectural methods, some of which some", "start_timestamp": "00:31:00", "end_timestamp": "00:31:31", "start_second": 1860, "end_second": 1891, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1860s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "of you I'm sure know. Things like PCA: in PCA you make sure the whole space is not reconstructed, because the representation is constrained to be low-dimensional; k-means similarly limits where the reconstruction can be perfect; etcetera. These are the ones I'm going to talk about because
that's where my money is right now: things like sparse coding and sparse autoencoders, which some of you of course have heard of; the other ones I'm not gonna mention. Okay, so how does it work in the context of PCA and k-means? In PCA, the region of the space", "start_timestamp": "00:31:31", "end_timestamp": "00:32:06", "start_second": 1891, "end_second": 1926, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1891s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "that is perfectly reconstructed is the principal subspace. In this case here, if the data points are sampled from this spiral, the principal subspace of dimension 1 is this, so this is where the reconstruction error is 0, and everywhere else it grows quadratically, because you take a point and you project it on this: if it's already there, the reconstruction error is zero; if it's here, the reconstruction error is the square of the Euclidean distance. Not a good model of a spiral, as you can tell. K-means now:", "start_timestamp": "00:32:06", "end_timestamp": "00:32:38", "start_second": 1926, "end_second": 1958, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1926s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "k-means is interesting because it has a latent variable in it, so the energy function is not directly a function, it's the minimum of some other, more elementary energy function. It's the min over a vector Z of the squared distance between the data point and this Z vector multiplied by a matrix whose columns are the prototypes of the k-means model, and you constrain this vector to be a one-hot vector: only one component can be one, the other ones have to be 0. And so you
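The k-means energy just described, E(y) = min over one-hot z of ||y - Wz||^2, reduces to the squared distance to the nearest prototype. A small sketch with made-up prototypes, including the soft "free energy" version F(y) = -1/beta * log sum_z exp(-beta * E(y,z)) that the talk also brings up (which recovers the hard min as beta grows):

```python
import numpy as np

# Prototypes as columns of W; a one-hot z selects one prototype, so
# E(y) = min_z ||y - W z||^2 is the squared distance to the nearest prototype.
W = np.array([[0.0, 3.0, 0.0],
              [0.0, 0.0, 4.0]])                  # three prototypes in 2-D (columns)

def kmeans_energy(y, W):
    d2 = np.sum((W - y[:, None]) ** 2, axis=0)   # ||y - w_k||^2 for each column
    return d2.min(), d2.argmin()

def free_energy(y, W, beta):
    d2 = np.sum((W - y[:, None]) ** 2, axis=0)
    # F(y) = -1/beta * log sum_z exp(-beta * E(y, z)); tends to the hard min as beta -> inf
    m = d2.min()                                 # subtract min for numerical stability
    return m - np.log(np.sum(np.exp(-beta * (d2 - m)))) / beta

y = np.array([2.9, 0.2])
e, k = kmeans_energy(y, W)
print(k, round(e, 3))                            # -> 1 0.05  (nearest prototype and its energy)
print(round(free_energy(y, W, beta=100.0), 3))   # -> 0.05    (~ hard min at large beta)
```

The exhaustive minimization over one-hot z is exactly a nearest-neighbor search over the prototypes, as the talk says next.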
have to do this search exhaustively, which is", "start_timestamp": "00:32:38", "end_timestamp": "00:33:15", "start_second": 1958, "end_second": 1995, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1958s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "akin to a nearest neighbor. And what you see here, those black areas, are the minima, centered on the prototypes. So k-means just puts prototypes more or less equally distant over the manifold. It looks great in two dimensions; in high dimension k-means really doesn't work that well. But what's interesting about both of those cases is that they work because the volume of the Y space that can take low energy is limited. That's a key concept. We already talked about maximum likelihood, so I'm going to", "start_timestamp": "00:33:15", "end_timestamp": "00:33:56", "start_second": 1995, "end_second": 2036, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1995s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "skip that. Okay, so that leads us to this idea of latent variable models: energy models F(x, y) that are actually defined by minimizing a more elementary energy function E(x, y, z) with respect to z, or by marginalizing over z, which is equivalent to defining F(x, y) as minus 1 over beta (which you can think of as an inverse temperature) times the log of the integral over z of the exponential of minus beta times the energy. This is a log partition function, for those of you who know, and this is a free energy, which is like F for the physicists in the room. So", "start_timestamp": "00:33:56", "end_timestamp": "00:34:33", "start_second": 2036, "end_second": 2073, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2036s", "title": "Yann LeCun:
\"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "there's the conditional version and the unconditional version, which just consists in taking X out. So that's what the model looks like: you have an observed variable, a variable you need to predict, and some latent variable you have to minimize over. Now why is that interesting? Latent variables are interesting because they are an essential tool for making a system able to predict multiple outputs instead of just one. So if I build a system out of deterministic functions (here I have a neural net with a few layers),", "start_timestamp": "00:34:33", "end_timestamp": "00:35:09", "start_second": 2073, "end_second": 2109, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2073s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "it produces some representation of the observed variables, and then I feed this to another neural net that I call the decoder, together with a latent variable. By varying this latent variable over a set, I can make the output vary over a set, maybe some complicated manifold if this network is complicated. And basically that allows me to solve this problem of not predicting the average: I can just predict the actual thing I'm observing by finding the latent variable that will make my model predict the best thing. So", "start_timestamp": "00:35:09", "end_timestamp": "00:35:40", "start_second": 2109, "end_second": 2140, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2109s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "here is how you train an energy-based model: you show it an x and a y, you find the Z that
minimizes the reconstruction error, and if that's not perfect, with one step of stochastic gradient descent you update the parameters of whatever functions you're using to make this small. This works great, except there's a slight problem with it, which is: imagine that Z has the same dimension as Y. So Z has the same dimension as Y, and the decoder is not a degenerate function, it's kind of a powerful", "start_timestamp": "00:35:40", "end_timestamp": "00:36:24", "start_second": 2140, "end_second": 2184, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2140s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "parameterized function. Then for any Y you show the machine, there's always going to be a Z that's going to reconstruct it perfectly, which means your energy surface can be completely flat. It's not a good model of the dependency of Y on X, because your energy function doesn't tell you which Y is good. So again, what we're going to have to do is limit the information capacity of Z, like we did with k-means: basically we're going to have to limit the volume of the Y space that can take low energy, and that volume will have to be commensurate with", "start_timestamp": "00:36:24", "end_timestamp": "00:36:57", "start_second": 2184, "end_second": 2217, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2184s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "the volume of our data manifold, essentially. Okay, but let's start with contrastive embedding. Contrastive embedding is the following idea: to handle the fact that multiple Y's are compatible with X, you feed both x and y to neural nets, and those neural nets will have invariances, and so you're going to be able to modify Y
for a given X without changing the output, because of the invariances built into the system, and that's a way of handling the fact that there are multiple Y's compatible with an X. But now the", "start_timestamp": "00:36:57", "end_timestamp": "00:37:37", "start_second": 2217, "end_second": 2257, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2217s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "way you need to train this is that you need to tell it: okay, here are two images, they are actually the same conceptually, so whatever representation you extract from this image should be similar to the representation you extract from that image. So basically I want H and H-prime here to be as close to each other as possible, because really they represent the same thing. But if you only do this, you get a collapse: basically those networks completely ignore the inputs and produce constant vectors. So you", "start_timestamp": "00:37:37", "end_timestamp": "00:38:05", "start_second": 2257, "end_second": 2285, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2257s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "need a contrastive term. The contrastive term, which is the part where you are pushing up the energy, is that you show pairs of examples that are dissimilar, and then you train those networks to produce outputs that are different from each other, and there are various loss functions to do this. In the business this is called a Siamese neural net, and it's an old idea, but it's been kind of revived more recently; it's been successful for training face recognition systems. And there's a", "start_timestamp": "00:38:05", "end_timestamp": "00:38:33",
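The Siamese training signal described here, pull similar pairs together and push dissimilar pairs apart up to a margin, is the classic contrastive loss. A minimal sketch with a hypothetical linear encoder standing in for the two shared-weight branches:

```python
import numpy as np

def embed(x, W):
    """Stand-in encoder: one linear layer (both branches share W, as in a Siamese net)."""
    return x @ W

def siamese_loss(h1, h2, same, margin=1.0):
    # similar pairs: pull embeddings together; dissimilar pairs: push apart up to a margin
    d = np.linalg.norm(h1 - h2, axis=1)
    return np.where(same, d ** 2, np.maximum(0.0, margin - d) ** 2).mean()

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 2))
xa, xb = rng.normal(size=(8, 4)), rng.normal(size=(8, 4))
same = np.array([True, False] * 4)          # which pairs are "conceptually the same"
print(siamese_loss(embed(xa, W), embed(xb, W), same))
```

Without the dissimilar (margin) term, the loss is minimized by a constant encoder, which is exactly the collapse the talk warns about.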
"start_second": 2285, "end_second": 2313, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2285s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "paper that just came out that actually uses self-supervised learning in vision to improve the performance over purely supervised learning; it's this one, MoCo, and it's using a trick to kind of slow down one of those networks. The main issue here is the difficulty of finding hard negatives: you have to mine your entire data set for images that the system thinks are similar to this one but really aren't, and that's really where things become complicated. But this paper I just mentioned actually improves", "start_timestamp": "00:38:33", "end_timestamp": "00:39:07", "start_second": 2313, "end_second": 2347, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2313s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "the performance of image recognition systems over purely supervised ones, with, you know, whatever your favorite architecture is; in these cases, large convolutional nets. In fact the name of the architecture is right here: this means ResNet-50 with four times the size of the feature maps; ResNet-50 is kind of a standard architecture for image recognition. So the BERT system that's used for NLP, those masked autoencoders or denoising autoencoders, the diagram looks like this: you start with a piece", "start_timestamp": "00:39:07", "end_timestamp": "00:39:49", "start_second": 2347, "end_second": 2389, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2347s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "of
data, you corrupt it, which means you remove some pieces, you run it through a few layers of neural net (there's a latent variable, which is implicit in those models, which is like which of the outputs is picked, as a function of the probability distribution on the output), and then you compare this with the actual data that you observed, and you train the entire system to minimize the reconstruction error. In continuous space, conceptually, what that does is this: if you imagine that your data manifold is this, those", "start_timestamp": "00:39:49", "end_timestamp": "00:40:24", "start_second": 2389, "end_second": 2424, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2389s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "points: you take a point, you corrupt it, so you add noise to it, for example, in this case, and then you train your parameterized neural net to map this input to the output: you feed this as input and you tell it, you should map it here. Once the system is trained, you can actually plot the vector field: those are little vectors that point in the direction where the neural net, if you feed it with this input, would take you (I mean, you have to lengthen them), and they almost all take you to the manifold here,", "start_timestamp": "00:40:24", "end_timestamp": "00:40:57", "start_second": 2424, "end_second": 2457, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2424s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "and the color here indicates the energy. So the energy is low on the manifold, which is what you want, and it's high outside, except there's a problem right here: there's a ridge here, and it's a kind of flat ridge, which is not good. So here the
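The corrupt-then-map-back training just described can be sketched with a linear manifold and a linear denoiser fit in closed form. The linear least-squares map is a stand-in for the neural net, and all the data here is made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Data manifold: the line y = x in 2-D.
# Denoising training pairs: (corrupted point, clean point).
t = rng.normal(size=(1000, 1))
clean = t * np.array([1.0, 1.0])
noisy = clean + rng.normal(scale=0.5, size=clean.shape)

# A linear denoiser fit by least squares stands in for the trained network:
# it learns to map corrupted inputs back toward the manifold.
D, *_ = np.linalg.lstsq(noisy, clean, rcond=None)

probe = np.array([[2.0, 0.0]])            # a point off the manifold
moved = probe @ D                          # where the learned map sends it

def dist_to_line(p):                       # distance to the line y = x
    return abs(p[0] - p[1]) / np.sqrt(2)

print(dist_to_line(probe[0]), dist_to_line(moved[0]))  # the map moves the probe toward the line
```

Plotting `moved - probe` over a grid of probes would give exactly the vector field pointing back to the manifold that the talk describes.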
reconstruction error is actually zero, because the system, once it's trained, can't decide whether to go this way or that way. So there's a flaw with this thing; there are ways to fix it, but it's not clear. The main issue with this is that it doesn't scale well in high dimension, because in", "start_timestamp": "00:40:57", "end_timestamp": "00:41:29", "start_second": 2457, "end_second": 2489, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2457s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "high dimension there are many, many ways to be different from a sample, and you're never going to explore the entire space. The point I'm making about the reconstruction error is that here the reconstruction error is zero, which means the energy is zero, so it's a phantom low energy. I think there are ways to fix it, but they are not cheap, and I'm not gonna go into them. Okay, so prediction with latent variables: as I told you before, I give you an X, I give you a Y, you find the Z that minimizes", "start_timestamp": "00:41:29", "end_timestamp": "00:42:11", "start_second": 2489, "end_second": 2531, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2489s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "the reconstruction error, and unfortunately, if Z is high-capacity, this is gonna give you a flat energy surface. So the solution to this is that you regularize Z: you basically add a term to the energy, lambda times R(Z), where R(Z) basically tells you whether you are in a particular region of the space that you're happy with, and so you pay a price for making Z go outside of that region. A good example of this, which is familiar to many of you: R(Z) could
be the l1 norm of Z. So if you put the l1", "start_timestamp": "00:42:11", "end_timestamp": "00:42:44", "start_second": 2531, "end_second": 2564, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2531s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "norm on Z, the sum of the absolute values of the components of Z, then to make this small you have to make as many of the components of Z zero as possible, and so you end up with a sparse representation, and that actually limits the volume of space that has low energy, essentially. This is what you get. So this is sort of the unconditional version of it, where there is no X, you're just modeling Y. Here I give you a Y; the regularizer, as before, is the l1 norm of Z; Z is multiplied by a matrix,", "start_timestamp": "00:42:44", "end_timestamp": "00:43:29", "start_second": 2564, "end_second": 2609, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2564s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "a decoding matrix; it produces a reconstruction; you measure the squared Euclidean distance between the two, and that's your energy function. This is classical sparse coding, in the applied math community at least, and you can generalize it. And what you get, when you train this on the little spiral here, is that the low-energy regions are basically a piecewise linear approximation of the manifold by sort of low-dimensional linear subspaces. The system works really well in high dimension, and that's the cool thing, and", "start_timestamp": "00:43:29", "end_timestamp": "00:44:07", "start_second": 2609, "end_second": 2647, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2609s", "title": "Yann LeCun:
\"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "it's been studied a lot. One thing that is not yet studied is: what if you make the decoder nonlinear? Let's say instead of having a matrix here that you multiply Z by, you have an entire neural net; what happens? I'll tell you about this a little bit later. Now here is the problem, though: finding (sorry, this is not what I wanted to show) for a given pair X, Y the Z that minimizes the sum of those two terms can be expensive. You have to do gradient descent in Z, and these can be non-", "start_timestamp": "00:44:07", "end_timestamp": "00:44:45", "start_second": 2647, "end_second": 2685, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2647s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "smooth functions; you have to do those l1 optimizations, ISTA or whatever; it can be expensive. So one idea is that you actually train a neural net to predict the optimal solution of that optimization problem. So ignore the gray part for now: I give you an X and a Y, I find the Z that minimizes the sum of this and that, and then I use this as a target to train a neural net which, from x and y, is going to predict this guy. And then, if this guy is well trained, I don't need to run the optimization algorithm for inference", "start_timestamp": "00:44:45", "end_timestamp": "00:45:24", "start_second": 2685, "end_second": 2724, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2685s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "anymore; I just need to run through the encoder. So it becomes very clear that it is
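The expensive inference the talk mentions, minimizing ||y - Wz||^2 + lambda * ||z||_1 over z, is exactly what ISTA does: a gradient step on the smooth reconstruction term followed by soft-thresholding. A sketch with a made-up random dictionary and a planted 2-sparse code:

```python
import numpy as np

def ista(y, W, lam=0.1, n_iter=200):
    """Minimize ||y - W z||^2 + lam * ||z||_1 over z by iterative shrinkage (ISTA)."""
    L = 2 * np.linalg.norm(W, 2) ** 2            # Lipschitz constant of the smooth part
    eta = 1.0 / L
    z = np.zeros(W.shape[1])
    for _ in range(n_iter):
        g = 2 * W.T @ (W @ z - y)                # gradient of the reconstruction term
        z = z - eta * g
        z = np.sign(z) * np.maximum(np.abs(z) - eta * lam, 0.0)  # soft threshold
    return z

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 30))
W /= np.linalg.norm(W, axis=0)                   # unit-norm dictionary atoms
z_true = np.zeros(30); z_true[[3, 17]] = [1.5, -2.0]
y = W @ z_true
z = ista(y, W, lam=0.05)
print(np.nonzero(np.abs(z) > 1e-3)[0])           # a sparse support, often just the planted atoms
```

Each ISTA iteration costs two matrix-vector products, and hundreds of iterations may be needed per sample, which is why amortizing this with a trained encoder (described next in the talk) is attractive.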
very important to limit the information content of Z, because the system can cheat here: it actually has access to the answer, and it can just copy the answer to the output. And so unless you have a way of restricting the information content of this Z, the system will completely ignore X. Okay, so if you have the unconditional version of this (we don't have this part), this is called a regularized autoencoder or sparse autoencoder. This", "start_timestamp": "00:45:24", "end_timestamp": "00:45:54", "start_second": 2724, "end_second": 2754, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2724s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "works really well, in the sense that if you train a sparse autoencoder like this, where the decoder is linear and the encoder is a few layers of a neural net, on MNIST at least, the columns of the decoding matrix end up being little parts of characters, which means you can reconstruct any character with a linear combination of a small number of those things; people call these things atoms. If you train on natural image patches (this is the learning algorithm running), you end up with oriented edge detectors, which is", "start_timestamp": "00:45:54", "end_timestamp": "00:46:22", "start_second": 2754, "end_second": 2782, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2754s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "great; you have to do a little bit of whitening for the images. If you do this in a convolutional mode, where the decoder is actually convolutional, so Z is not a vector, it's a bunch of feature maps, and then you run them through convolutions and you compute the sum, and that's how you decode, you get beautiful filters. So these are the basis functions in the decoder, the
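The amortized-inference idea described a moment ago, run the expensive optimizer offline to get z*, then train an encoder to predict z* directly, can be sketched as follows. Both the one-atom "optimizer" (a single matching-pursuit step) and the linear least-squares encoder are stand-ins for the real components:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the expensive optimizer: the best single-atom code for y,
# playing the role of z* = argmin_z E(y, z).
W = rng.normal(size=(8, 16))
W /= np.linalg.norm(W, axis=0)

def optimal_z(y):
    scores = W.T @ y
    z = np.zeros(16)
    k = np.abs(scores).argmax()
    z[k] = scores[k]                             # best single-atom code
    return z

# Amortized inference: fit an encoder on (y, z*) pairs so that test-time
# inference is one forward pass instead of an optimization loop.
Y = rng.normal(size=(500, 8))
Zstar = np.array([optimal_z(y) for y in Y])
E, *_ = np.linalg.lstsq(Y, Zstar, rcond=None)    # linear encoder (a real one would be a deep net)

mse_encoder = np.mean((Y @ E - Zstar) ** 2)
mse_zero = np.mean(Zstar ** 2)                   # baseline: always predict z = 0
print(mse_encoder, mse_zero)                     # the fit is at least as good as the baseline
```

A linear encoder cannot fully imitate this nonlinear argmax; the point of the sketch is only the training setup, targets produced by the optimizer, encoder fit to those targets.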
kernels that are used to reconstruct the outputs, and these are the weights in the first layer of the encoder (the encoder only has two layers in this case), and they're basically", "start_timestamp": "00:46:22", "end_timestamp": "00:46:53", "start_second": 2782, "end_second": 2813, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2782s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "mirror images of the decoder. And this is for 1, 2, 4, 8, 16, 32, 64 filters: you get a very high diversity of filters, center-surround, gratings, oriented edges at various frequencies. It's really nice. These are ten-year-old results; more recently we've revived this technology because it's very interesting. So this is again filters that are learned on natural image patches, from the CIFAR dataset, 9x9 kernels, and those are the corresponding feature maps, which are extremely sparse, and which can reconstruct basically any image in CIFAR with really", "start_timestamp": "00:46:53", "end_timestamp": "00:47:32", "start_second": 2813, "end_second": 2852, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2813s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "very good accuracy. Some other work we're doing along those lines is having multi-layer decoders. So basically, here is an image, and you take a bunch of feature maps here, run them through convolutions and ReLU, and convolutions and ReLU, and reconstruct; and then you can sort of stack multiple layers of those. You have to train this carefully (it's not easy if you don't know how to do it), but it kind of works. These are reconstructions: this is the original, and these are the kind of reconstructions you obtain with sparse", "start_timestamp": "00:47:32", "end_timestamp": "00:48:04", "start_second": 2852,
"end_second": 2884, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2852s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "representation. So if you only reconstruct from here and ignore the rest, you get sort of high-frequency information; if you only reconstruct from here and ignore this (you just run through this network), you can reconstruct a low-resolution version. So you can think of this as nonlinear wavelets, if you want; the system sort of naturally learns to represent this. Let me skip this. Okay, let me talk about this really quickly: something that's become very popular in the business is something called variational autoencoders,", "start_timestamp": "00:48:04", "end_timestamp": "00:48:39", "start_second": 2884, "end_second": 2919, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2884s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "and variational autoencoders are basically autoencoder models. They could be made conditional if you want, but I've grayed this out. And they are an example of a model where you also limit the capacity of the representation here in the middle, and the way you limit the information capacity of this vector is that you add noise. So basically: here's a Y, you run it through an encoder, you produce a prediction for what the code should be, and then you add additive Gaussian noise to it, and you run it through the decoder, and", "start_timestamp": "00:48:39", "end_timestamp": "00:49:12", "start_second": 2919, "end_second": 2952, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2919s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "there's a
constraint here (it's a penalty, really, used during learning) that the norm of the outputs of the encoder needs to be as small as possible; so it's l2 regularization, if you want, during learning. Now, how does that limit the information content of the code? Well, let's say that you train without noise: your autoencoder is going to assign a code (this is in code space), it's going to assign a code vector to every training sample; these are all the training samples. Now you add noise to", "start_timestamp": "00:49:12", "end_timestamp": "00:49:38", "start_second": 2952, "end_second": 2978, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2952s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "those guys: you turn them into fuzzy balls, and those fuzzy balls might overlap. So for example, this sample and that sample might end up being confused with each other, because when you add noise you can turn one into the other, and so the reconstruction error will probably increase. So what is the system going to do? Very easy: it's gonna make those fuzzy balls fly away from each other, so that they don't overlap. And that really is not that interesting: it just makes the norm of the output of the encoder larger, but it", "start_timestamp": "00:49:38", "end_timestamp": "00:50:12", "start_second": 2978, "end_second": 3012, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2978s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "doesn't do anything for you. So what you do is you play a trick: you attach each of those little fuzzy balls to the origin with a spring. You tell them: okay, you can fly away, but not too far. So the balls kind of have to overlap with each
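The mechanism just described, encoder output plus Gaussian noise, reconstruction through the decoder, and an l2 "spring" pulling codes toward the origin, can be sketched as two loss terms. The shapes and the cheating inverse-decoder encoder are illustrative assumptions, not the talk's model:

```python
import numpy as np

rng = np.random.default_rng(0)

Wd = rng.normal(size=(2, 2))                 # decoder matrix
y = rng.normal(size=(64, 2))                 # a data batch
code = y @ np.linalg.inv(Wd).T               # a perfect (cheating) encoder, for illustration

def vae_style_loss(code, y, Wd, sigma, spring):
    noisy = code + rng.normal(scale=sigma, size=code.shape)  # the "fuzzy ball"
    recon = noisy @ Wd.T
    rec_err = np.mean(np.sum((recon - y) ** 2, axis=1))
    penalty = spring * np.mean(np.sum(code ** 2, axis=1))    # the l2 "spring" to the origin
    return rec_err, penalty

rec0, _ = vae_style_loss(code, y, Wd, sigma=0.0, spring=0.1)
rec1, _ = vae_style_loss(code, y, Wd, sigma=0.5, spring=0.1)
print(rec0, rec1)   # noise makes reconstruction strictly harder on average
```

The trade-off the talk describes is visible here: the noise term pushes codes apart (to avoid confusing samples), while the spring term pulls them back in, and the balance limits the code's information capacity.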
other and construct some sort of data manifold, if you want, and two bubbles will overlap to the extent that the reconstruction error is not dramatic on the output. So there's a trade-off between the strength of that spring and the size of those bubbles, which in the case of", "start_timestamp": "00:50:12", "end_timestamp": "00:50:43", "start_second": 3012, "end_second": 3043, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=3012s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "variational autoencoders is actually maximized, and things like that. And if you read all the papers on variational autoencoders, it's never formulated like this: it's formulated as some variational lower bound on some probability distribution. But the mechanical analogy, I mean, makes it completely clear: this is just a way of limiting the information capacity of the code. Okay, I'm gonna end with an application of all this, which is the problem of predicting what the world around you is", "start_timestamp": "00:50:43", "end_timestamp": "00:51:16", "start_second": 3043, "end_second": 3076, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=3043s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "going to do, for things like avoiding bumping into other cars, for example. So I already talked about this idea that if you have a forward model of the world that gives you the state of the world at time T plus one as a function of the state at time T and the action you're gonna take, you can sort of roll out a sequence of actions in your head using this model, and then plan a sequence of actions that will minimize your cost; here the cost being: I want to stay in my lane, I don't want to
bump into other cars I don't want to get too", "start_timestamp": "00:51:16", "end_timestamp": "00:51:46", "start_second": 3076, "end_second": 3106, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=3076s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "close to any other cars okay and that's a differentiable cost so I'm not talking about reinforcement learning everything is differentiable everything is computable I don't need to try anything I mean I don't need to estimate gradients of stuff by trial and error everything is differentiable so the problem of course is that this model of what cars around you are going to do is not deterministic right there's a lot of things that cars around you are going to do that you know you may not predict", "start_timestamp": "00:51:46", "end_timestamp": "00:52:14", "start_second": 3106, "end_second": 3134, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=3106s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "and so there is a latent variable in the model that you're going to need to sample which is going to parameterize the set of all stupid things that cars around you can do and non-stupid things as well okay so you start from a state which you observe this is your current state this is where the cars around you are and you sample the latent variable you take an action your action and then the system gives you a prediction for where the cars around you are gonna be at time T plus one okay if you decide to turn the wheel the", "start_timestamp": "00:52:14", "end_timestamp": "00:52:44", "start_second": 3134, "end_second": 3164, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=3134s", "title": "Yann LeCun: \"Energy-Based
Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "world around you is gonna rotate okay so this is predicting what the world around you is going to look like and then what you could do is you can back propagate gradient from the cost to a network here that is supposed to predict the correct action from the state so should I turn the wheel should I brake should I accelerate and by sampling multiple samples and running this on different initial conditions you might have a car that trains itself to drive without actually driving just by thinking about it having trained", "start_timestamp": "00:52:44", "end_timestamp": "00:53:16", "start_second": 3164, "end_second": 3196, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=3164s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "its forward model by observing all the cars driving so the way we do this is that there is a camera that looks at a highway from the top and then you track every car and you extract a little rectangle around every car centered on every car and it turns with the car and so that's the world around every car and then you can record sequences of those little things by tracking every car and that constitutes a training set the set of videos centered on every car and so you give a few frames of this thing observed frames and you train a", "start_timestamp": "00:53:16", "end_timestamp": "00:53:51", "start_second": 3196, "end_second": 3231, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=3196s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "system that has latent variables and all that stuff to predict the next frame so the Z variables represent
all the stuff you can predict that the other cars are gonna do essentially right oh I see that's a good question I think it's a 256-dimensional vector so for inference you need to kind of sample Z for training Z is given to you by an encoder basically right but you need one of those information capacity reductions here which in our case is done by a combination of adding noise and what we", "start_timestamp": "00:53:51", "end_timestamp": "00:54:41", "start_second": 3231, "end_second": 3281, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=3231s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "call dropout but it basically sets Z to zero it forces it to be zero half the time and so it tells the system like you know even if you don't have a latent variable do a good job at predicting whatever you can and then half the time it lets the system use Z and the latent variable is combined additively with the representation extracted from the predictor so that zero has kind of a special meaning if you want so this is what it produces so this is a recording of the real world this is a prediction when you set Z to zero all the time and so", "start_timestamp": "00:54:41", "end_timestamp": "00:55:16", "start_second": 3281, "end_second": 3316, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=3281s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "you get blurry predictions and what you see here are I'm gonna restart if it wants to restart so what you see here are four different predictions you know run kind of recursively for different samplings of the Z variables and you see they predict different futures and it's indicated by the squares and circles here which
indicate cars that do different things for the different samples of Z the cost function for training this thing is very simple it's you know whether the car is in its lane or not and how far it", "start_timestamp": "00:55:16", "end_timestamp": "00:55:58", "start_second": 3316, "end_second": 3358, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=3316s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "is from its neighbors and so you can train this policy just by back-propagating the gradient of the cost through the entire system all the way down to the policy network if you do this it doesn't work because what happens is the system gets into regions of the space where the forward model does a really bad job at predicting but that happen to have low cost okay so the car you know goes off the road or something like that and this can be due also to flaws in the cost function but basically it doesn't do what you want so what you have to do is", "start_timestamp": "00:55:58", "end_timestamp": "00:56:34", "start_second": 3358, "end_second": 3394, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=3358s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "regularize it by forcing the system to stay within regions where the forward model is pretty sure of its predictions so that the system doesn't try to drive in crazy ways that are not present in the training set and where its forward model can't really predict what's going to happen accurately and you do this by estimating the uncertainty in the prediction of the forward model by sampling the output of the forward model with these random variables you can sample like the", "start_timestamp": "00:56:34", "end_timestamp": "00:57:03",
"start_second": 3394, "end_second": 3423, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=3394s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "drop out of in the network computing the variance of it and then using this as a term in the cost function so it forces the system to stay within a region of space where predictions are fairly reliable with low variance and this is where the system does so so this is the car being driven the green cars are recorded videos and the white dot indicates whether the car wants to turn accelerate brake etc and they it's perhaps more visible in this example so the yellow car is the car that is in the recorded video the blue car is the one", "start_timestamp": "00:57:03", "end_timestamp": "00:57:39", "start_second": 3423, "end_second": 3459, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=3423s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "that we are driving and it didn't change lane the problem is that the blue car invisible to the other ones and so it get squeezed and it has to escape because the other cars will just record it right so they don't they don't see the blue car is another example they're you know they are there is gonna less issues it's trying to stay sort of halfway between the cars in front in the back okay so that's slide I think this whole idea of supervised running is associated machine learning this don't necessarily believe me but that's where", "start_timestamp": "00:57:39", "end_timestamp": "00:58:19", "start_second": 3459, "end_second": 3499, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=3459s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} 
{"video_id": "A7AnCvYDQrU", "text": "everybody is I think we can learn complex hierarchical feature for low resource tasks which is which is becoming really important using supervised running actually in natural language it works is very important for natural language for example it's important for Facebook to be able to translate Burmese into English or to more precisely to actually train a classification system that detects his speech in Bernie's because there is a sneek conflicts in in in Myanmar and so you want to be able to detect a speech", "start_timestamp": "00:58:19", "end_timestamp": "00:58:52", "start_second": 3499, "end_second": 3532, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=3499s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "to prevent bad things from happening but how much data how much training data we have in Burmese so one way to do this is to kind of turn text into a language independent representation and then train a speech detector independently of language it's very important for low resource languages like Burmese or whatever I mean there is 2,000 language is something that people use on Facebook the advantage of that is that we can train massive networks it can accumulate a lot of background knowledge about the world in an on task dependent way and", "start_timestamp": "00:58:52", "end_timestamp": "00:59:25", "start_second": 3532, "end_second": 3565, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=3532s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "A7AnCvYDQrU", "text": "then we can use several techniques that handle uncertainty by to learn forward models for model-based control and reinforcement learning model base reinforcement random so my money currently is on energy based approaches latent 
variable models so that it can handle multi-modality regularized latent variable models to prevent this collapse problem in particular sparse latent variable models although the precise way how to make that sparse is not clear and then latent variable prediction through a trainable encoder that's what I'm", "start_timestamp": "00:59:25", "end_timestamp": "00:59:55", "start_second": 3565, "end_second": 3595, "url": "https://www.youtube.com/watch?v=A7AnCvYDQrU&t=3565s", "title": "Yann LeCun: \"Energy-Based Self-Supervised Learning\"", "thumbnail": "https://i.ytimg.com/vi/A7AnCvYDQrU/maxresdefault.jpg"} {"video_id": "ZPewmEu7644", "text": "hi this is Jeff Heaton welcome to Applications of Deep Neural Networks at Washington University in this video we're going to look at how we can use GANs to generate additional training data for the latest on my AI course and projects click subscribe and the bell next to it to be notified of every new video GANs have a wide array of uses beyond just the face generation that you often see them used for they can definitely generate other types of images but they can also work on tabular data and really any sort of data where", "start_timestamp": "00:00:00", "end_timestamp": "00:00:28", "start_second": 0, "end_second": 28, "url": "https://www.youtube.com/watch?v=ZPewmEu7644&t=0s", "title": "GANS for Semi-Supervised Learning in Keras (7.4)", "thumbnail": "https://i.ytimg.com/vi/ZPewmEu7644/maxresdefault.jpg"} {"video_id": "ZPewmEu7644", "text": "you are attempting to have a neural network that is generating data that should be real or could be classified as fake the key element to having something as a GAN is having that discriminator that tells the difference and the generator that actually generates the data another area that we are seeing GANs used a great deal is in the area of semi-supervised training so let's first talk about what semi-supervised training actually is and see how a GAN can be used to implement this first
let's talk about supervised training and", "start_timestamp": "00:00:28", "end_timestamp": "00:01:00", "start_second": 28, "end_second": 60, "url": "https://www.youtube.com/watch?v=ZPewmEu7644&t=28s", "title": "GANS for Semi-Supervised Learning in Keras (7.4)", "thumbnail": "https://i.ytimg.com/vi/ZPewmEu7644/maxresdefault.jpg"} {"video_id": "ZPewmEu7644", "text": "unsupervised training which you've probably seen in previous machine learning literature but just in case you haven't supervised training is what we've been doing up to this point I would say probably the vast majority of this class is in the area of supervised learning this is where you have multiple axes in the case of tabular data or grids and other things in the case of image data but you have some sort of input coming in which is the X and you know what the correct Y's are you are going to train the model to produce", "start_timestamp": "00:01:00", "end_timestamp": "00:01:35", "start_second": 60, "end_second": 95, "url": "https://www.youtube.com/watch?v=ZPewmEu7644&t=60s", "title": "GANS for Semi-Supervised Learning in Keras (7.4)", "thumbnail": "https://i.ytimg.com/vi/ZPewmEu7644/maxresdefault.jpg"} {"video_id": "ZPewmEu7644", "text": "these Y's when you have these X's because later on you're going to have X's coming in where you don't know what the Y is and that's where you want the neural network or other model to be able to give you some estimate as far as what the Y value is going to actually be unsupervised training is where we have the X's it could look just like this it would work with image data tabular or really just about anything but there is no y we're letting the neural network or whatever model it is and you know typically by", "start_timestamp": "00:01:35", "end_timestamp": "00:02:09", "start_second": 95, "end_second": 129, "url": "https://www.youtube.com/watch?v=ZPewmEu7644&t=95s", "title": "GANS for Semi-Supervised Learning in Keras (7.4)", "thumbnail": 
"https://i.ytimg.com/vi/ZPewmEu7644/maxresdefault.jpg"} {"video_id": "ZPewmEu7644", "text": "the way use neural networks for unsupervised training this is usually the area of things like k-means clustering and other things your classic unsupervised training is just going to take the inputs and cluster them in such a way so that similar ones are together these could be similar images these could be similar inputs in tabular data a variety of things semi supervised training it's actually much closer to supervised training I would say than unsupervised and this is where gams really shine and semi-supervised training you have X's", "start_timestamp": "00:02:09", "end_timestamp": "00:02:46", "start_second": 129, "end_second": 166, "url": "https://www.youtube.com/watch?v=ZPewmEu7644&t=129s", "title": "GANS for Semi-Supervised Learning in Keras (7.4)", "thumbnail": "https://i.ytimg.com/vi/ZPewmEu7644/maxresdefault.jpg"} {"video_id": "ZPewmEu7644", "text": "just like you have in these others but you don't have a label or a Y for every single one of them you might have a small number of them by oh no means have the complete data set label traditionally what would be done is these values that were not labeled would be left out because they there was no way to feed them into traditional supervised learning or you would train it on the ones that you did have Y's for with classic back propagation or however you were training that particular model then you would create predictions Y", "start_timestamp": "00:02:46", "end_timestamp": "00:03:19", "start_second": 166, "end_second": 199, "url": "https://www.youtube.com/watch?v=ZPewmEu7644&t=166s", "title": "GANS for Semi-Supervised Learning in Keras (7.4)", "thumbnail": "https://i.ytimg.com/vi/ZPewmEu7644/maxresdefault.jpg"} {"video_id": "ZPewmEu7644", "text": "predictions for all the missing values and then retrain the whole thing on the predictive values with the others in practice I never had a great deal of success with that 
technique but there is some theoretical basis for it with semi-supervised training and GANs we'll see that there's a way that we are able to actually make use of these now semi-supervised training this does make sense from a biological standpoint if you think about a child who is seeing all sorts of vehicles as they go about their daily lives with their parents or", "start_timestamp": "00:03:19", "end_timestamp": "00:03:55", "start_second": 199, "end_second": 235, "url": "https://www.youtube.com/watch?v=ZPewmEu7644&t=199s", "title": "GANS for Semi-Supervised Learning in Keras (7.4)", "thumbnail": "https://i.ytimg.com/vi/ZPewmEu7644/maxresdefault.jpg"} {"video_id": "ZPewmEu7644", "text": "whoever they're with and they're seeing all these vehicles as they pass on the street and they're not labeled nobody is telling them hey that's a vehicle seeing just a barrage of images as they grow up they learn edges they learn other sorts of things they learn how to classify if something is on top of something else just by observing there's no particular labels then eventually somebody says hey that's a bus that's a train that's a bicycle using that small handful of labels that they're given", "start_timestamp": "00:03:55", "end_timestamp": "00:04:29", "start_second": 235, "end_second": 269, "url": "https://www.youtube.com/watch?v=ZPewmEu7644&t=235s", "title": "GANS for Semi-Supervised Learning in Keras (7.4)", "thumbnail": "https://i.ytimg.com/vi/ZPewmEu7644/maxresdefault.jpg"} {"video_id": "ZPewmEu7644", "text": "when somebody actually tells them what they're looking at or they verify it independently that is semi-supervised training because it is building on those years and years of having unlabeled data that they didn't know what they were looking at but they knew they were looking at something and it just gives them additional reference that's exactly the same thing with semi-supervised training these values even though we don't have
Y's they're still valuable for the neural network to be learning structure in this", "start_timestamp": "00:04:29", "end_timestamp": "00:04:58", "start_second": 269, "end_second": 298, "url": "https://www.youtube.com/watch?v=ZPewmEu7644&t=269s", "title": "GANS for Semi-Supervised Learning in Keras (7.4)", "thumbnail": "https://i.ytimg.com/vi/ZPewmEu7644/maxresdefault.jpg"} {"video_id": "ZPewmEu7644", "text": "data as it is learning to predict the ones that we do actually in fact have the Y's for alright so let's look at the structure for this this is the structure of a normal image generating GAN baseline so to speak where the research started we saw this before but just to quickly review we have actual images they go into a discriminator and we have the generated images from the generator so the cyan pieces those are the two neural networks random seed values are causing that generator to generate images the discriminator is learning to", "start_timestamp": "00:04:58", "end_timestamp": "00:05:31", "start_second": 298, "end_second": 331, "url": "https://www.youtube.com/watch?v=ZPewmEu7644&t=298s", "title": "GANS for
Semi-Supervised Learning in Keras (7.4)", "thumbnail": "https://i.ytimg.com/vi/ZPewmEu7644/maxresdefault.jpg"} {"video_id": "ZPewmEu7644", "text": "train a semi supervised classification neural network it's very very similar to the diagram that we just looked at in this case we're looking at how we would train it on tabular data it's a medical record the discriminator would learn to tell the difference between a fake medical record or whatever the generator is generating this parts all the same as the previous one as is as is this part the difference is we're training it now to tell not just the difference between fake and real these are the real and this is this is fake we're teaching", "start_timestamp": "00:06:03", "end_timestamp": "00:06:37", "start_second": 363, "end_second": 397, "url": "https://www.youtube.com/watch?v=ZPewmEu7644&t=363s", "title": "GANS for Semi-Supervised Learning in Keras (7.4)", "thumbnail": "https://i.ytimg.com/vi/ZPewmEu7644/maxresdefault.jpg"} {"video_id": "ZPewmEu7644", "text": "it to learn classes so there's four different classes of say medical record that we're looking at maybe four different health levels we're teaching it as a classification neural network to classify between five things the four classes that were actually interested in and is it a fake once we're done training the whole thing we now have this discriminator that can tell the difference between fake and what the what the classes are we also have the generator that is able to generate these fake medical records but we can then", "start_timestamp": "00:06:37", "end_timestamp": "00:07:07", "start_second": 397, "end_second": 427, "url": "https://www.youtube.com/watch?v=ZPewmEu7644&t=397s", "title": "GANS for Semi-Supervised Learning in Keras (7.4)", "thumbnail": "https://i.ytimg.com/vi/ZPewmEu7644/maxresdefault.jpg"} {"video_id": "ZPewmEu7644", "text": "throw away the generator and we'll use the discriminator really truly as our actual neural network now for the 
medical records where we don't have the Y so we're missing this we still feed those in it's just now we're evaluating it not based on if it classified it correctly but just if it knew the difference between fake and real the Street View House Numbers data set is an image data set that is often used to demonstrate semi-supervised GAN learning and I have a link to a Keras example external to this class that", "start_timestamp": "00:07:07", "end_timestamp": "00:07:42", "start_second": 427, "end_second": 462, "url": "https://www.youtube.com/watch?v=ZPewmEu7644&t=427s", "title": "GANS for Semi-Supervised Learning in Keras (7.4)", "thumbnail": "https://i.ytimg.com/vi/ZPewmEu7644/maxresdefault.jpg"} {"video_id": "ZPewmEu7644", "text": "demonstrates this if you're interested in this sort of technology but what this does is you have data on these addresses from images that were taken on the sides of buildings and not all of those are labeled or you simulate them not all being labeled and you see that the GAN is capable of learning to classify these 10 digit types even though it doesn't have labels on each of those now if you want to do the same thing for regression it becomes very similar you have two outputs so you have a multi output neural network one is the actual", "start_timestamp": "00:07:42", "end_timestamp": "00:08:17", "start_second": 462, "end_second": 497, "url": "https://www.youtube.com/watch?v=ZPewmEu7644&t=462s", "title": "GANS for Semi-Supervised Learning in Keras (7.4)", "thumbnail": "https://i.ytimg.com/vi/ZPewmEu7644/maxresdefault.jpg"} {"video_id": "ZPewmEu7644", "text": "regression value that you're trying to train it on and the other is the probability that it's a fake record being generated now I'm doing tabular as just the example again these could be medical records and perhaps the regression output would be a health level or maybe a guess at how old the patient is or some other value perhaps if they have a current disease or not a prediction so
it's doing the same two things when you feed in medical records where we don't know the Y output then we want to see that this regression", "start_timestamp": "00:08:17", "end_timestamp": "00:08:55", "start_second": 497, "end_second": 535, "url": "https://www.youtube.com/watch?v=ZPewmEu7644&t=497s", "title": "GANS for Semi-Supervised Learning in Keras (7.4)", "thumbnail": "https://i.ytimg.com/vi/ZPewmEu7644/maxresdefault.jpg"} {"video_id": "ZPewmEu7644", "text": "on the fake record when we're feeding in values where we have the medical record where you don't have the Y we just want to make sure that the probability that it's a fake record is fairly high and that's built into the training we don't so much care about what it's regressing on or what the regression output is for ones where we do have it we're penalizing it based on how close or how far away it was from the expected Y from this and just like the classification one when we're all done with this we throw away the generator and the", "start_timestamp": "00:08:55", "end_timestamp": "00:09:24", "start_second": 535, "end_second": 564, "url": "https://www.youtube.com/watch?v=ZPewmEu7644&t=535s", "title": "GANS for Semi-Supervised Learning in Keras (7.4)", "thumbnail": "https://i.ytimg.com/vi/ZPewmEu7644/maxresdefault.jpg"} {"video_id": "ZPewmEu7644", "text": "discriminator becomes the semi-supervised neural network that was trained on this now if you want to go further with this semi-supervised learning technique I've given you a couple of links to articles that I found useful for this there is a link to the actual house numbers data set that's a pretty interesting data set to look at it has all those house numbers that you can deal with in several ways you can deal with classifying the individual digits they give you the bounding rectangles around the digits they also", "start_timestamp": "00:09:24", "end_timestamp": "00:09:52", "start_second": 564, "end_second": 592, "url":
"https://www.youtube.com/watch?v=ZPewmEu7644&t=564s", "title": "GANS for Semi-Supervised Learning in Keras (7.4)", "thumbnail": "https://i.ytimg.com/vi/ZPewmEu7644/maxresdefault.jpg"} {"video_id": "ZPewmEu7644", "text": "give you just the bounding rectangle of the entire set of digits if you want to so you can be classifying digits or you can be classifying the entire address it just depends on how you want to set up the problem the examples that I give you here we're using individual digits this is the original paper that first started looking at this unsupervised representation learning with deep convolutional generative Gass general generative adversarial Network I have a link to this paper in the module thank you for watching this video in the next", "start_timestamp": "00:09:52", "end_timestamp": "00:10:26", "start_second": 592, "end_second": 626, "url": "https://www.youtube.com/watch?v=ZPewmEu7644&t=592s", "title": "GANS for Semi-Supervised Learning in Keras (7.4)", "thumbnail": "https://i.ytimg.com/vi/ZPewmEu7644/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "once it is set maybe high level vision show some of the ideas I think are the big ideas in future learning I think the agenda of deep learning as the idea of using brain simulations to make learning algorithms might better than easy to use and also make revolution advances in machine learning and AI so come back to this later but you know once upon a time I guess when I was in high school I think I joined the field of machine learning because I want to work on AI but somehow that got lost and instead of actually doing AI we", "start_timestamp": "00:00:00", "end_timestamp": "00:00:36", "start_second": 0, "end_second": 36, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=0s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "wound up 
spending our lives doing curve fitting which is not what I signed up to do uh and and deep learning was for the first time in many years made me think about the bigger dreams again I should come back and say a bit more about that and and again I would say you know the sort of vision and ideas on a share is really not mine but as I think shared by large community including you know young Jeff Fenton yoshua bengio and many others that you hear from in the next couple weeks what about computers do with our data", "start_timestamp": "00:00:36", "end_timestamp": "00:01:02", "start_second": 36, "end_second": 62, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=36s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "right we want to talk with images and label them lock of audio listen to audio and do speech recognition have text and do stuff with text and it turns out that machine learning is our best shot and most of these applications today but it is very difficult to get these applications work right so while back i \u00f4ll some of my students at stanford to use like a state-of-the-art computer vision algorithm to to write the motorcycle detector and this was the resulting god and this is typical in computer vision right so um well even", "start_timestamp": "00:01:02", "end_timestamp": "00:01:42", "start_second": 62, "end_second": 102, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=62s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "though learning algorithms works each of those lines is like a you know six months to two years of work for a team of engineers and and we like these algorithms to be less work to build and also maybe perform better so let 
me start to explain some of these ideas using computer vision but and then I will talk about about audio and apply these albums other modalities as well so why is this problem hard right so obviously a motorcycle how on earth could a computer fail to recognize what this is zooming into small part of the", "start_timestamp": "00:01:42", "end_timestamp": "00:02:18", "start_second": 102, "end_second": 138, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=102s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "image zooming into whether little red square is where you and I see a motorcycle the computer sees this so the computer vision problem is to look at all those pixel intensity values and tell you that all those numbers represent the exhaust pipe of a motorcycle seems like you need a very complicated function to do that and how do we do this so machine learning you know machine learning guys like me say oh just feed the data to the learning algorithm and let the learning algorithm do its job right when I teach my machine", "start_timestamp": "00:02:18", "end_timestamp": "00:02:51", "start_second": 138, "end_second": 171, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=138s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "learning cause I draw pictures like this and this is just not how it works so let's pick a couple pixels and let's plot some examples right so take that image there and because pixel one is relatively dark and pixel two is relatively bright that image you know it has has that position in this figure now let's take a different example a different motorcycle image has a this has a bright two pixel one in the darker pixel too so that 
second image gets placed at a different location, and then let's do this for a few negative", "start_timestamp": "00:02:51", "end_timestamp": "00:03:23", "start_second": 171, "end_second": 203, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=171s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "examples as well, non-motorcycles, and what you find is that if you plot a set of positive and negative, motorcycle and non-motorcycle, images, your positive and negative examples are extremely jumbled together, and so if you feed this data to, say, a linear classifier, it doesn't work. So what is done in machine learning is we say it would be nice if you could come up with what's called a feature representation: if you could write a piece of code that tells you, does this image have handlebars in", "start_timestamp": "00:03:23", "end_timestamp": "00:03:55", "start_second": 203, "end_second": 235, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=203s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "it, does this image have tires or wheels in it, and if you could do that, then your data looks more like this on the lower right, and it then becomes much easier for, say, a linear classifier like a support vector machine or logistic regression to distinguish the motorcycles from the non-motorcycles. Right, but the story goes on. So in this illustrative example I was saying, well, it would be nice if we could write a piece of code to tell us whether there are handlebars and wheels, but we don't actually know how to do that, and so in computer vision", "start_timestamp": "00:03:55", "end_timestamp": "00:04:27", "start_second": 235, "end_second": 267, "url":
"https://www.youtube.com/watch?v=pfFyZY1RPZU&t=235s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "what is done is actually the following; this is how people actually come up with features in computer vision. That was a kind of notional, illustrative example, but this is what is actually done: I take my motorcycle and I'm going to detect edges at four different orientations, so look for vertical edges, horizontal edges, 45-degree edges, 135-degree edges, and then what this number 0.7 in the upper right means, right, that number is saying that the density of vertical edges in the upper right hand quadrant of my image is 0.7, and", "start_timestamp": "00:04:27", "end_timestamp": "00:05:05", "start_second": 267, "end_second": 305, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=267s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "what this number down here says is that the density of horizontal edges, you know, in the lower right hand quadrant of my image is 0.5, and in case you're getting the sense of, oh my god, what on earth is going on there, this seems horribly complicated, how on earth does anyone come up with this piece of code, you know, that's the point, and sadly this is the way that a lot of computer vision is done today. And this notion of a feature representation is pervasive throughout machine learning; in fact I", "start_timestamp": "00:05:05", "end_timestamp": "00:05:37", "start_second": 305, "end_second": 337, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=305s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail":
"https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "guess I live in Silicon Valley, and if you walk around Silicon Valley and look at where people are spending all the engineering time, it is often in coming up with these feature representations. So let's delve deeper: where do these features come from? Since they're the primary lens through which our algorithms see the world, this gives them a certain importance, right? So how about computer vision? And in fact this notion of feature representations is pervasive", "start_timestamp": "00:05:37", "end_timestamp": "00:06:07", "start_second": 337, "end_second": 367, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=337s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "you know, for vision, audio, text, even other applications. So where do you get the features from? In computer vision, the state-of-the-art answer for where the features come from is that teams of tens, hundreds, or maybe thousands of computer vision researchers have spent decades of their lives hand engineering features for computer vision. The figure on the upper left is a figure that I took from the SIFT paper; the SIFT paper is the single most highly cited paper in computer vision in, like", "start_timestamp": "00:06:07", "end_timestamp": "00:06:38", "start_second": 367, "end_second": 398, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=367s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "fifteen years, and I've read the paper maybe about five times now and I still have no idea what it's doing. Right, this is
so complex a piece of code that David Lowe, a good friend, will tell you this himself: it took David literally ten years, I'm not kidding, he'll say ten years himself, of, you know, fiddling with pieces of the code in order to come up with the SIFT feature, which works pretty well, but you have to ask, is there a better way to design features than this? That's vision; how about audio? Same thing", "start_timestamp": "00:06:38", "end_timestamp": "00:07:12", "start_second": 398, "end_second": 432, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=398s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "right, teams of tens, hundreds, thousands of audio researchers working on features for audio. MFCC is shown on the upper right; it's actually pretty clever and surprisingly hard to beat, but again, honestly, to this day I have a hard time understanding what some bits of the MFCC algorithm are doing. And natural language: in fact I think most of natural language processing, text processing, today is unapologetically about finding better features. So think about parsers, right, there's a lot of NLP work on parsers, um, this piece of", "start_timestamp": "00:07:12", "end_timestamp": "00:07:46", "start_second": 432, "end_second": 466, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=432s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "software that tells you where the noun phrases are in your sentence. I mean, why on earth do I care where the noun phrases are in my sentences? I really don't need software to tell me that. The only reason we spend so much time working on parsers is because we hope that this will give us useful features to then feed to some later
downstream application, like anti-spam, web search, or machine translation, that we actually care about. So coming up with features is difficult, time consuming, and requires expert knowledge, and when you're working with", "start_timestamp": "00:07:46", "end_timestamp": "00:08:14", "start_second": 466, "end_second": 494, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=466s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "applications of machine learning, you know, if you look at applied machine learning, at the companies and the people doing applied machine learning, this is really what they spend the vast majority of their time on: coming up with features. So can we do better? So the next piece is, like many of you, I tend to treat biological inspiration with a great deal of caution and even a healthy dose of skepticism, but for me, a", "start_timestamp": "00:08:14", "end_timestamp": "00:08:54", "start_second": 494, "end_second": 534, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=494s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "lot of my thinking about deep learning has taken inspiration from biology, so let me share with you, you know, some cool ideas from biological inspiration. It turns out there's a fascinating hypothesis that much of human intelligence can be explained by a single learning algorithm; this is sometimes called the one learning algorithm hypothesis. Let me share with you some evidence for this. Right, so this is an experiment first done on ferrets at MIT. That red piece of brain tissue shown on the slide is your auditory cortex",
"start_timestamp": "00:08:54", "end_timestamp": "00:09:28", "start_second": 534, "end_second": 568, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=534s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "the way that you're understanding my words now is that your ears are routing the sound signal to that red piece of brain tissue, which is processing the sound, and that's how you eventually get to understand what I'm saying. So neuroscientists did the following experiment, which is to cut the wire between the ears and the auditory cortex and do what's called a neural rewiring experiment, so that eventually the signal from the eyes gets routed to the auditory cortex. It turns out if you do this, that red piece of", "start_timestamp": "00:09:28", "end_timestamp": "00:10:00", "start_second": 568, "end_second": 600, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=568s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "brain tissue learns to see, and I mean the word see: this has been replicated in multiple labs on four species of animals, and these animals can, quote, see in every single sense of the word that I know how to use the word see. These animals can do visual discrimination tasks; they can look at things and make correct decisions based on, you know, an image in front of them, using that red piece of brain tissue. Another example: this red piece of brain tissue is your somatosensory cortex, which is responsible for your sense of touch; do a similar neural", "start_timestamp": "00:10:00", "end_timestamp": "00:10:30", "start_second": 600, "end_second": 630, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=600s", "title": "Andrew Ng: \"Deep Learning, 
Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "rewiring experiment and your somatosensory cortex learns to see. Um, so more generally, the idea is that if the same physical piece of brain tissue, right, the same physical bit of your brain, can process sight or sound or touch or maybe even other things, then maybe there's a single learning algorithm that can process sight or sound or touch or maybe other things, and if we can, you know, discover some approximation to that learning algorithm, or we discover a totally different algorithm that accomplishes the same thing, then that might be a better way for", "start_timestamp": "00:10:30", "end_timestamp": "00:11:05", "start_second": 630, "end_second": 665, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=630s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "us to make progress in AI than hand engineering separate pieces of code in each of these individual application silos, which we have been doing for decades. Now just a few more fun examples: it turns out you can plug other sensors into the brain and the brain kind of figures out how to deal with it. Shown on the upper left is seeing with your tongue, right; this is actually undergoing FDA trials now to help blind people see, a system called BrainPort. So the way it works is you strap a camera to your forehead; it takes a", "start_timestamp": "00:11:05", "end_timestamp": "00:11:37", "start_second": 665, "end_second": 697, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=665s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "low-resolution grayscale
image of what's in front of you, run a wire to a rectangular array of electrodes that you place on top of the tongue, so each pixel maps to a point on your tongue, and maybe a high voltage is a bright pixel and a low voltage is a dark pixel, and even as adults, you and I today would be able to learn to see with our tongues in like 10 to 20 minutes. Human echolocation: you know, snap your fingers or click your tongue, and there are actually schools today training blind", "start_timestamp": "00:11:37", "end_timestamp": "00:12:09", "start_second": 697, "end_second": 729, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=697s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "children to learn to interpret the pattern of sounds bouncing off the environment as human sonar. A haptic belt: you know, you wear a ring of buzzers around your waist, programmed so that the one facing north buzzes, and then you just magically know where north is, similar to how birds sense direction. You can plug a third eye into a frog and, you know, the frog learns how to deal with it. It doesn't work in every single instance, there are cases where this doesn't work, but I think to a surprisingly large extent it's almost as", "start_timestamp": "00:12:09", "end_timestamp": "00:12:39", "start_second": 729, "end_second": 759, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=729s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "if you can plug in, you know, not quite any sensor but a large range of sensors onto almost any part of the brain and it kind of figures out how to deal with it. So wouldn't it be cool if you could get a learning algorithm to do the
same. So let's take a break: I think you now know enough to look at questions one through three in the handout. Do you guys want to take a few minutes? So just write down what you think is the right answer, and when you've done so, you know, discuss what you wrote down with", "start_timestamp": "00:12:39", "end_timestamp": "00:13:15", "start_second": 759, "end_second": 795, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=759s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "your neighbors and see if you agree or disagree. For question one I had (d), for question two I had 'auditory cortex learns to see', and for question three, I don't know, different people have different ideas; I guess I tend to use the wording that much of human intelligence can be explained by a single learning algorithm, but there are lots of other wordings, lots of other ways of phrasing it. Alright, so given this, what are the implications for machine learning? Right, so here, we think that our visual", "start_timestamp": "00:13:15", "end_timestamp": "00:13:54", "start_second": 795, "end_second": 834, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=795s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "system computes an incredibly complicated function of the input, right; it looks at all those numbers, all those pixel values, and tells you that that's the motorcycle exhaust pipe. And so there are two approaches we could try to build such a system: you could try to directly implement this complicated function, which is what I think of as the hand engineering approach, or maybe you can try to learn this function instead. Right, and in
kind of a side comment, maybe only for the machine learning aficionados, is that if you look at", "start_timestamp": "00:13:54", "end_timestamp": "00:14:26", "start_second": 834, "end_second": 866, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=834s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "a trained learning algorithm, you know, a learning algorithm after it has been trained, with all the parameter values, it's a very complex thing, but the learning algorithm itself is relatively simple; most learning algorithms can be described in like half a page of pseudocode. So the complexity of the things we're training usually comes from the complexity of the data rather than the complexity of the algorithm, and that's a good thing, because we know how to get complex data, there are images all around us, but coming up with complex", "start_timestamp": "00:14:26", "end_timestamp": "00:14:52", "start_second": 866, "end_second": 892, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=866s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "algorithms is hard. Right, so here's a problem that I guess I posed a few years ago, which is, you know, can we learn a better feature representation for vision or audio or what have you? So concretely, can you come up with an algorithm that just examines a bunch of images like these and automatically comes up with a better way to represent images than the raw pixels? And if you can do that, maybe you can apply the same algorithm to audio and have the same algorithm train on a bunch of audio clips and have it find a", "start_timestamp": "00:14:52", "end_timestamp": "00:15:26", "start_second": 892,
"end_second": 926, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=892s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "better way to represent audio than the raw data. OK, so let's write down the mathematical formalism of this problem, which is: given a 14 by 14 image patch x, one way to represent the image patch is with a list of 196 real numbers corresponding to the pixel intensity values; the problem we want to pose is, can we come up with a better feature vector to represent those pixels? OK, and if you can do so, then this is what you can do. Here's a problem called self-taught learning, which is, well, so in traditional machine", "start_timestamp": "00:15:26", "end_timestamp": "00:16:06", "start_second": 926, "end_second": 966, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=926s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "learning, right, if you want to learn to distinguish motorcycles from non-motorcycles, you have a labeled training set, and this is a pain because there's a lot of work to come up with a lot of pictures of motorcycles, like tens of thousands of them. So in the unsupervised feature learning, or self-taught learning, problem, what you do instead is we're going to give you a large source of unlabeled images, in fact an effectively infinite source of unlabeled images, because of the web, where we all have an effectively infinite source of images", "start_timestamp": "00:16:06", "end_timestamp": "00:16:40", "start_second": 966, "end_second": 1000, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=966s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\", 
"thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "and the task is: can all those random images up there, pictures of trees and sunsets and horses and so on, somehow help you to do a better job figuring out that this picture down here is a motorcycle? OK, and so one way to do that is to have an algorithm that can look at these unlabeled images and learn a much better representation of images than just the raw pixels, and if that superior representation allows us to then look at a small labeled training set, and if this superior representation allows us to use", "start_timestamp": "00:16:40", "end_timestamp": "00:17:21", "start_second": 1000, "end_second": 1041, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1000s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "the small labeled training set to do a much better job figuring out what this test image is. OK, so I guess in machine learning there are sort of three common formalisms. Right, there's the supervised learning setting, which is the oldest, most standard one that most of you know best. So let's say the goal is to distinguish between cars and motorcycles. Right, so in the standard, like 30-year-old supervised learning setting, you need to come up with a large training set of a", "start_timestamp": "00:17:21", "end_timestamp": "00:17:52", "start_second": 1041, "end_second": 1072, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1041s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "lot of cars and a lot of motorcycles. OK. Oh, about 10, 15 years ago, people like Andrew McCallum, Tom
Mitchell, or maybe even others before them, started to talk about semi-supervised learning, the idea of using unlabeled data, and that was exciting, but in semi-supervised learning as typically conceived, um, you know, the ability to use unlabeled data is great, but the unlabeled data is typically still all images of cars and motorcycles, and it turns out that this sort of semi-supervised learning model is not", "start_timestamp": "00:17:52", "end_timestamp": "00:18:28", "start_second": 1072, "end_second": 1108, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1072s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "widely used, because it turns out that, um, you know, rarely do you have a data set where all the images are either cars or motorcycles and nothing else, with only the labels missing. So this is kind of useful but isn't widely used, whereas in, um, what I call self-taught learning, the goal is to take, you know, totally random images, that may be cars, may be motorcycles, may be totally other random things, and somehow use these to learn to distinguish the cars and motorcycles, and one way", "start_timestamp": "00:18:28", "end_timestamp": "00:19:02", "start_second": 1108, "end_second": 1142, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1108s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "I like to think about it is that, you know, the first time that a child sees a new object, or someone invents a new vehicle, right, the first time that you and I saw a Segway, we learned to recognize the Segway very quickly, just from seeing it once, and I think the reason that we
learn to recognize a Segway very quickly is because your and my visual systems prior to that had had several decades of experience looking at random natural images, just seeing the world, and it was by looking at these random unlabeled images", "start_timestamp": "00:19:02", "end_timestamp": "00:19:35", "start_second": 1142, "end_second": 1175, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1142s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "that allowed your and my visual systems to learn enough about the structure of the world to come up with better features, if you will, so that the first time you saw a Segway you very quickly learned to recognize what a Segway is. Right, so just to make sure you've got this concept, could you please look at question four and just map this to a new example. And so someone called out the answers for the first part, second part and third part; all right, that was easy. Cool, so how do you actually do this in", "start_timestamp": "00:19:35", "end_timestamp": "00:20:11", "start_second": 1175, "end_second": 1211, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1175s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "order to come up with an algorithm to learn features? Let's turn one last time to biological motivation. It turns out that when your brain gets an image, the first thing it does is look for edges in the image. Right, so the first stage of visual processing in the brain is called visual cortical area V1, which I think might have been mentioned yesterday, and the first thing it does is look for edges or lines; I'm going to use the terms lines and edges interchangeably. So in your brain right
now there's probably a neuron that is looking for a 45 degree", "start_timestamp": "00:20:11", "end_timestamp": "00:20:38", "start_second": 1211, "end_second": 1238, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1211s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "line, a 45-degree edge like the one shown on the left, with a dark region next to a bright region, and there's probably a different neuron in your brain right now that's looking for a vertical line like this one right here. OK, so how can we get our software to maybe mimic the brain and also find edges like this? What we don't want to do is code this up by hand, because, you know, what I don't want to do is ask the neuroscientists and then work really hard to hand engineer software that replicates it; I think what's much more", "start_timestamp": "00:20:38", "end_timestamp": "00:21:12", "start_second": 1238, "end_second": 1272, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1238s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "interesting is that we can have an algorithm learn these things by itself, and there is such an algorithm, a very old one, like a, what, 16-year-old result now, due to Olshausen and Field, called sparse coding. You talked about this a bit yesterday, did you? Right, cool, so I'll go through this very quickly. Sparse coding was originally conceived as a theoretical neuroscience model, so, you know, Bruno Olshausen will tell you, right, he never envisioned that this would be used as a machine learning algorithm; this is like a theoretical", "start_timestamp": "00:21:12", "end_timestamp": "00:21:42", "start_second": 1272, "end_second": 1302, "url": 
"https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1272s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "neuroscience result used to try to explain, you know, computations in the brain or something like that. Right, and this is how the algorithm works; it's an unsupervised learning algorithm. The way it works is you feed it a set of m images x(1), x(2) up to x(m), where each input example is, let's say, an n by n matrix, like a 14 by 14 image patch. What sparse coding does is learn a dictionary of basis functions phi 1, phi 2 up to phi k, such that each of your training images x can be approximated as a linear combination of the basis functions, x equals the sum over j of a j times phi j, subject to the constraint that the a j's are mostly 0", "start_timestamp": "00:21:42", "end_timestamp": "00:22:21", "start_second": 1302, "end_second": 1341, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1302s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "sparse. And the way this is implemented is with an L1 penalty, where we, you know, minimize the sum of absolute value terms on the coefficients a j, OK, the sparsity penalty term. And so if you do this, then, by the way, I think this is the only equation I have for this first hour, so I hope you enjoyed it. So, same thing in pictures: if you train sparse coding on natural images, every single time you run it, it'll learn a set of basis functions that look a lot like the edge detectors that, you know, we believe visual cortical", "start_timestamp": "00:22:21", "end_timestamp": "00:23:05", "start_second": 1341, "end_second": 1385, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1341s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": 
"https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "area V1 is looking for. And then given a test example, give it a test image x, what it will do is select out, let's say, three out of my 64 basis functions here, and it will take that test example and explain it, or decompose it, into a linear combination of, in this case, just three out of 64 of my basis functions. OK, so speaking loosely, this algorithm has, quote, invented edge detection, right; the algorithm is free to choose absolutely any basis functions it wants, but, you know, if you run it", "start_timestamp": "00:23:05", "end_timestamp": "00:23:41", "start_second": 1385, "end_second": 1421, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1385s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "every time, it chooses to learn basis functions that look like these edges. And what this decomposition says is that this image x is 0.8 times edge number 36 plus 0.3 times edge number 42 plus 0.5 times edge number 63. OK, so if you will, this now decomposes the image in terms of what edges appear in this image, and this gives a higher-level, more succinct, more compact representation of the image, and also probably a more useful one, right, because it's more useful to know where the edges are in the image than to know what the", "start_timestamp": "00:23:41", "end_timestamp": "00:24:20", "start_second": 1421, "end_second": 1460, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1421s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "pixels are. Moreover, this gives us an alternative way to represent the image: instead of representing
the image patch using a list of 196 pixel values we can instead use this vector of numbers a 1 through a 64 these are the coefficients multiplying into the basis functions just a few more examples so the method in a sense learns to represent an image in terms of the edges that appear in it and it turns out that neuroscientists have done quantitative comparisons between sparse coding and visual cortical", "start_timestamp": "00:24:20", "end_timestamp": "00:25:02", "start_second": 1460, "end_second": 1502, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1460s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "area V1 and found that you know it's by no means a perfect explanation of visual area V1 but it matches surprisingly well on not all but on many dimensions so that's vision how about other input modalities so this is a slide I got from Evan Smith from his PhD thesis work with Michael Lewicki so what Evan did was he applied sparse coding to audio data and what I've shown here is 20 basis functions learned by sparse coding when trained on natural sounds ok so this is a grid of 5 by 4 you know 5 by 4 audio", "start_timestamp": "00:25:02", "end_timestamp": "00:25:43", "start_second": 1502, "end_second": 1543, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1502s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "clips I guess audio basis functions so these 20 basis functions learned by sparse coding um what he did was he then went to the cat auditory system since the biologists in Boston had been you know using electrode recordings to figure out what early auditory
processing in a cat does and for each of these 20 things learned by his algorithm he found the closest match in the biological data and the closest matches are shown over there in red ok so the same algorithm that on the one hand gives you know an explanation for early", "start_timestamp": "00:25:43", "end_timestamp": "00:26:21", "start_second": 1543, "end_second": 1581, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1543s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "visual processing and on the other hand gives maybe a you know by no means perfect but surprisingly good explanation for early auditory processing as well and it turns out you can do a similar study on early somatosensory processing as well this is work done by Andrew Saxe at Stanford where he collected touch data how do you collect touch data right so the way that Andrew Saxe did it was um you know so we hold things in our hands all the time when I'm holding this thing but", "start_timestamp": "00:26:21", "end_timestamp": "00:26:52", "start_second": 1581, "end_second": 1612, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1581s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "how do you how do you actually collect data for how I'm holding it so the way Andrew Saxe did it was um he took a glove and he took an object and he sprayed talcum powder all over the object and then when you take a glove and you hold this object and then you let go the pattern of talcum powder you know on your glove tells you where you came into contact with the object and moreover the density of talcum powder actually corresponds a little bit to the
pressure and I'm not sure why he did this but um he didn't actually", "start_timestamp": "00:26:52", "end_timestamp": "00:27:26", "start_second": 1612, "end_second": 1646, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1612s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "found so so what type of objects do people hold well we don't know so when you're collecting data you want it to be representative of what animals do so fortunately it turns out that there were two biologists that had spent about a year of their lives sitting on some island watching monkeys and carefully documenting every single way that monkeys pick up different things so thank God I'm a computer scientist right and so Andrew Saxe you know took that distribution of data and he wearing his glove picked up objects using the", "start_timestamp": "00:27:26", "end_timestamp": "00:28:01", "start_second": 1646, "end_second": 1681, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1646s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "same distribution of grasps as was documented in these monkeys on an island oh and that was his data um I think that story was pretty fun but totally unnecessary but anyway so training training on data like this it turns out that the basis functions you learn using sparse coding are I should say by no means a perfect match to what is known to what is believed to happen in somatosensory cortex but they're a surprisingly good match on many dimensions right so that's sparse coding", "start_timestamp": "00:28:01", "end_timestamp": "00:28:43", "start_second": 1681, "end_second": 1723, "url":
"https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1681s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "um and let me could you take could you do questions five and six on the handout so what was the answer for five and six wait what was it for six again okay come on all right all right cool so yes the basis functions are the same for every image but it is the coefficients that vary to capture the features of the specific image okay um all right great so that's sparse coding and it turns out that there are different ways to implement you know sparse coding and what I just talked about was maybe the", "start_timestamp": "00:28:43", "end_timestamp": "00:29:36", "start_second": 1723, "end_second": 1776, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1723s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "original way by Olshausen and Field in 1996 there are different ways now I think Yann earlier talked about encoder decoder architectures I'll talk a bit more about that later today but I think this intuition of learning sparse features has been kind of key it's one of the ideas I guess that allows us to learn very useful features even from unlabeled data I'll come back to this later as well there are other ways to do it if any of you are familiar with ICA actually how many of you have heard of", "start_timestamp": "00:29:36", "end_timestamp": "00:30:08", "start_second": 1776, "end_second": 1808, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1776s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail":
"https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "the ICA independent components analysis oh cool all of you awesome so it turns out that there's a you know a deep mathematical relationship between ICA and sparse coding it turns out the two algorithms are doing something very similar for me personally these days I tend to use the ICA version of sparse coding rather than the version I just talked about but later today I'll also talk about sparse autoencoders different ways of learning sparse features we'll get to that later but so what do you do instead of what we just", "start_timestamp": "00:30:08", "end_timestamp": "00:30:41", "start_second": 1808, "end_second": 1841, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1808s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "described is you know one layer of one of these sparse feature learning algorithms maybe sparse coding maybe a sparse autoencoder maybe a sparse DBN or sparse RBM and it turns out what you can do really building on Geoff Hinton's work what you can do is recursively apply this procedure where instead of just going from pixels to images excuse me pixels to edges you can recursively apply this procedure and you know just as you can group together pixels to form edges you can group together edges to form combinations of edges and group together", "start_timestamp": "00:30:41", "end_timestamp": "00:31:17", "start_second": 1841, "end_second": 1877, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1841s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "combinations of edges to form higher level features so let me show an example that this is
an example run by Honglak Lee actually he's now a Michigan professor but what Honglak did was he trained a multi-layer sparse DBN and in the first layer you know the algorithm groups together pixels to form edges another level up it learns to combine edges to form models of object parts right so this I should say this was an example of sparse coding trained just on pictures of faces so the entire dataset was pictures of faces right and then recursively you apply", "start_timestamp": "00:31:17", "end_timestamp": "00:31:55", "start_second": 1877, "end_second": 1915, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1877s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "this at the next level up and you get some more complete models of faces so let me make sure that this visualization makes sense right um when I have this little square here shown here what this little square means is that I have learned a neuron in the first level that is looking for a vertical edge like that one okay going one level up and I've shown all these rectangles the same size but higher up features are actually looking at bigger regions of the image okay I've just drawn them all the same size but one", "start_timestamp": "00:31:55", "end_timestamp": "00:32:29", "start_second": 1915, "end_second": 1949, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1915s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "level up this is actually looking at a bigger region of the image but one level up you know this rectangle here means that at that next level one of the neurons has learned to detect eyes that look like that great and then at the highest level you know if you look at
the upper leftmost square say what that visualization is showing is that there's a neuron that has learned to detect faces that look a little bit like that person okay if you train the same algorithm on different object classes you end up with different decompositions of different", "start_timestamp": "00:32:29", "end_timestamp": "00:33:05", "start_second": 1949, "end_second": 1985, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1949s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "object classes into different object parts then more complete models of objects if you train the algorithm on a mix of four different classes of objects so there's an algorithm trained on a data set that includes cars faces bikes and airplanes then you know you end up with at the mid level you get features that are shared among the different object classes where I don't know I guess cars and motorbikes both have wheel and tire like shapes so you get features that are kind of shared between multiple object classes and then at the", "start_timestamp": "00:33:05", "end_timestamp": "00:33:37", "start_second": 1985, "end_second": 2017, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1985s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "highest level you get object specific features okay yeah is there any sort of invariance in these yes there is there's a point so because of the nature of the visualization I showed them as though they were images but yeah there's some amount of invariance that's hard to visualize oh I remember I have a better example later today where we more carefully document the invariances I have a better example later okay so what is this good for
when you when you hear a researcher in deep learning like me talk you see", "start_timestamp": "00:33:37", "end_timestamp": "00:34:21", "start_second": 2017, "end_second": 2061, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=2017s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "people like Yann and me and Geoff Hinton tell these success stories but you know so you can learn features so what is it good for well it turns out the Hollywood2 benchmark is a standard benchmark in computer vision where the task is to watch a short video clip and decide whether you know any of a small number of activities took place in this video you know whether two people kiss or hug or there's driving or eating or running there's a list of activities like that the field of computer vision has tried out many different combinations of", "start_timestamp": "00:34:21", "end_timestamp": "00:34:49", "start_second": 2061, "end_second": 2089, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=2061s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "features last year a student at Stanford found that by learning rather than hand engineering the features he was able to significantly outperform the previous state of the art all right how about audio it turns out you can apply similar ideas to audio so this is a spectrogram which is a different representation for audio you can take slices of spectrograms and apply sparse coding to that it turns out if you do this then this is a dictionary of basis functions learned for speech I guess I'm not an expert in speech but this is probably a", "start_timestamp": "00:34:49", "end_timestamp": "00:35:25", "start_second": 2089,
"end_second": 2125, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=2089s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "slightly optimistic reading of these you know the basis functions learned by sparse coding correspond roughly to phonemes that's a slightly optimistic interpretation I should say and so under this slightly optimistic interpretation we can say informally that sparse coding has learned to decompose speech data you know very loosely into the phonemes that appear in the speech and moreover you can recursively apply this idea just as we saw earlier to build higher and higher level features and I guess a few years ago", "start_timestamp": "00:35:25", "end_timestamp": "00:36:01", "start_second": 2125, "end_second": 2161, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=2125s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "so the TIMIT benchmark is a data set that many speech researchers work on this is one of those datasets where you know if you do point 1 percent better you write a paper um and a few years ago Honglak was able to you know make what corresponded to I think we worked out something like two thirds of a decade's worth of progress or something on this data set just by learning features this chart is outdated I made this chart I think back when Honglak was publishing this paper since publishing this paper geoff hinton and", "start_timestamp": "00:36:01", "end_timestamp": "00:36:39", "start_second": 2161, "end_second": 2199, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=2161s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature
Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "others have surpassed this also using deep learning techniques right um and then as I was preparing this talk I asked my students to help me put together a chart of the recent results where you know we or others hold the state of the art benchmark result using deep learning and there were surprisingly many of them from us at Stanford and from other groups and I say yeah I've worked on machine learning for a long time I've never in my life seen any one technology knock over benchmarks like this quickly this is the", "start_timestamp": "00:36:39", "end_timestamp": "00:37:19", "start_second": 2199, "end_second": 2239, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=2199s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "whole view the deep learning is like knocking over benchmarks like nobody's business um there's actually a lot more than fits on one slide I think if I put all the ones I'm aware of it'd be about three slides like this what's left to be done right so and I know that some of you are you know here because uh you want to learn how to apply these things and I know some of you are here because you might be even interested in doing research yourselves and writing research papers yourselves in deep learning and feature learning so I want to share with", "start_timestamp": "00:37:19", "end_timestamp": "00:37:49", "start_second": 2239, "end_second": 2269, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=2239s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "you I'll do this later to talk more about
the state of the art later as well I'll share with you what I think of as one of many promising directions in which to you know take research for deep learning I think that's scaling up so how do we build effective deep learning algorithms right how do you get these algorithms to work well well in fact how do you build effective machine learning algorithms you know so let's look back in history right about 20 years ago oh there were these debates about you know", "start_timestamp": "00:37:49", "end_timestamp": "00:38:21", "start_second": 2269, "end_second": 2301, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=2269s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "there were these different supervised learning algorithms so no feature learning no self-taught learning just these supervised learning algorithms and there used to be all these debates about you know is your algorithm better is my algorithm better so um Michele Banko and Eric Brill did one of the studies that most influenced my thinking where they took maybe four of the state of the art learning algorithms of the day I guess back in 2001 SVMs were not yet popular so they didn't actually study SVMs but they took a natural", "start_timestamp": "00:38:21", "end_timestamp": "00:38:52", "start_second": 2301, "end_second": 2332, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=2301s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "language processing task on which they had an effectively unlimited source of labeled data and they trained four learning algorithms and plotted on the x-axis is the training set size and on the y-axis is the performance the accuracy all the algorithms do
about the same given the amount of data you have and even a quote superior algorithm often loses to a quote inferior algorithm if only you can give the inferior algorithm more data to train on yeah so I think it's results like these that have led to this maxim in machine learning", "start_timestamp": "00:38:52", "end_timestamp": "00:39:30", "start_second": 2332, "end_second": 2370, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=2332s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "that you know says that often it's not who has the best algorithm that wins it's who has the most data and I definitely see this over and over in Silicon Valley if you look at think about the most commercially successful websites you know the ones making large amounts of money that you use every day many of those algorithms are incredibly simple like logistic regression but the secret is that those algorithms are fit to far more data than anyone else has so this is supervised learning um how about unsupervised learning so", "start_timestamp": "00:39:30", "end_timestamp": "00:40:10", "start_second": 2370, "end_second": 2410, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=2370s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "Adam Coates who helped prepare this handout about a year and a half ago did this interesting study right where he took all of the unsupervised feature learning algorithms of the day that you know guys like us I guess debate is my algorithm better is your algorithm better and he took a bunch of these algorithms and ran all of them and varied the model size so for unsupervised feature learning all
of us have a large amount of data right if you know you're learning from unlabeled", "start_timestamp": "00:40:10", "end_timestamp": "00:40:46", "start_second": 2410, "end_second": 2446, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=2410s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "images from natural images um you have an infinite amount of data right and so the parameter they vary is not the amount of data it's the size of the model that is how many features do you learn in the example we had 64 coefficients a 1 through a 64 for sparse coding but let's set that bigger let's learn a thousand features instead or 10,000 whatever let's learn a much larger number of features and what Adam found was that you know the algorithm does matter maybe it matters more than", "start_timestamp": "00:40:46", "end_timestamp": "00:41:20", "start_second": 2446, "end_second": 2480, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=2446s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "with supervised learning because these algorithms are less mature but there can be sort of this silver result where the bigger the model the better it does and in fact one interesting historical aside so on CIFAR we actually went back and historically traced you know I mean we like to publish papers saying my algorithm's better than yours yours is better than mine whatever right we went back and traced all the sequence of papers where you know person A published a result on CIFAR person B published a paper saying oh I did better then person C published", "start_timestamp": "00:41:20", "end_timestamp": "00:41:52",
"start_second": 2480, "end_second": 2512, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=2480s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "saying they did even better and then the next one says oh I do even better and so on we traced a couple of sequences of that of supposed advances in the benchmark that were supposedly my algorithm does better than yours no I do better and we believe that a lot of those results of the supposed progress was actually because the models got bigger right it's not that my algorithm is actually better it's just I don't know Moore's law had more time to work and so I trained mine bigger and so I'm going to write a paper saying", "start_timestamp": "00:41:52", "end_timestamp": "00:42:23", "start_second": 2512, "end_second": 2543, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=2512s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "my algorithm's better in the stuff I've done the one most reliable way to get better results has been to train a bigger model if I change the algorithm sometimes it makes it better sometimes not but in fact if you look at the literature I feel like a lot of work by a lot of different research groups has been in some ways on trying to get these models to just train bigger right so in this world of unsupervised learning where all of us have an infinite amount of data you know I feel like we're not limited by what data we", "start_timestamp": "00:42:23", "end_timestamp": "00:42:57", "start_second": 2543, "end_second": 2577, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=2543s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"",
"thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "have we're much more limited by our ability to process the infinite amount of data that all of us have alright so you know many attempts to come up with more efficient algorithms parallelization Yann's done very cool work on FPGA implementation oh and I think I'm going to take credit for bringing GPUs to the deep learning world and so on as well work like this um and in fact looking at this chart my personal interpretation which others will disagree with is that those results were achieved to a very large part", "start_timestamp": "00:42:57", "end_timestamp": "00:43:33", "start_second": 2577, "end_second": 2613, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=2577s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "because of scalability issues right but this is my personal interpretation which others may disagree with could you go through questions 7 and 8 on the handout so question 7 which ones did you check off 2 and 3 cool I'll take your word for it and for question 8 oh and actually I checked off everything except DNA computing that's my answer anyway yeah I thought of throwing quantum in there too but I think someone actually is working on quantum computing yes yeah all right cool so um let's see you know", "start_timestamp": "00:43:33", "end_timestamp": "00:44:21", "start_second": 2613, "end_second": 2661, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=2613s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "what there's something else I could talk about I think I'll do that
towards the end um so you know just to wrap up this piece I talked about the high level vision of learning rather than manually designing our features but again kind of for me you know this isn't just about machine learning anymore this is I feel like um can we really learn something about AI especially perceptual AI AI and human intelligence is very broad I think you know we're starting to get a handle on maybe the perceptual part of AI", "start_timestamp": "00:44:21", "end_timestamp": "00:44:50", "start_second": 2661, "end_second": 2690, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=2661s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "pfFyZY1RPZU", "text": "which is maybe like you know 40 to 60 percent of many animal brains right so this big part of the brain and so what I'd like to do is say you know thank you for your attention and for your patience I hope that was somewhat fun what I'd like to do is let's break and later on in the next couple sessions we'll dive slightly deeper into technical details go over the basics talk about neural networks and build up the algorithms also to point out that you know later today or", "start_timestamp": "00:44:50", "end_timestamp": "00:45:23", "start_second": 2690, "end_second": 2723, "url": "https://www.youtube.com/watch?v=pfFyZY1RPZU&t=2690s", "title": "Andrew Ng: \"Deep Learning, Self-Taught Learning and Unsupervised Feature Learning\"", "thumbnail": "https://i.ytimg.com/vi/pfFyZY1RPZU/maxresdefault.jpg"} {"video_id": "GJBOMWpLpTQ", "text": "have you ever thought about joining a Kaggle competition but don't really know where to start well you're in the right place I'm Kaggle data scientist Rachael Tatman and today I'm gonna show you how to enter a competition alright to start off with we're gonna pick a
competition and I already know I want to enter the housing prices advanced regression techniques so before I join I'm probably gonna want to read through the rules but I actually read through them earlier and you can do that on your own so I've accepted the rules for this", "start_timestamp": "00:00:00", "end_timestamp": "00:00:36", "start_second": 0, "end_second": 36, "url": "https://www.youtube.com/watch?v=GJBOMWpLpTQ&t=0s", "title": "How to Enter a Kaggle Competition (using Kernels) | Kaggle", "thumbnail": "https://i.ytimg.com/vi/GJBOMWpLpTQ/maxresdefault.jpg"} {"video_id": "GJBOMWpLpTQ", "text": "competition and I've joined the competition let's find a kernel to work off of so let's see kernels that people have recently run so looking through here oh look here's an example submission rachel is that you yeah that's me I did I made this earlier so I'm going to click on this example submission here and this is gonna walk me through all of the steps I need in order to enter a Kaggle competition so rather than just reading through it here I'm actually gonna copy and edit this kernel so I have my own copy that I can", "start_timestamp": "00:00:36", "end_timestamp": "00:01:14", "start_second": 36, "end_second": 74, "url": "https://www.youtube.com/watch?v=GJBOMWpLpTQ&t=36s", "title": "How to Enter a Kaggle Competition (using Kernels) | Kaggle", "thumbnail": "https://i.ytimg.com/vi/GJBOMWpLpTQ/maxresdefault.jpg"} {"video_id": "GJBOMWpLpTQ", "text": "work in it's gonna take just a second to spin up when you enter a competition there's a couple of basic steps you need to do first you need to train your model I'm doing all of this in Kaggle Kernels which means that our code is going to be run on Kaggle rather than on my local computer so this competition this kernel already has a lot of the information we need in it so we are reading in some helpful libraries this is using R you can also write a kernel in Python if you prefer loading in my data and I'm setting a seed for
reproducibility from", "start_timestamp": "00:01:14", "end_timestamp": "00:01:49", "start_second": 74, "end_second": 109, "url": "https://www.youtube.com/watch?v=GJBOMWpLpTQ&t=74s", "title": "How to Enter a Kaggle Competition (using Kernels) | Kaggle", "thumbnail": "https://i.ytimg.com/vi/GJBOMWpLpTQ/maxresdefault.jpg"} {"video_id": "GJBOMWpLpTQ", "text": "here I want to actually train my model so it looks like this person it's me has already started training a very simple model we're doing a test train split for cross-validation so I can check that my models not overfitting and then there's a little bit of pre-processing we're taking the training data we're removing the column that we are trying to predict to sale price and then he are we are converting strings to factors and then we are doing some label encoding so this is just taking categorical factors like I don't know", "start_timestamp": "00:01:49", "end_timestamp": "00:02:23", "start_second": 109, "end_second": 143, "url": "https://www.youtube.com/watch?v=GJBOMWpLpTQ&t=109s", "title": "How to Enter a Kaggle Competition (using Kernels) | Kaggle", "thumbnail": "https://i.ytimg.com/vi/GJBOMWpLpTQ/maxresdefault.jpg"} {"video_id": "GJBOMWpLpTQ", "text": "zoning and turning that in to a number so that we can create a matrix and from there we can use our matrix as input to an XG boost model so let's train our model really quick and if we look at the mean average error which is what this competition is going to be graded on I can see that it looks like it's actually still going down as we continue to do lots of boosting rounds so this is an X G boost model and the number of rounds you train it for will affect how much complexity in the data you can model and let's make sure that we're not", "start_timestamp": "00:02:23", "end_timestamp": "00:03:04", "start_second": 143, "end_second": 184, "url": "https://www.youtube.com/watch?v=GJBOMWpLpTQ&t=143s", "title": "How to Enter a Kaggle Competition (using Kernels) | 
Kaggle", "thumbnail": "https://i.ytimg.com/vi/GJBOMWpLpTQ/maxresdefault.jpg"} {"video_id": "GJBOMWpLpTQ", "text": "overfitting by running the same model on some of our held-out data so doing some cross validation and it looks like the test error is actually smaller than the Train error so what I'm gonna do is just increase the number of training rounds and see if that's still the case all right yeah and it looks like it is so I think just by increasing the number of training rounds I should get a better performance than this kernel originally had so once we have trained our model we can actually make our predictions so our", "start_timestamp": "00:03:04", "end_timestamp": "00:03:42", "start_second": 184, "end_second": 222, "url": "https://www.youtube.com/watch?v=GJBOMWpLpTQ&t=184s", "title": "How to Enter a Kaggle Competition (using Kernels) | Kaggle", "thumbnail": "https://i.ytimg.com/vi/GJBOMWpLpTQ/maxresdefault.jpg"} {"video_id": "GJBOMWpLpTQ", "text": "predictions here are going to be based on this test CSV which if we look at our data set we can see that the Train CSV has 81 columns and the test CSV has 80 columns that's because the test data set has all of the features that you would need but it does not have the feature that you're trying to predict so I don't know what the answers are over here that's what I'm trying to guess I will take these this test data that doesn't have the target column I'm gonna do the exact same pre-processing as I did before and then I'm gonna make some", "start_timestamp": "00:03:42", "end_timestamp": "00:04:21", "start_second": 222, "end_second": 261, "url": "https://www.youtube.com/watch?v=GJBOMWpLpTQ&t=222s", "title": "How to Enter a Kaggle Competition (using Kernels) | Kaggle", "thumbnail": "https://i.ytimg.com/vi/GJBOMWpLpTQ/maxresdefault.jpg"} {"video_id": "GJBOMWpLpTQ", "text": "predictions using the model that I've trained up here to several more rounds than original submission and then I am going to make predictions 
using this test data matrix and I'm going to save it as a CSV so what I'm saying here is I want a data frame with two columns one column is called ID and this has the ID information from the test data and one column is called sale price and this has my predictions from the submission prediction and I know that those are the columns that I need because if I look at the sample submission here I have the", "start_timestamp": "00:04:21", "end_timestamp": "00:04:58", "start_second": 261, "end_second": 298, "url": "https://www.youtube.com/watch?v=GJBOMWpLpTQ&t=261s", "title": "How to Enter a Kaggle Competition (using Kernels) | Kaggle", "thumbnail": "https://i.ytimg.com/vi/GJBOMWpLpTQ/maxresdefault.jpg"} {"video_id": "GJBOMWpLpTQ", "text": "columns ID and sale price and if I don't have the right column names here when I go to submit my file I will get an error all right once I have made my predictions I'm going to write those predictions to a CSV file so I'm saving a file out that has all of my submission information in it from here I need to commit my notebook so committing runs all of your code top to bottom and it creates a stable version that you can refer back to later so if I'm making multiple submissions I can go back and I can look at the", "start_timestamp": "00:04:58", "end_timestamp": "00:05:35", "start_second": 298, "end_second": 335, "url": "https://www.youtube.com/watch?v=GJBOMWpLpTQ&t=298s", "title": "How to Enter a Kaggle Competition (using Kernels) | Kaggle", "thumbnail": "https://i.ytimg.com/vi/GJBOMWpLpTQ/maxresdefault.jpg"} {"video_id": "GJBOMWpLpTQ", "text": "committed version I used for that specific submission and it looks like it's complete so let's open the version and here if I scroll down you can see at the bottom we have output files so I have one output file submission CSV that has the predictions that I saved out in my kernel and I'm going to submit this file to the competition ah and it's submitted and the submission is complete and it 
looks like my score is 0.16 and if I jump to my position on the leaderboard I might have to wait a second for that to be that's to be done but you can see", "start_timestamp": "00:05:35", "end_timestamp": "00:06:19", "start_second": 335, "end_second": 379, "url": "https://www.youtube.com/watch?v=GJBOMWpLpTQ&t=335s", "title": "How to Enter a Kaggle Competition (using Kernels) | Kaggle", "thumbnail": "https://i.ytimg.com/vi/GJBOMWpLpTQ/maxresdefault.jpg"} {"video_id": "GJBOMWpLpTQ", "text": "that I have my score I've made my submission and from here all I need to do is keep trying new things and improving my model and then making new submissions as I think the things that I did will improve my overall model so that's it that's all you need to do to enter a kegel competition obviously I wrote some of the code off-screen we're all so of course welcome to use the kernels that people have made public for a various for specific competition and also check out the discussion and see what people are talking about about that", "start_timestamp": "00:06:19", "end_timestamp": "00:06:52", "start_second": 379, "end_second": 412, "url": "https://www.youtube.com/watch?v=GJBOMWpLpTQ&t=379s", "title": "How to Enter a Kaggle Competition (using Kernels) | Kaggle", "thumbnail": "https://i.ytimg.com/vi/GJBOMWpLpTQ/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "hi there check out these clusters of images right here and just have a look at how all of them are pretty much showing the same object so here's balloons here's birds here's sharks or other fish these are from images from the image net data set and you can see that these clusters are pretty much the object classes themselves so there's all the frogs right here here all the all the people that have caught fish so this the astonishing thing about this is that these clusters have been obtained without any labels of the image net", "start_timestamp": "00:00:00", "end_timestamp": "00:00:41", "start_second": 0, "end_second": 
41, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=0s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "dataset of course the dataset has labels but this method doesn't use the labels it learns to classify images without labels so today we're looking at this paper learning to classify images without labels by Wouter Van Gansbeke Simon Vandenhende Stamatios Georgoulis Marc Proesmans and Luc Van Gool and on a high level overview they have a three-step procedure basically first they use self supervised learning in order to get good representations second they do a clustering so they do a sort of k", "start_timestamp": "00:00:41", "end_timestamp": "00:01:29", "start_second": 41, "end_second": 89, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=41s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "nearest neighbor clustering but they do clustering on top of those things but they're doing it in a kind of special way and then third they do a refinement through self labeling so if you know what all of these are you basically understand the paper already but there are a few tricky steps in there and it's pretty cool that at the end it works out like you just saw so before we dive in as always if you're here and not subscribed then please do and if you liked the video share it out and leave a comment if you feel like commenting cool", "start_timestamp": "00:01:29", "end_timestamp": "00:02:13", "start_second": 89, "end_second": 133, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=89s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "so as we already stated the problem they ask is it possible to automatically classify images without the use of ground truth annotations or even when the classes themselves are not known a priori now you might think that this is outrageous how can you classify when you don't even know what the classes are and so on so the way you have to imagine it going forward and they don't explicitly explain it but it's sort of assumed is that if you have a data set and you learn to", "start_timestamp": "00:02:13", "end_timestamp": "00:02:52", "start_second": 133, "end_second": 172, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=133s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "classify it what that basically means is you cluster it right you put some of the data points in the same clusters okay and then of course the data set I'm gonna draw the same data set right here the same data set would have an actual classification thing so this would be class zero this here maybe class one and this here might be class 2 now you can't possibly know what the classes are called or something which one is the first which one is the second so at test time basically if you have a method like", "start_timestamp": "00:02:52", "end_timestamp": "00:03:28", "start_second": 172, "end_second": 208, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=172s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "this that doesn't use labels what you're going to do is you're going to be as generous as possible in the assignment of these and say look if I assign this here to cluster zero and this here to cluster 2 and this here to cluster 1 and I just you know carry over the labels what would my accuracy be under that labeling so you're as generous as possible with the assignments of the labels so that's how it's going to work right that's the way you have to keep in mind we're basically developing an algorithm", "start_timestamp": "00:03:28", "end_timestamp": "00:04:04", "start_second": 208, "end_second": 244, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=208s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "that gives us this kind of clustering of the data and then if that clustering partitions the data in the same way as the actual labeling would the actual labeling with the test labels then we think it's a good algorithm okay so they claim in this paper we deviate from recent works and advocate a two-step approach and it's actually a three step approach where feature learning and clustering are decoupled okay why is that so they argue what people have done is and I'm going", "start_timestamp": "00:04:04", "end_timestamp": "00:04:46", "start_second": 244, "end_second": 286, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=244s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "to well this is just a wall of text so what you could do is you could just basically cluster the data like who says you can't use clustering algorithms but then the question is what do you cluster them by like you need a distance so if I have points in 2d it sort of makes sense to use the Euclidean distance here but if I have images of cats and dogs and whatnot then the Euclidean distance between the pixels is really not a good thing but also so you might think we could use a deep neural network and", "start_timestamp": "00:04:46", "end_timestamp": "00:05:24", "start_second": 286, "end_second": 324, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=286s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "then basically send the image that's the image right here through the deep neural network and then either take this last state right here so it goes through and through and through and we could take either of the hidden states or we could just take you know the last state that is the sort of hidden representation right here and do the clustering with that but then of course the question is which neural network do you take how do you train that neural network and there have been a few approaches such as", "start_timestamp": "00:05:24", "end_timestamp": "00:05:57", "start_second": 324, "end_second": 357, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=324s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "DeepCluster which tries to formulate basically an objective for that neural network where first you send a bunch of images through to get points in embedding space and then in embedding space you think well the features that are in the embedding space they are somehow latent and if this neural network was used to classify images you would have a classification head on top and a classification head this is like a five class classification", "start_timestamp": "00:05:57", "end_timestamp": "00:06:31", "start_second": 357, "end_second": 391, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=357s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "that is nothing else than a linear classifier boundary that you put on top of this hidden representation so if you were to use this neural network for classification it must be possible to draw a linear boundary between the classes and therefore things like the inner product distance or the Euclidean distance must make sense in that space even if they don't make sense in the picture space they must make sense in the hidden representation space because what you're going to do with them is exactly linear classification", "start_timestamp": "00:06:31", "end_timestamp": "00:07:07", "start_second": 391, "end_second": 427, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=391s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "the last classification head of a neural network is just a linear classifier so the conclusion is that in this space you should be able to cluster by Euclidean distance so what DeepCluster does is alternate first get the representations you start off with a random neural network then cluster these representations then basically self label the images now I'm way oversimplifying that technique right here but you have these alternating steps of clustering and then kind of", "start_timestamp": "00:07:07", "end_timestamp": "00:07:45", "start_second": 427, "end_second": 465, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=427s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "finding better representations and then clustering these representations and what it basically says is that the CNN itself is
like a prior because it's translation invariant and works very well for natural images the CNN itself will lead to good representations if we do it in this way and there's some good results there but this paper argues that if you do that then the algorithm tends to focus a lot on very low-level features so if the pixel on the bottom right here is blue right and the neural", "start_timestamp": "00:07:45", "end_timestamp": "00:08:25", "start_second": 465, "end_second": 505, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=465s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "network by chance puts two of those images with the blue pixel on the bottom right close together then in the next step because they're close together it will cluster them together and then it will basically feed back that the new representation should put the two in the same class right it will feed back that it should focus even more on that blue pixel so it's very very dependent on initializations and it can jump super easily onto these low-level features that have nothing to do with what the high level task", "start_timestamp": "00:08:25", "end_timestamp": "00:09:01", "start_second": 505, "end_second": 541, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=505s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "you're ultimately trying to solve which is to classify these images later so what this paper does is it says we can eliminate the fact that these methods will produce neural networks that focus on low-level features and how do we do that we do that by representation learning so representation learning you might know this as self supervised learning and this is the task they solve in the first step of their objective so let's go through it this right here is an", "start_timestamp": "00:09:01", "end_timestamp": "00:09:44", "start_second": 541, "end_second": 584, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=541s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "image now the T is a transformation of that image and in self supervised learning there are several ways that you can transform an image so for example you can random crop an image you can just cut out like a piece right here and scale that up to be as large as the original image or you can use for example data augmentation which means you take the image so if there is I don't know a cat right here you kind of convolve it with some things so there's like a very squiggly cat okay I'm terrible at drawing or you can", "start_timestamp": "00:09:44", "end_timestamp": "00:10:26", "start_second": 584, "end_second": 626, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=584s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "rotate it for example so it's like this okay so these are all sets including the crop sets of this transformation T so you transform it in some way and after you've transformed it you send your original image and the transformed image through a neural network each one by themselves okay and then you say the hidden representations here should be close to each other this is basically the self supervised training task it's been", "start_timestamp": "00:10:26", "end_timestamp": "00:11:11", "start_second": 626, "end_second": 671, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=626s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "shown to work very very well as a pre-training method for classification neural networks you have an image and its augmented version and you minimize the inner product or the Euclidean distance between the two versions in the hidden space and the rationale is exactly the same the rationale is that this hidden space of course should be linearly classifiable and so the distance between those should be close and the rationale behind having these tasks is that well if I flip the image", "start_timestamp": "00:11:11", "end_timestamp": "00:11:46", "start_second": 671, "end_second": 706, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=671s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "to the right it cannot focus on the pixel on the bottom right anymore because that's not going to be the pixel on the bottom right here and I'm not always going to flip it into the same direction and sometimes I'm gonna crop it so it also can't focus on the pixel on the bottom right because in the crop that pixel is like out here it's not even in the crop so basically what you're looking to do with these self supervised methods is you are looking to destroy this low level information that's all you're looking to", "start_timestamp": "00:11:46", "end_timestamp": "00:12:16", "start_second": 706, "end_second": 736, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=706s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "build a pipeline of a neural network here that deliberately destroys low level information and you do that by coming up with tasks like these self supervision tasks that deliberately exclude this information from being used I think that's what's going on generally in the self supervised learning thing okay so this here as you can see is the neural network that you train you send both images the original and the augmented version through the same neural network and then you minimize some distance which is usually like the inner product", "start_timestamp": "00:12:16", "end_timestamp": "00:12:54", "start_second": 736, "end_second": 774, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=736s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "or the Euclidean distance in this embedding space okay and what you train you can see right here are the parameters of this neural network so the transformations are fixed or sampled and the distance is fixed you train the neural network such that your embeddings minimize this task now this is nothing new this has been used for a couple of years now to get better representations in self supervised learning but they basically say we can use this as an initialization step for this clustering procedure because if we", "start_timestamp": "00:12:54", "end_timestamp": "00:13:30", "start_second": 774, "end_second": 810, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=774s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "don't do that we focus on these low-level features okay and notice you don't need any labels for this procedure that's why it's called self supervised okay so the second part is the clustering now they cluster but they don't just cluster these representations that doesn't perform very well in their experiments what they instead do is they minimize this entire objective right here and we'll go through it step by step so they train a new neural network okay this thing right here this is a new neural network so", "start_timestamp": "00:13:30", "end_timestamp": "00:14:13", "start_second": 810, "end_second": 853, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=810s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "first you already have the neural network from step one the one that gives you the embedding okay it's called phi theta it's the same architecture and I think they initialize one with the other so in step 1 you get phi theta phi theta goes from X and gives you a representation of X ok let's call it hidden X so that's via self supervised learning but in step 2 you train an entirely new neural network this phi eta here and you initialize it with this one but now you", "start_timestamp": "00:14:13", "end_timestamp": "00:14:59", "start_second": 853, "end_second": 899, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=853s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "train it to do the following again you want to minimize sorry you want to maximize the inner product right here see that's the inner product you want to maximize the inner product between two things now that's the same thing as before we want to minimize the distance between two things and with the dot product distance in that case you maximize the dot product between two things and the two things are two images that go through the same neural network as before right
this and this now what's different here is that here", "start_timestamp": "00:14:59", "end_timestamp": "00:15:33", "start_second": 899, "end_second": 933, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=899s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "input and one image of the data set that's the same as before okay so we input one image but here before in the self supervised learning we input an Augmented version of that and now we input something else we input this K right here now what's K what K comes from this neighbor set of X okay this is the set of neighbors of X and these neighbors are determined with respect to this neural network right here okay so what you do after step one is you take your neural network with the good embeddings and here is your data set X", "start_timestamp": "00:15:33", "end_timestamp": "00:16:16", "start_second": 933, "end_second": 976, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=933s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "your data set X that's really another your data set X is this list basically of all the images in your data set and what we're going to do is you're going to take all of them using that neural network that you just trained and embed them into a latent space right here okay this is the latent space where you have done this self supervised training and now for each image right here so if this is X eye you're going to find its K nearest neighbors and they use I think they use five as a benchmark so you're going to find its nearest neighbors it's", "start_timestamp": "00:16:16", "end_timestamp": "00:16:57", "start_second": 976, "end_second": 1017, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=976s", "title": "Learning 
To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "five nearest neighbors and you do this for each image so this image has these five nearest neighbors and so on so in step two what you're trying to do is you're going to try to pull together each image and its nearest neighbors in that in this this not in this space directly but you determine which ones are the nearest neighbor from this neural net where can you keep it constant that's how you determine what the nearest neighbors are in the first task and that is your NX set for X I and in the second step you're trying to make", "start_timestamp": "00:16:57", "end_timestamp": "00:17:36", "start_second": 1017, "end_second": 1056, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1017s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "the representations of any image and its nearest neighbors closer to each other okay so with with this thing right here you maximize the inner product between X in after this neural network and a nearest neighbor of X that was was a nearest neighbor after the first task now the way they cluster here is not just again by putting it into an embedding space like we saw before but this thing right here this neural network as you can see here is is a C dimensional vector in 0 1 now C is the number of classes that you can either", "start_timestamp": "00:17:36", "end_timestamp": "00:18:23", "start_second": 1056, "end_second": 1103, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1056s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "know that so you don't know which class is which you don't have labels but you could 
know how many classes there are or you could just guess how many classes there are and as long as you as you over guess you can still like build super clusters later so this they simply say it's in 0 1 but they also say it performs a soft assignment so we're also going to assume that this is normalized so for each for each data point X here you're going to you're going to have an image you're going to put it through this new neural network ok this new", "start_timestamp": "00:18:23", "end_timestamp": "00:19:00", "start_second": 1103, "end_second": 1140, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1103s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "neural network new and it's going to tell you it's going to give you basically a histogram let's say class 1 2 or 3 we guess there are 3 class and it's going to give you an assignment of the 3 and you also take a nearest neighbor here is your dataset you also take a nearest neighbor of that so you so you look for this set n of X and you take a nearest neighbor maybe that's that's a maybe that's a dog I can't I really can't draw a dog yeah that's the best I can do I'm sorry and you also put that through the same network and you're", "start_timestamp": "00:19:00", "end_timestamp": "00:19:41", "start_second": 1140, "end_second": 1181, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1140s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "saying since they were nearest neighbor and task 1 they must share some sort of interesting high level features because that's what the first task was for therefore I want to make them closer together in in the in the light of these of this neural network right here so this is also going to give you an assignment 
like maybe like this okay and now you train this network right here to basically match these two distributions okay so this is now a classifier into c classes but we guess c and we don't have labels", "start_timestamp": "00:19:41", "end_timestamp": "00:20:22", "start_second": 1181, "end_second": 1222, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1181s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "we simply say our label is going to be my neighbors from the first task must have the same labels that's our label now they say they also have this term right here which is the entropy over assignments okay as you can see so they minimize the following they minimize this quantity which has a negative in front of it so that means they maximize this log inner product and they also maximize the entropy because sorry so they minimize this thing but the entropy is a negative quantity right so they maximize the entropy", "start_timestamp": "00:20:22", "end_timestamp": "00:21:04", "start_second": 1222, "end_second": 1264, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1222s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "because here is a plus and now they minimize the entropy let's see what they say by minimizing the following objective now entropy is the negative sum of P log P and if this is P yes this is the probability that an image is going to be assigned to cluster C over the entire dataset so they're going to mmm yes so it's negative this quantity minus P log P and this is the entropy so they're going to minimize the entropy let's see what they say we include an entropy term the second term in equation two which", "start_timestamp":
"00:21:04", "end_timestamp": "00:22:01", "start_second": 1264, "end_second": 1321, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1264s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "spreads the predictions uniformly across clusters C ok so what we want is a uniform assignment over classes which means we should maximize the entropy oh yes okay they minimize this thing and this here is the negative entropy right okay so basically what they want over the whole dataset is that not all of the images are going to be in the same cluster well this is cluster one and then this is cluster two and then this is cluster three so that term counteracts that basically the more evenly spread the", "start_timestamp": "00:22:01", "end_timestamp": "00:22:44", "start_second": 1321, "end_second": 1364, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1321s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "entire dataset distribution is the higher the entropy the lower the negative entropy and that's the goal right here I'm sorry I was confused by the too many negative signs and then you minimize the entire thing all right now they say a different thing right here they say here this bracket denotes the dot product operator as we saw it's the dot product between these two distributions right here the first term in equation two imposes this neural network to make consistent predictions for a sample X I", "start_timestamp": "00:22:44", "end_timestamp": "00:23:22", "start_second": 1364, "end_second": 1402, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1364s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail":
"https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "and its neighboring samples the neighbors of X I and here is an interesting thing note that the dot product will be maximal when the predictions are one hot that means confident and assigned to the same cluster consistent so they basically say the objective encourages confidence because it encourages predictions to be one hot and it encourages consistency because you know the distributions need to be the same they should be in the same cluster right now I agree with the consistency like if you make the inner", "start_timestamp": "00:23:22", "end_timestamp": "00:23:59", "start_second": 1402, "end_second": 1439, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1402s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "product high between two of these histograms of course they'll look the same right because these are ultimately vectors these are three-dimensional vectors let's call them two-dimensional vectors right so here is class one here is class two if you you know make the inner product small or high they will agree on their predictions but I disagree that this encourages anything to be one hot like if you have two vectors that are both zero one times zero one the inner product is going to be one and if you", "start_timestamp": "00:23:59", "end_timestamp": "00:24:34", "start_second": 1439, "end_second": 1474, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1439s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "have two assignments that are 0.5 and 0.5 then it is also going to result in an inner product of is it 0.5 right it is also going to be no so
what's the inner product here the inner product is 0.5 times 0.5 plus 0.5 times 0.5 which is 0.5 am I dumb an embarrassingly long time later oh it's because of the L1 norm okay we got it I am okay I am too dumb yes of course I was thinking of these vectors being normalized in L2 space where their inner products would always be 1 but of course if you have", "start_timestamp": "00:24:34", "end_timestamp": "00:25:28", "start_second": 1474, "end_second": 1528, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1474s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "assignments between classes and it's a probability distribution a histogram then all of the possible assignments lie on this thing right here now the inner product with yourself of course is the length of the vector and the length of a vector that points to one class or the other class is longer than a vector that points in between so okay I see that's where they get this must be one hot from so okay I'll give that to them it is actually encouraging one hot predictions as long as these things are normalized", "start_timestamp": "00:25:28", "end_timestamp": "00:26:09", "start_second": 1528, "end_second": 1569, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1528s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "in L1 space which they probably are because they're histograms right yes that was dumbness of me I was trying to make a counter example I'm like wait a minute this counter example is a counter example to my counter example okay so yeah that's that so as you can see they are of course correct here and they now make the first experiments so they say basically after the
first step of the self supervised training they can already retrieve sort of nearest neighbors and the nearest neighbors of these images right", "start_timestamp": "00:26:09", "end_timestamp": "00:26:56", "start_second": 1569, "end_second": 1616, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1569s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "here are the ones that you see on the right and after the self supervised one these nearest neighbors are already pretty good at sharing the high level features actually crazy-crazy good right this flute here is in different sizes as you can see the fishes aren't all exactly the same the birds so you can see it really focuses on sort of higher level features but I guess it's really dependent on this higher level task and they also investigate this quantitatively but I just want to focus on how good is", "start_timestamp": "00:26:56", "end_timestamp": "00:27:36", "start_second": 1616, "end_second": 1656, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1616s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "this after only the self supervised thing and now they do this clustering and they could already evaluate it right here because now they have a clustering right after this step they've basically pulled together the neighbors and they have this neural network that is now assigning classes so they could already evaluate this and they are going to do that but that's not good enough yet then they do a third step which is fine tuning through self labeling now self labeling is pretty much exactly what", "start_timestamp": "00:27:36", "end_timestamp": "00:28:10", "start_second": 1656,
"end_second": 1690, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1656s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "it's what it says you label your own data with your own classifier now that might be a bit outrageous and you're basically saying wait a minute if I label my own data and learn a classifier on these labels isn't it just going to come out the same and the answer is no right if you have a dataset because your classifier doesn't give you just first of all if your classifier is something like this right just happens to be and you label and you learn a new classifier it is going to be more like this right because a lot", "start_timestamp": "00:28:10", "end_timestamp": "00:28:57", "start_second": 1690, "end_second": 1737, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1690s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "of classifiers maximize these distances between the classes so even if it's like that then the second step they do is they say okay there are some points which we are actually more confident about such as this one we're more confident about that one also this one and then this one here is pretty close like we're not super sure neither about this one but we're very confident about these two so we're only going to use the ones that we are in fact confident about to learn the new classifier or basically you can also weigh them and", "start_timestamp": "00:28:57", "end_timestamp": "00:29:34", "start_second": 1737, "end_second": 1774, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1737s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail":
"https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "so on but they go by confidence right here as you can see in this final algorithm so this is the entire algorithm and I got kicked away from our algorithm there we go all right so semantic clustering by adopting nearest neighbors their SCAN algorithm so in the first step you do this pretext task this is the self supervision the representation learning okay for your entire data set no sorry this is you optimize your neural network with task T that's just self supervised representation learning okay then the", "start_timestamp": "00:29:34", "end_timestamp": "00:30:23", "start_second": 1774, "end_second": 1823, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1774s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "second thing we're going to determine the nearest neighbor set for each X now they also augment the data they do a heavy data augmentation and so on also in the third step in the self labeling they do data augmentation there's a lot of tricks in here but ultimately the base algorithm goes like this so you find your neighboring sets for each X and then while your clustering loss decreases you update this clustering neural network with this loss that we saw so this is the loss", "start_timestamp": "00:30:23", "end_timestamp": "00:31:00", "start_second": 1823, "end_second": 1860, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1823s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "where you make the nearest neighbors closer to each other while still keeping the entropy high okay and then at the end after you've done this you go
through when you say while the length of Y increases what's Y Y is all the data points that are above a certain threshold now you're going to filter the data set that is above a certain threshold and that's your data set Y and you retrain this same neural network you basically fine-tune it with the cross entropy loss on your own labels so now you only have labels Y", "start_timestamp": "00:31:00", "end_timestamp": "00:31:44", "start_second": 1860, "end_second": 1904, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1860s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "okay so it's not labels you have the cross entropy loss between the assignments of this and the assignments of your data set okay so you basically do the same task but you filter by confidence and they use a threshold I think of 0.7 or something like this now let's go into the experiments the experiments look as follows so they do some ablations to find out where in their method kind of the gains come from and we'll just quickly go through them if they just do the self supervision at the beginning and then", "start_timestamp": "00:31:44", "end_timestamp": "00:32:33", "start_second": 1904, "end_second": 1953, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1904s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "just do k-means clustering on top of that that will give them on CIFAR-10 a thirty five point nine percent accuracy so not very good so the clustering you can't just cluster on top of these representations and then be done if they do what they say so this is sample and batch entropy loss this basically means you do not care about the nearest neighbors you do this entire thing but you only make an image
close to the prediction of itself and its augmentations so you don't use any nearest neighbor information also", "start_timestamp": "00:32:33", "end_timestamp": "00:33:11", "start_second": 1953, "end_second": 1991, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1953s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "doesn't work like I wouldn't pay too much attention whether the numbers are ten twenty or thirty it just doesn't work now if you use the SCAN loss all of a sudden you get into a regime where there is actual signal so this is now significantly above random guessing and if you use strong data augmentation as I said a lot of this has these tricks in it of what kind of data augmentation you do and so on so never forget that these papers besides their idea they put in all the tricks", "start_timestamp": "00:33:11", "end_timestamp": "00:33:53", "start_second": 1991, "end_second": 2033, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1991s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "they can so you get 10% more and then if you do this self labeling step you get another 10% more and this is fairly respectable like 83.5 without ever seeing labels it's fairly good but of course there are only ten classes right here so keep that in mind but they will do it on ImageNet later and they investigate what kind of self supervision tasks at the beginning are important and they investigate things like RotNet feature decoupling and noise contrastive estimation and noise contrastive estimation is the best", "start_timestamp": "00:33:53", "end_timestamp": "00:34:33", "start_second": 2033, "end_second": 2073, "url":
"https://www.youtube.com/watch?v=hQEnzdLkPj4&t=2033s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "noise contrastive estimation I think is just where you as we said you input an image and then its kind of noisy versions augmented in various ways and then you classify them together and these methods have been very successful in the last few years yeah so they have various investigations into their algorithm I want to point out this here this is the accuracy vs. confidence after the complete clustering step so this is now after the third step the self labeling and you can see right here as", "start_timestamp": "00:34:33", "end_timestamp": "00:35:17", "start_second": 2073, "end_second": 2117, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=2073s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "the confidence of the network goes up the actual accuracy goes up as well so that means the network after the clustering is really more confident about the points that it can classify more accurately there's like a correlation between whether the network is confident and the actual label of the point which is remarkable because it has never seen the label but also see how sort of the range here is quite small so with the standard augmentation that goes like from here to here so where you set that threshold is", "start_timestamp": "00:35:17", "end_timestamp": "00:35:53", "start_second": 2117, "end_second": 2153, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=2117s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "fairly important that might be
quite brittle here because you need to set the threshold right such that some points are below it and some are above it and you don't want to pull in points where you're not sure because if you pull in points from here you only have the correct label for 75% or so of them and that means if you now self label and learn on them you're going to learn the wrong signal so this step seems fairly brittle honestly but I don't know of course they go on and investigate", "start_timestamp": "00:35:53", "end_timestamp": "00:36:41", "start_second": 2153, "end_second": 2201, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=2153s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "various things such as how many clusters do you need or how many nearest neighbors sorry do you need this number K here and you can see that if you have zero neighbors then you're doing a lot worse than if you have let's say five nearest neighbors so the jump here as you can see is fairly high in all the data sets but after that it sort of doesn't really matter much so it seems like five nearest neighbors should be enough for most things and here they just show that when they remove the false positives their algorithm", "start_timestamp": "00:36:41", "end_timestamp": "00:37:18", "start_second": 2201, "end_second": 2238, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=2201s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "actually converges to the correct clustering the correct accuracy which is not surprising like if you remove the samples that are wrong then the rest of the samples are going to be right I think that's just showing that it doesn't go into some kind of crazy downward
spiral loop or something like this but still it's just kind of funny okay so they investigate how much they improve and they improve by quite a lot above the kind of previous methods so they have a lot of previous methods and", "start_timestamp": "00:37:18", "end_timestamp": "00:37:51", "start_second": 2238, "end_second": 2271, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=2238s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "this includes things like k-means and so on GANs DeepCluster that we spoke about and this method already gets as you can see fairly close to good accuracy so you have like eighty eight point six percent accuracy and that's you know fairly remarkable on CIFAR-10 without seeing the labels but we'll go on and now they go into ImageNet now ImageNet of course has way more classes it has a thousand classes compared to CIFAR-10's ten classes so if you think you know clustering ten classes and they're fairly apart from each", "start_timestamp": "00:37:51", "end_timestamp": "00:38:34", "start_second": 2271, "end_second": 2314, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=2271s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "other might work with various techniques ImageNet a thousand classes that's way more difficult so they do subsample this to 50 100 and 200 classes and they get okay accuracy as you can see they get 81 percent for 50 classes where a supervised baseline would get 86 percent and for 200 classes they get 69 percent where a supervised baseline would get 76 percent so it's fairly there and that's quite remarkable for this low number of classes and they figure out that if they look for these samples",
"start_timestamp": "00:38:34", "end_timestamp": "00:39:24", "start_second": 2314, "end_second": 2364, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=2314s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "that are kind of most in the middle of their cluster they get these prototypes right here you can see all of these images if you know imagine that some of the images really only have part of the object and so on so here with the prototypical things you really get a clear center shot of the object with clearly visible features and so on so this sort of repeats the fact that this clustering really does go on sort of semantic information of course the labels here are you know", "start_timestamp": "00:39:24", "end_timestamp": "00:40:03", "start_second": 2364, "end_second": 2403, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=2364s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "from the test label set the network can't figure that out and then they go for a thousand classes and in a thousand classes it doesn't really work because there might be just too many confusions right here but they do have this confusion matrix of their method and it shows that the confusion matrix is pretty much block diagonal along these super clusters right here so you can see the dogs the network confuses the dogs fairly often and then insects with each other but not really across here", "start_timestamp": "00:40:03", "end_timestamp": "00:40:42", "start_second": 2403, "end_second": 2442, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=2403s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail":
"https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "which is still quite remarkable but I mean you get the same thing for a lot of these methods so I don't know how much different this would be in other methods but certainly it's interesting to look at now they go into one last thing and that is what if we don't know how many clusters there are right if we don't know anything so say so far we have assumed to have knowledge about the number of ground truth classes the model predictions were evaluated using the Hungarian matching algorithm we already saw this in DETR", "start_timestamp": "00:40:42", "end_timestamp": "00:41:19", "start_second": 2442, "end_second": 2479, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=2442s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "by Facebook if you remember however what happens if the number of clusters does not match the number of ground truth classes anymore so they now say Table three reports the results when we overestimate the number of ground truth classes by a factor of two okay so now they build just twenty classes for CIFAR-10 instead of ten classes and we're going to look at Table three real quick where's Table three this is Table three okay so when they over cluster you get the thing here on the bottom and you can see there is a drop in accuracy right", "start_timestamp": "00:41:19", "end_timestamp": "00:42:02", "start_second": 2479, "end_second": 2522, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=2479s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "here now they don't actually say how they do the over cluster matching so if you imagine if I now
have I don't know six clusters but I need to assign them to three clusters you know here do I still use this most optimistic thing I think they still use this most optimistic matching right where you assign everything to its best fitted cluster right you compute all the permutations and then you give it the best benefit of the doubt now if you imagine the situation where I over", "start_timestamp": "00:42:02", "end_timestamp": "00:42:48", "start_second": 2522, "end_second": 2568, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=2522s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "cluster to the point that I have each image in its own cluster and I run this algorithm to evaluate my clustering I give it basically the most beneficial view then I would get a hundred percent accuracy okay so in this over cluster approach I would sort of expect that you actually get a better score because there is more generosity of the matching algorithm involved now that's counteracted by the fact that you can't group together things that obviously have similar features because they are", "start_timestamp": "00:42:48", "end_timestamp": "00:43:29", "start_second": 2568, "end_second": 2609, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=2568s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "in the same class so there's kind of two forces pulling here but I was kind of astounded that it's going down and the evaluation method of this matching algorithm it sort of breaks down when you have more classes at least in my opinion yeah but it's interesting to see that you can just overshoot but then you need some sort of heuristic to reconcile that in
any case I think this paper is pretty cool it brings together a lot of things that were already present and introduces this kind of step approach but what you have to keep", "start_timestamp": "00:43:29", "end_timestamp": "00:44:08", "start_second": 2609, "end_second": 2648, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=2609s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "in mind and by the way there's lots of samples down here what you have to keep in mind is there are a lot of hyper parameters in here there are like this threshold and you know first of all yeah the number of classes the thresholds the architectures and so on and all of this has been tuned to get these numbers really high right all of these steps all of the augmentations and so on the chosen data augmentations it has been chosen to get this number as high as possible so you know to interpret this as oh look we can classify without", "start_timestamp": "00:44:08", "end_timestamp": "00:44:49", "start_second": 2648, "end_second": 2689, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=2648s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "hQEnzdLkPj4", "text": "knowing the labels is you know yes in this case but the hyper parameter choices of the algorithm are all informed by the labels so it is still very unclear how this method will actually work when you really don't have the labels when you actually have to choose the hyper parameters in absence of anything and yeah I think the future might tell if they continue to work on this alright thanks for listening watching and bearing with me through my wrestling with various basic math in this video I", "start_timestamp": "00:44:49", "end_timestamp":
"00:45:30", "start_second": 2689, "end_second": 2730, "url": "https://www.youtube.com/watch?v=hQEnzdLkPj4&t=2689s", "title": "Learning To Classify Images Without Labels (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hQEnzdLkPj4/maxresdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "I'm excited to be here because I looked at the class syllabus and it looks really good so it's super relevant so my name is Hayk I work at Skydio I lead the autonomy team there Skydio is a start-up in Redwood City that makes vision based fully autonomous drones this is the drone we just launched the Skydio 2 and it does a lot of stuff that you've been learning in this class and so I mostly just want to show a bunch of visualizations talk about the way we think about things and please ask lots of questions because that's", "start_timestamp": "00:00:00", "end_timestamp": "00:00:36", "start_second": 0, "end_second": 36, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=0s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "the whole point so I'll just pass this around if you guys want to take a look here we go I don't think I'm allowed to fly in here but I would and you'll have to excuse my moustache but it is November so okay so just to get you guys interested I'm gonna show the first like I don't know 40 seconds of our launch video for this guy which we launched just about a month ago and we're shipping out right about now so yeah all right so you can watch the rest it's pretty good but I'm just going to talk very quickly about the point of our", "start_timestamp": "00:00:36", "end_timestamp": "00:02:15", "start_second": 36, "end_second": 135, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=36s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail":
"https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "company and then just jump into technical stuff so basically drones are really useful because you can have an eye in the air moving around in 3d and that's useful for just a ton of different industries and this is nothing new the problem is that I think the technology for them to be safe and trustworthy and intelligent in the wild in enough places hasn't been there and so that's the point of Skydio from day one it's like let's make the technology for this type of robot work such that it can go out and do things in the world so", "start_timestamp": "00:02:15", "end_timestamp": "00:02:48", "start_second": 135, "end_second": 168, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=135s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "basically we've been invested in this like building the whole stack and the reason we build the hardware is because like the whole thing just matters like you care a ton about weight and size and power and cost but also just about the quality of the cameras the exact like vibration characteristics of the thing the electronics in play the power system the embedded system and the software on top that runs like controlling the whole thing makes it possible to like push the limits of robots and that's", "start_timestamp": "00:02:48", "end_timestamp": "00:03:18", "start_second": 168, "end_second": 198, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=168s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "why we do it and so not just you know this is the consumer launch that that video is regarding you know like this awesome video capture
for anyone without needing to be an expert pilot but it also makes a ton of sense for any task where you're inspecting things like bridges and cell towers and pipelines and rooftops anywhere that's dangerous anywhere that's costly and you need to build a map of things so for construction for mining it's a long list so as long as you have something that's trustworthy intelligent can", "start_timestamp": "00:03:18", "end_timestamp": "00:03:52", "start_second": 198, "end_second": 232, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=198s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "automate tasks and get even just video data it's incredibly useful so that's our goal this is another slide that says the same thing okay and then here's just a view of kind of the more commercial side of this so the idea of being able to get up inside kind of these high voltage trusses and power lines and take close inspection photos this is something that's very hard to do manually and it's dangerous and these things need to get inspected all the time it's incredibly expensive and then on the right is us", "start_timestamp": "00:03:52", "end_timestamp": "00:04:29", "start_second": 232, "end_second": 269, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=232s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "being inside kind of just this narrow bridge truss with metal on all three sides so something you really can't do with GPS it's just not possible cool and then finally I'll show this product that we just kind of announced that we're super excited about which is called the Dock also commonly referred to as the box which is
essentially just a cozy home for the drone that is weatherproof and basically it pops out the drone flies and does a task comes back and lands and goes in and the thing's cloud connected and the", "start_timestamp": "00:04:29", "end_timestamp": "00:05:09", "start_second": 269, "end_second": 309, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=269s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "whole idea is like you have this robot out somewhere on your construction site along a pipeline and the drones go out and do tasks on a regular schedule and they come back and the whole thing is hands-off and it's truly just an autonomous thing that's doing work and so it becomes more of a cloud platform where people can use that rather than you know someone piloting this robot and in control of it and that's kind of how we see the long-term play of this going", "start_timestamp": "00:05:09", "end_timestamp": "00:05:45", "start_second": 309, "end_second": 345, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=309s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "cool so here's a video that just shows it's just a clip of a mountain biker that we think is a pretty good shot of just autonomous visual tracking following so the drone is doing all this computation of course onboard in real-time based on visual tracking and we're able to kind of get some nice cinematic video here with tree occlusions with fast motion and here we're getting a shot from the front which we call lead mode which is pretty tough because it means you have to do a lot better job anticipating someone's", "start_timestamp": "00:05:45", "end_timestamp":
"00:06:18", "start_second": 345, "end_second": 378, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=345s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "motions so I want to just talk about aspects of like how this works and just build it up and again please ask lots of questions okay so if you look at the drone it's got the main 4k camera on a 3-axis gimbal and that's kind of what we were looking at here and then it has six navigation cameras so you can look at the geometry of the drone there's three on top and three on the bottom and they're actually these crazy super fisheye cameras so they see 200 degrees so they see like beyond a", "start_timestamp": "00:06:18", "end_timestamp": "00:06:58", "start_second": 378, "end_second": 418, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=378s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "hemisphere which means the three on the top can basically see the upper hemisphere in trinocular view and then the three on the bottom can see the bottom hemisphere in trinocular view and the goal is to get like 360 coverage so we can do 360 you know obstacle avoidance and awareness of the scene so this is what one of the cameras looks like on the bottom and one on the top the other four are pretty similar to these two so you can see the view is pretty funky like this is one of the propellers that's", "start_timestamp": "00:06:58", "end_timestamp": "00:07:31", "start_second": 418, "end_second": 451, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=418s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail":
"https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "the battery on the top that you're seeing from the bottom right side where this is forward and then this fin actually if you get the drone you can see by the camera it's got this little fin which protects the camera lens if you were to like drop it down and it looks a lot bigger here than on the drone itself but that's because it's just really right next to the camera and then this is the other yeah I think the other arm is over there that you're seeing from one of the top cameras so obviously it's this kind of", "start_timestamp": "00:07:31", "end_timestamp": "00:08:09", "start_second": 451, "end_second": 489, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=451s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "tiny planet distorted view but this is what we work with and the cameras are rolling shutter so doing computer vision on rolling shutter cameras is a really hard problem because basically what that means is every row is kind of taken at a different time which means if you're undergoing fast rotation especially but even fast motion the contents of the image are as if you know time is changing in them so to account for that correctly you have to be very careful so here's what it kind of looks", "start_timestamp": "00:08:09", "end_timestamp": "00:08:49", "start_second": 489, "end_second": 529, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=489s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "like with the navigation cameras put together to create a 360 view so if you see the clip we were watching before is on the
top left but that is of the full sort of sphere where you can see it's just this yellow outline does that make sense so the navigation cameras give a lot more context and that's you know what we use for all the algorithms so the core things we basically need to do to make this drone fly are one state estimation so we need to estimate our trajectory how", "start_timestamp": "00:08:49", "end_timestamp": "00:09:26", "start_second": 529, "end_second": 566, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=529s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "we're moving in the world and the second thing is obstacle avoidance building a dense map of the environment in a way that we can use to navigate the third thing is detection and tracking of objects so in this case the biker seeing them in the image estimating them in 3d predicting their trajectory and the fourth thing is motion planning so with that sort of understanding of the scene we want to figure out what is the best flight path that sort of in this case provides a really cinematic video and is also not crashing it's also smooth and", "start_timestamp": "00:09:26", "end_timestamp": "00:10:02", "start_second": 566, "end_second": 602, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=566s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "respects the physical dynamics of the vehicle like being able to push the limits of aggressiveness if you need it but without sort of commanding something that is not possible for the physical vehicle okay and then here is the same 360 view with a depth estimate at the bottom so this is one way of visualizing the
like full output of our depth estimation system where the close pixels are more yellow and so you can imagine how that is really useful for obstacle avoidance because you can directly kind of feed", "start_timestamp": "00:10:02", "end_timestamp": "00:10:45", "start_second": 602, "end_second": 645, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=602s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "into the planning system as a 3d map I'm visualizing it here in what we call an equirectangular view because it's a nice way to show the 360 but this is not actually how we use the system online okay and then here's one more way of viewing like a single instance in time here it's a point cloud view that is kind of colored by the pixels themselves this guy up here that's the drone that's the trajectory the drone was flying and then here is the person's trajectory does that make sense and then you have a", "start_timestamp": "00:10:45", "end_timestamp": "00:11:31", "start_second": 645, "end_second": 691, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=645s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "scene around us and then it's going to switch to a depth view so instead of being colored by the pixels themselves they'll be colored by the range from the vehicle so the distance of each point from the vehicle similar to what we were looking at before but now in 3d okay cool so I'm going to talk a little bit about what we need to do in terms of state estimation and this was actually our first drone the R1 that's a picture of it but anything that you know could potentially be variable on the drone we want to model and estimate so the",
"start_timestamp": "00:11:31", "end_timestamp": "00:12:15", "start_second": 691, "end_second": 735, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=691s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "obvious one is the trajectory of the robot which would be like a pose in space so a position velocity orientation angular velocity over time of the robot and we do this with our visual inertial odometry system I'll show one slide after this we also have several IMUs so accelerometer gyroscope which is super useful for propagating you know short term data it really only works to integrate the IMU at least the accelerometer for a matter of maybe a second beyond that on a drone and with the sort of size and cost", "start_timestamp": "00:12:15", "end_timestamp": "00:12:51", "start_second": 735, "end_second": 771, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=735s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "constraints it's too noisy but IMUs have biases and estimating them correctly is key to being able to effectively use them and then we have the poses of the cameras on the vehicle so we you know obviously have CAD of the lenses and they get calibrated at factory time and we take that and we put that into our model of you know how we're gonna go from things in the image in 2d to 3d and the problem is those move around so every vehicle is a little bit different just because of the physical characteristics so they get calibrated", "start_timestamp": "00:12:51", "end_timestamp": "00:13:31", "start_second": 771, "end_second": 811, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=771s", "title": "Lecture 23 Guest Lecture: Drones --
CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "in the factory by a robot that waves them around in front of a bunch of tags and we know how to model that we have a model for the camera lenses and we calibrate that and also their poses the problem with that is that they also move around as the vehicle undergoes temperature changes especially so it's a magnesium frame with plastic pieces and those things expand and contract and any small amount of rotation like even a small fraction of a degree significantly changes the actual results you get when something is 20 meters away so that the", "start_timestamp": "00:13:31", "end_timestamp": "00:14:04", "start_second": 811, "end_second": 844, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=811s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "problem is that the ability of estimating depth or really doing odometry from things at that distance depends on the inverse range of the object you're looking at and so as you get that far out you're really looking for fractional differences of a pixel between multiple images and that makes it so that you have to be very careful about your calibration yes so we calibrate the poses of the cameras and in particular it's mostly about the rotation like the actual", "start_timestamp": "00:14:04", "end_timestamp": "00:14:41", "start_second": 844, "end_second": 881, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=844s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "translation of the cameras moving it does not have a large effect on the
results it's mostly rotations but we calibrate those things online so jointly with our odometry system there are many variables in the optimization some of which are the pose of the robot but also the poses of the cameras and the biases of the IMU and additionally we calibrate the intrinsic parameters of our lenses which is kind of a nuts thing to do online but those also change I'll show a video of that here so this is showing a", "start_timestamp": "00:14:41", "end_timestamp": "00:15:17", "start_second": 881, "end_second": 917, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=881s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "temperature sweep of a fixed lens and you can actually see sort of these circles are fixed but you can see how much it moves it's actually several pixels of motion there over the change and those are the like intrinsic lens parameters and the distortion model of the camera and so to really handle this we also estimate these online with everything else which is a cool and hard problem and then of course we have propellers spinning so we've got to estimate the rates of those propellers and then we have the gimbal and we", "start_timestamp": "00:15:17", "end_timestamp": "00:15:55", "start_second": 917, "end_second": 955, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=917s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "estimate in a combination of ways exactly where the user camera is pointed because while we have sensors on the gimbal that tell us where we think it is and we've commanded it to be somewhere it's also on an isolation system and there are other changes there so we also additionally
do visual estimation between the user camera and the navigation cameras to line those things up cool so here's just a quick visualization of the visual inertial odometry system so on the bottom left are some features that are being tracked", "start_timestamp": "00:15:55", "end_timestamp": "00:16:35", "start_second": 955, "end_second": 995, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=955s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "in this case they're sparse features just tracked over time in one camera and the right is the estimated trajectory of the robot so I think you guys have gone through kind of a traditional SLAM pipeline is that right in this class sort of maybe ok well anyway yes so what you get is a bunch of 2d observations in your camera images by tracking in your images and you can track either between cameras at a single time instant or you can track over time and really you want to kind of do both and there's different", "start_timestamp": "00:16:35", "end_timestamp": "00:17:17", "start_second": 995, "end_second": 1037, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=995s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "characteristics of what works well but then you take that and you are jointly optimizing for the 3d locations of those landmarks that agree with the observations you've made in the images while also solving for your own pose and the residual in this kind of objective will be the reprojection of that 3d point into your camera model for your pose and those are all variables in your optimization resulting in a pixel coordinate and then you would compare that pixel coordinate to the pixel coordinate
at which you actually", "start_timestamp": "00:17:17", "end_timestamp": "00:17:52", "start_second": 1037, "end_second": 1072, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=1037s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "observed that feature in that camera and with multiple observations of what you think is the same 3d point that starts to constrain things in this system okay still no okay so here's a little bit of what rolling shutter can look like in this case it's rolling shutter combined with vibration it's slowed down a lot it's a little subtle to see here but you can see these like wiggles we call it wibble-wobble yes so the car you know you can see the windshield of the car sort of waving around and that if you're not", "start_timestamp": "00:17:52", "end_timestamp": "00:18:40", "start_second": 1072, "end_second": 1120, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=1072s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "modeling that if you're not accounting for it that's just pure noise that's going to mess things up for you in a way that does not match geometric constraints if this is happening to you you have two choices choice one is to like improve your hardware improve your system such that you don't get this issue like there might be a certain safe frequency of vibration that is related to the time it takes to read out your camera and you get some periodic signal option two is to have a way of estimating", "start_timestamp": "00:18:40", "end_timestamp": "00:19:19", "start_second": 1120, "end_second": 1159, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=1120s", "title": "Lecture 23 Guest
Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "exactly how the camera moved during this time period and accounting for that which requires very careful work and we do both of these things but once you're trying to fly like at high speeds in aggressive scenarios like we really have to be pushing the edge of what we can model there and what we can estimate okay cool this is one of my favorite slides because I think it talks about what is really hard about computer vision at least computer vision in a production robot so A here", "start_timestamp": "00:19:19", "end_timestamp": "00:20:00", "start_second": 1159, "end_second": 1200, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=1159s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "is just thin branches so flying in a forest right now it's winter we're gonna have a bunch of customers they're gonna take their drones and try to follow themselves riding a bike in the forest and there's going to be all kinds of terrifying thin branches that you can barely see and we work really really hard on being able to see that from the cameras I'll make the bold claim that I think we're almost definitely the best in the world at seeing really fine thin branches from camera images and", "start_timestamp": "00:20:00", "end_timestamp": "00:20:37", "start_second": 1200, "end_second": 1237, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=1200s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "when you're flying you know you'd kind of be thinking
about like what's the resolution of my camera how fast am I flying what's my physical reaction time to either steer away or stop it's obviously not guaranteed you have a way to steer away so you'd really be thinking about okay if I'm flying at 35 miles per hour how far away do I have to see an object to be able to stop if it suddenly pops in and that kind of plays into how faint like what is", "start_timestamp": "00:20:37", "end_timestamp": "00:21:10", "start_second": 1237, "end_second": 1270, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=1237s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "the smallest perceptible signal you can have in your camera images before you have a chance of seeing that and I think we're pretty close to the limit there of like if you can as a human look at the camera image and see something we probably see it as well but it's still like we're up against that edge of just how much signal can we get out of that so B here is a really fun scenario where there's a bright sun so the tough lighting conditions make everything harder right so in this case", "start_timestamp": "00:21:10", "end_timestamp": "00:21:47", "start_second": 1270, "end_second": 1307, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=1270s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "there's a low sun and we're in a forest and these are actually images from R1 the first drone but it's very similar so in this slide the camera on the left I think has a smudge on it like a fingerprint on the lens which
basically makes light go differently into the camera and the left and right cameras look very different there what that means is that if you're looking between the images for a photometric signal they're not agreeing", "start_timestamp": "00:21:47", "end_timestamp": "00:22:25", "start_second": 1307, "end_second": 1345, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=1307s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "at all so typically what you might do for a stereo vision algorithm is look at two cameras and look along an epipolar line for something where you know say a small patch of pixels looks the same in the left and right camera the problem is if you're up over here you've just got these crazy vertical sun rays that will match other sun rays in a totally inconsistent way but not your actual contents so it's a huge source of noise especially when it's different between the two cameras", "start_timestamp": "00:22:25", "end_timestamp": "00:23:01", "start_second": 1345, "end_second": 1381, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=1345s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "and then C is just motion blur so the drone can you know unless it hits something and flips over that's kind of our decision to allow very aggressive rotations up to several hundred degrees per second which induces motion blur in the cameras D is just reflections and mirrors and I really don't have a good solution for that like if you fly this drone in front of a panel of glass that's like three meters by three meters wide it will you know it
will see what's inside and think it can fly in the", "start_timestamp": "00:23:01", "end_timestamp": "00:23:43", "start_second": 1381, "end_second": 1423, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=1381s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "the hope of doing this like semantically it's tough it's really tough and then mirrors of course like in this case a reflective building and so the hope is that these vertical struts you know those make geometric sense and would prevent us from flying into it but the part of the mirror that's just reflecting the building behind well that just looks like an opposite world that's geometrically consistent and so our depth map will reflect that it thinks that these are basically poles and then", "start_timestamp": "00:23:43", "end_timestamp": "00:24:16", "start_second": 1423, "end_second": 1456, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=1423s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "there's something behind there if you look closely you can kind of see there's some blinds in here horizontally and so at some point if you can see what's in the window and you can see what's behind there are like two potential realities for what that could be and it's an interesting problem to think about which would happen and which is the stronger signal okay so E is just big textureless surfaces so let's say the drone is like right up against the ceiling that's typically the hardest of", "start_timestamp": "00:24:16", "end_timestamp": "00:24:56", "start_second": 1456, "end_second": 1496, "url":
"https://www.youtube.com/watch?v=Yizyv8MpYfg&t=1456s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "cases and really only sees what the ceiling looks like so it's just white and that's also what the sky looks like in a lot of scenarios now sometimes the sky is bright blue but not always often it's just basically saturated whiteness from the cameras and deciding which of those two is the case is a really hard problem the best thing that can help you there is like the fact that we have full 360 context on the drone means that the rest of the scene can tell us oh is this more likely to be an indoor scene with a room or is this going to be", "start_timestamp": "00:24:56", "end_timestamp": "00:25:29", "start_second": 1496, "end_second": 1529, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=1496s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "you know we're flying out in an outdoor scene and the reason it matters so much is because well the sky is one of the best directions to fly in like it's usually very safe and so if we're trying to get away from something and try to figure out how to move like going up is usually a very good decision but not if it's a ceiling then you're gonna have a bad time okay and then F is water so there's just water droplets on the lens I mentioned smudges from your fingerprint also dirt also dust also cracked lenses just all", "start_timestamp": "00:25:29", "end_timestamp": "00:26:07", "start_second": 1529, "end_second": 1567, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=1529s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail":
"https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "manner of things that inhibit the sensor and this is something that we don't particularly control right the user is is responsible they get for cleaning the lenses but they take this drone somewhere we've never been and we asked them to wipe the thing and we and we detect these things online like that's the best thing we can do so actually as a drone is flying we look for potential issues with what the cameras see and and having dirty lenses and we have sort of we have a lot of these types of warnings but this is an", "start_timestamp": "00:26:07", "end_timestamp": "00:26:39", "start_second": 1567, "end_second": 1599, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=1567s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "interesting one particular is like we have multiple levels of like oh it looks like your lenses are dirty like you should probably land and clean them and try again and you can dismiss that and most people just dismiss it and don't listen and then there's another level that's like okay you really have dirty cameras like we're not gonna let you say track anymore so the drone just kind of like goes into a hover it's like oh it's not safe but we can't just make the drone land or something because the drone could be you know a couple of", "start_timestamp": "00:26:39", "end_timestamp": "00:27:13", "start_second": 1599, "end_second": 1633, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=1599s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "kilometres out over the ocean or something and just hovering there or landing is not a good acceptable solution and just coming 
back on its own is also dangerous so what we sort of chose to do there is for the drone to just hover and you can control it manually on your phone or with a controller but it won't like track you because then it's sort of more autonomous and you're oblivious to the fact that its performance is degraded for sure so we we care about lots of those types of robustness issues because it's just like getting getting", "start_timestamp": "00:27:13", "end_timestamp": "00:27:46", "start_second": 1633, "end_second": 1666, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=1633s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "to a demo that works of like okay you know we have we have this drone it films somebody it avoids obstacles like that is much much easier obviously than getting to the point where you can actually have something where someone paid for a product and they take it somewhere new and they like initially don't trust it but what we typically see is after like one or two flights of you know no like no problems it works and in the vast majority of you know scenes you're not gonna have any issue but you quickly just gain way too much", "start_timestamp": "00:27:46", "end_timestamp": "00:28:18", "start_second": 1666, "end_second": 1698, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=1666s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "confidence in the robot just to work anywhere anytime and that's something that we kind of struggle with is how to like communicate limitations in an effective way because it does work so well in so many cases okay talk a little bit about mapping so and by talk I mean show this video so these are just you know a voxel 
representation over the user camera video basically just to look cool so in terms of dense mapping so that the goals from state estimation are different right because for estimating", "start_timestamp": "00:28:18", "end_timestamp": "00:29:01", "start_second": 1698, "end_second": 1741, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=1698s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "how we're moving basically we have we have cameras in every direction and you don't need a lot of points like you need a little bit of information about things that are static and you track them well which means if something is very difficult like crazy tangle of thin branches you can just ignore that part and look at the ground or look at the tops of the trees and that's enough to tell you how you're moving but for actually mapping things like we need to pay attention to every single direction and make a decision", "start_timestamp": "00:29:01", "end_timestamp": "00:29:31", "start_second": 1741, "end_second": 1771, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=1741s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "about that saying oh this directions like too hard to look at like you can decide to never fly in that direction but that's that's very limiting so we published one paper here a couple of years ago with a research intern and Alex Kendall and the the paper was basically trying to combine a classical stereo matching pipeline like geometric reasoning about stereo vision with with deep learning to learn features and do the matching so at the time we we did really well on the KITTI self-driving data set which we submitted", "start_timestamp": 
"00:29:31", "end_timestamp": "00:30:08", "start_second": 1771, "end_second": 1808, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=1771s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "to and the basically the bottom one here is the ground truth of that data set which is a lidar scan and in the middle is a as a prediction of our of our network and so it it worked really well in in these scenes I'm not going to go into details here but essentially the the network was structured in a way to take advantage of sort of intuition about stereo matching and classical stare matching algorithms where we have we compute shared learned weights on each of the two images with two towers and then we have a cost volume approach", "start_timestamp": "00:30:08", "end_timestamp": "00:30:46", "start_second": 1808, "end_second": 1846, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=1808s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "where kind of mix those features together at different possible disparities and then we regularize the cost volume with 3d convolutions and then we we pick a winner so this this worked really well but I'll kind of give a couple of caveats the first caveat is it ran at one frame per second on a Titan X GPU which meant like obviously we can't use it it's orders of magnitude off and the second caveat is that frankly the the kitty data said or really like my experience any self-driving data set that I've seen are are incredibly easy", "start_timestamp": "00:30:46", "end_timestamp": "00:31:24", "start_second": 1846, "end_second": 1884, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=1846s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 
Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "compared to the types of scenes that drones tend to see in in the wild and the reason is because of regularity so most of the images just look very similar like you have a road scene you have cars there's always a ground plane there's things on the sides there are street signs that are vertical and there's just a ton of semantic priors you can do this is what monocular depth estimation works really well on driving scenes as well when things are you know when things are normal now that's not always the case but for a drone you can", "start_timestamp": "00:31:24", "end_timestamp": "00:31:56", "start_second": 1884, "end_second": 1916, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=1884s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "be flying really like really in any nook or cranny of any building or while and and you really can't tell what's in a lot of the camera images and there's just no there's no good monocular prior a lot of the time because you could just be in inside some random geometric shape in a sculpture and art gallery or something and so it's it's a much harder problem for that and and specifically the the metrics on a dataset like kitty where you're typically measured on like how how accurate is like the average pixel error across the image right like", "start_timestamp": "00:31:56", "end_timestamp": "00:32:37", "start_second": 1916, "end_second": 1957, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=1916s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "I can't emphasize enough just how useless that is for 
robotic navigation purposes like being being 1% closer like it's basically just dominated by you know how well do you estimate the ground plane and the exact pixel values of the depths and if you have something like a thin object that you miss or an entire vehicle but it's further away or something like it has very little impact on the score and so focusing on metrics that really matter for autonomous navigation are things like 3d occupancy and just thinking about", "start_timestamp": "00:32:37", "end_timestamp": "00:33:12", "start_second": 1957, "end_second": 1992, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=1957s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "only the parts of scene that matter and in the end like in the in the Berkeley way that kind of one of the most powerful ways to do that is to think about the sort of action space like what are the parts of the scene that actually matter for making a decision rather than allocating your error metric by pixel values which is totally simple to think about but but pretty meaningless okay so since then we've made it a lot faster and a lot better and it's also changed in many many ways but basically we are flying deep networks that produce", "start_timestamp": "00:33:12", "end_timestamp": "00:33:53", "start_second": 1992, "end_second": 2033, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=1992s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "depths and build an obstacle map for us and it runs approximately a thousand times faster than the previous sort of first thing that I mentioned and I think we were the first ones to fly a robot in a complex scene based on sort of a deep 
learned obstacle avoidance system about a year and a half ago this was our little ATV flight yeah yeah that's a good question basically we we do a bit of both like we aggregate the images but the the dominant part of the deep learning estimation right now is is between the sort of cam like the", "start_timestamp": "00:33:53", "end_timestamp": "00:34:44", "start_second": 2033, "end_second": 2084, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=2033s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "camera is at an instant in time but we have definitely we definitely do things that are between like temporally as well there are different challenges in doing that one of the challenges is moving objects so moving objects you know you you have to estimate that motion to geometrically get the correct thing right like if you're doing optical flow and then you assume things are static you'll get incorrect depth in particular like if you're flying the same speed as somebody and following a biker say and they and and you do that", "start_timestamp": "00:34:44", "end_timestamp": "00:35:20", "start_second": 2084, "end_second": 2120, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=2084s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "and you think they're static like it'll seem like they're far away because basically they don't move with respect to you and so the pixels stay in the same pixel location which implies that they're they're basically at infinity so there's tricky things like that to take into account yeah I'm going to show some a couple of just clips of the sort of hard things we focus on so this is just a forest with a bunch of branches and and 
looking like really what you care about here is like did you see this object or not and", "start_timestamp": "00:35:20", "end_timestamp": "00:35:53", "start_second": 2120, "end_second": 2153, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=2120s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "generally the like 3d extent of it but the exact the exact details of of the shape really don't matter very much although it does tend to be correlated with you know the quality of your predictions here's a here's a 3d view of that same thing where we have the camera image on the bottom left and then the depth on the bottom right okay power lines are real and something to be worried about so okay you can see it's a little hard to tell here oh these aren't matching are they okay well so you can kind of see if you trace this", "start_timestamp": "00:35:53", "end_timestamp": "00:37:00", "start_second": 2153, "end_second": 2220, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=2153s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "out there there are a bunch of power lines in this image so these ones obviously are very clear you can see these this is a faint but you can see it over here it kind of fades into the background this is a power line that's quite faint to see and it disappears entirely behind the Sun here and then actually up here there's one that goes all the way across like that and it's it's very very hard to see so it's just giving some some intuition for that and there's a second one here and there's a third one here and so really like it", "start_timestamp": "00:37:00", "end_timestamp": "00:37:35", "start_second": 2220, "end_second": 2255, "url": 
"https://www.youtube.com/watch?v=Yizyv8MpYfg&t=2220s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "does depend a lot on the background and the lighting on everything just when it gets in your camera mijin and how much that works so again that's that's kind of the walls we tend to be up against trying to push the limits of where we can fly what we can see and how fast we can go especially and then about textualist surfaces so here's actually flying through our office and you can see on the left here example it's a scene with you know specularity is on the walls which will be lying to you photo metrically and just a lot of a lot", "start_timestamp": "00:37:35", "end_timestamp": "00:38:10", "start_second": 2255, "end_second": 2290, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=2255s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "of white texture and then here is the same thing in 3d it's funny this blue glow on the bottom is actually because there's there are some different lights on the drone but the in this case this this LED was on which is very close to this camera and it basically just shines this blue halo under right there on the ground but we we don't keep this one on in flight normally it's just these guys which have been you know designed to not be seen by the cameras the whole kind of cemented the whole geometry of the drone", "start_timestamp": "00:38:10", "end_timestamp": "00:38:58", "start_second": 2290, "end_second": 2338, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=2290s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": 
"Yizyv8MpYfg", "text": "and the inversion and everything is kind of built around enabling as much visibility as possible of the world without obstructions yeah the frequency so I can't say exactly what frequency is part of the reason is because it's sort of internal stuff and part illusion is because it it can change based on some different things so we we are pretty clever about some of the things we do in terms of so you know what what resolution do you process stuff at how complicated of algorithms do you run or intensive and which like which", "start_timestamp": "00:38:58", "end_timestamp": "00:39:38", "start_second": 2338, "end_second": 2378, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=2338s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "directions do you focus on so all those things in terms of like down sampling and foliation and choosing the the extent of the algorithm you run are pretty important to finding the the best balance and so for example if you're flying really fast at such that it's really dynamically infeasible for you to even move in these other directions for the purpose of navigation it doesn't really make sense to focus your efforts there you really want to kind of like pick the cone of just do tunnel vision there now the kind of in a different", "start_timestamp": "00:39:38", "end_timestamp": "00:40:18", "start_second": 2378, "end_second": 2418, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=2378s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "case like for example let's say you're let's say you're barreling down this way at max speed but you're also filming something that's over there now you may want to focus on those two 
directions instead of just the one because you need to estimate the scene over there to make some decisions about your high-level objectives and then obstacle avoidance over here so things like that are important games to play okay here's another fun one that's just artistic this is just an example of someone basically pushing back on the joystick", "start_timestamp": "00:40:18", "end_timestamp": "00:40:58", "start_second": 2418, "end_second": 2458, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=2418s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "so we have we have a bunch of accessories but one is just it's a controller and you can fly and the idea is basically like you control a drone like normal you know kind of if you've ever flown these mode-two joysticks and the idea is that you don't have to be an expert pilot like you can just you can just push it forward and it won't crash right it'll go around things it will it will do its thing so it's it's a really fun way to fly because it means you can you can either just have fun or do your task film something but you're not", "start_timestamp": "00:40:58", "end_timestamp": "00:41:28", "start_second": 2458, "end_second": 2488, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=2458s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "worried about physics or the drone or the crashing or things like that and that that's what this shot was basically just pushing pushing back on the sticks okay yeah so this is I can't tell if you guys can see this at all but this is kind of a typical example of what a thin branch looks like in an image so these are actually two images it's a quite a close branch and so get a sense of like here's 
here's the branch and then here's the branch so if you look closely it's it's coming out of the ground over here and then it comes all the way down very", "start_timestamp": "00:41:28", "end_timestamp": "00:42:06", "start_second": 2488, "end_second": 2526, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=2488s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "close to the camera and so the actual correct sort of match of this stick is moving from here to here and then there's another one over here that's moving like that so if you if you look closely you can kind of get a sense of that and so this is just now maybe conveying a sense of the the type of problem here which is one of the hardest things we need to do which is just carefully pick out objects and and there's kind of the outline in this image of what what we're looking at and actually the real key here is like if", "start_timestamp": "00:42:06", "end_timestamp": "00:42:44", "start_second": 2526, "end_second": 2564, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=2526s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "you flip between two images you can tell but if you're just looking at a single image you can't like no no human can and really no algorithm can either like you can look at this kind of scene all day at a tangle of branches and not have any idea what the closest thing in there is which really shows that's kind of what I was getting at with the difference between a road type scene which is very regular and this type of scene here's here's another fun one where we're actually above powerlines so there's there's one power line here you", "start_timestamp": "00:42:44", "end_timestamp": 
"00:43:16", "start_second": 2564, "end_second": 2596, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=2564s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "can see that's moving there's another think really faint one here and then there's two right here that you can see very clearly on this part of them disappear mostly up up in that section okay awesome so we have you know we can play with simulated data synthetic data and the cool thing about that is you know we can we know everything about it and we can try to make it realistic we also have real data and the problem with real data is we don't have any ground truth for the depth which is a which is a very tough problem and there's a whole body", "start_timestamp": "00:43:16", "end_timestamp": "00:43:55", "start_second": 2596, "end_second": 2635, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=2596s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "of research that focuses on trying to learn from unsupervised data which is really interesting the problem is that slide I showed about all the visions the things that make vision hard none of those things are photo metrically consistent so if that's your cue to learn depth it's that's not a solved problem right because the that that cue won't get you the really hard things so here's it's kind of a fun example of something where we took we took Suns like synthetic Suns that we generated and we just put them in real images and", "start_timestamp": "00:43:55", "end_timestamp": "00:44:26", "start_second": 2635, "end_second": 2666, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=2635s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC 
Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "the kind of fun thing you can do there is to say oh well whatever you predicted before the Sun was there you should predict the same thing with the Sun so that's kind of one way of getting at just without knowing everything at least gaining some robustness to a really big bright Sun okay let me show a little bit about tracking and detection so this is kind of a fun video of just tracking somebody through a forest with a bunch of lighting changes and occlusions and pose change so people change shape a lot and we're reproject ejector ii into the", "start_timestamp": "00:44:26", "end_timestamp": "00:45:07", "start_second": 2666, "end_second": 2707, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=2666s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "image which you need the a pretty accurate estimate of the robots trajectory to do you can and then on the top right is actual output of a deep network that predicts the pixels of this person instance at that time step and then the bottom right is just that's just the top-down view of where where the drone was and where the person was and then the red vector was it's just an estimate of the velocity so like what's in a linear sense where where's that person gonna go in the next next a little bit cool so the way we do this is", "start_timestamp": "00:45:07", "end_timestamp": "00:45:52", "start_second": 2707, "end_second": 2752, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=2707s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "kind of the way we do everything else which is a combination of geometric reasoning 
and deep learning for semantics and the images get combined with like 3d motion and things we know tend to be true about the world so oh yeah well so multi object tracking is even if you're just tracking one person which is more common tracking multiple objects is really useful because you can then discriminate between people rather than just saying everyone else is background you can say oh this person A disappeared the person", "start_timestamp": "00:45:52", "end_timestamp": "00:46:27", "start_second": 2752, "end_second": 2787, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=2752s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "B matches person B from before rather than person B also just appeared and which one's the right person so these are our three co-founders just running among each other and we're doing a pretty good job of keeping them in frame or keeping him as the correct estimate and then on the bottom oh boy oh boy okay on the bottom right here it's kind of a fun failure mode where it's hard to see but so Abe will run behind Matt here and then or Adam and Matt pops out at the same instant so like there's a person one person goes behind and hides and the", "start_timestamp": "00:46:27", "end_timestamp": "00:47:06", "start_second": 2787, "end_second": 2826, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=2787s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "other person that was hiding pops out and they're kind of at a similar velocity and so the the things you're combining are like where they are their positions their velocities someone's unlikely to just suddenly stop but it can happen and then there's like the visual 
estimate of their appearance which we do with with deep networks and so in this case the the visual estimate is similar enough that like the chance of one person disappearing and another appearing and then being at the same velocity is like", "start_timestamp": "00:47:06", "end_timestamp": "00:47:33", "start_second": 2826, "end_second": 2853, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=2826s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "very low and it clearly yeah they're basically learned features so essentially it's based on the appearance of an instance of an object and there's a pretty solid body of literature about how to try to learn those things but you tend to learn through examples of people that look similar there's different constraints based on what type of like use you want to do but typically it's like you have similar instances you have the same instance of a person but from many different views and poses and and lighting and all that and then you", "start_timestamp": "00:47:33", "end_timestamp": "00:48:15", "start_second": 2853, "end_second": 2895, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=2853s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "
sort of consumer follow you skate based use take", "start_timestamp": "00:48:15", "end_timestamp": "00:48:55", "start_second": 2895, "end_second": 2935, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=2895s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "off and then whatever is in the view of the user camera you'll see just icons on the phone so we must we do people in cars and so you'll just see a little plus icon and you just tap and then and then it works and it shows that halo that was in the launch video earlier on the phone we also have this thing called the beacon it's another accessory so this guy just a little puck that lets you basically fly without your phone and it has a it has a GPS that we trust a lot more and so what this does is if it lets the drone if it loses visual track", "start_timestamp": "00:48:55", "end_timestamp": "00:49:34", "start_second": 2935, "end_second": 2974, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=2935s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "and you view like you get stuck or something fails it can basically try to just combine between that and the GPS estimate and then find you again and pick you up visually and so the idea is to just to make it much more robust in terms of if you're skiing or something that you you're just hands you don't have to worry about it and so the other way we do it which is kind of fun is if you just if you take off like this it will take off and if there was someone standing right there it will just track that person okay cool", "start_timestamp": "00:49:34", "end_timestamp": "00:50:11", "start_second": 2974, "end_second": 3011, "url": 
"https://www.youtube.com/watch?v=Yizyv8MpYfg&t=2974s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "say a little bit about planning so I think generally we we have a bunch of knowledge of the environment it's all uncertain there's no guarantees about anything because you know we're not sure if we saw the thin branch where we think we know where we are there's uncertainty about all these things we try to reason about that and the best way possible in the end like we have some measurements and we need a policy that decides what the propeller should do such that the drone accomplishes our goals you know some of which are are", "start_timestamp": "00:50:11", "end_timestamp": "00:50:45", "start_second": 3011, "end_second": 3045, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=3011s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "down here like obstacle avoidance dynamic limits smoothness effective tracking and cinematic quality of the video and these are all you know that there are hard trade-offs with each other it's it's not easy to achieve these things in all but the simplest of cases so I think it breaks down a way that I think you you've you've all kind of seen before so we won't go into this one but basically like we have the the biggest the most interesting part of the problem is figuring out what the cost function should be based on everything", "start_timestamp": "00:50:45", "end_timestamp": "00:51:22", "start_second": 3045, "end_second": 3082, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=3045s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} 
{"video_id": "Yizyv8MpYfg", "text": "we know and so based on the current state of things in the full environment what what are the actions we should choose that that meant like that give us the best result and defining that function is a lot of work and trial and error and investigation and that's that's something we just work at on just iteratively like trying things out flying it seeing what fails trying to like figure out what our discrepancies between we really want and what we saw which is which is a really fun problem and then you have to find you actually you need", "start_timestamp": "00:51:22", "end_timestamp": "00:51:57", "start_second": 3082, "end_second": 3117, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=3082s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "to find an action that accomplishes that goal and that that's the cycle and then either you didn't have the right objective or you didn't find the minimum for that objective well enough to do what you wanted and in terms of solving it I think you guys all know but like you can you can learn you know that's the the promise of deep reinforcement learning is that you can well potentially learn both of these things from data and you can do sampling techniques you need a trajectory library you can do a tree based search or you", "start_timestamp": "00:51:57", "end_timestamp": "00:52:36", "start_second": 3117, "end_second": 3156, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=3117s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "can do optimization and you can you can do trajectory optimization and mile predictive control to solve these things and so we we visit we do a combination of all these 
things in a way but what's cool is like we run something on board that runs at 500 times per second that is reasoning about kind of the whole stack all the way up to like obstacle avoidance and cinematic quality all the way down to like what are the dynamic limits on the rotors and the more you can reason about the whole system together rather than in separate little", "start_timestamp": "00:52:36", "end_timestamp": "00:53:11", "start_second": 3156, "end_second": 3191, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=3156s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "layers that have their own boundaries the more you can get out of the robot I think that's kind of universally true the trick is that you have to be a lot more careful it's a more dangerous game to play and again we also use sampling techniques and they have a good role to play within the system then I'll just add a note about simulation because it's awesome and super useful especially for motion planning for computer vision you want like really photorealistic images that you've", "start_timestamp": "00:53:11", "end_timestamp": "00:53:50", "start_second": 3191, "end_second": 3230, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=3191s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "carefully curated for planning you don't care you just need something that like is good enough to see your behaviors and try things out on so this is like you know we literally drive this guy around with a joystick at our computers and he just kind of slides around the world and we just used that to test basic things and then even without images we have these scenes where
basically we generate a random forest and we want to say compare we're going to figure out okay how far away from obstacles should the drone", "start_timestamp": "00:53:50", "end_timestamp": "00:54:22", "start_second": 3230, "end_second": 3262, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=3230s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "stay to not crash into them versus like how often is it going to get stuck in a dense forest if we do that so we tend to run hundreds and thousands of regressions against a ton of just recorded log data right so we record all of our flights all of the images and we can rerun all our vision algorithms and check like those types of things but for planning you can't change what the robot did right it's all kind of off-policy and so simulation is really helpful there and so we can run an", "start_timestamp": "00:54:22", "end_timestamp": "00:54:58", "start_second": 3262, "end_second": 3298, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=3262s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "experiment like I would generate a bunch of different random forests and have the subject run the same path through them and then see how often the drone hit something or got stuck and have a statistical sense of what to do there the problem is always like something bad happens we look at that log and then like okay we can tweak this thing to fix it but then did we just make five other things worse and we can't do a thousand real flights to test that in any sane way so maintaining the sort of infrastructure to handle all", "start_timestamp": "00:54:58", "end_timestamp":
"00:55:28", "start_second": 3298, "end_second": 3328, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=3298s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "those things is really critical yeah yeah yeah I mentioned I would fly in here but I don't think I'm allowed to yeah that's a good question we identified the dynamics model which is somewhere between them but we definitely model the dynamics of this drone it's not like a model free black box and the reason because it's definitely model vulnerable so it is you know the propellers there are aerodynamic effects that we understand quite well there's the inertia's of the drone and properties of it and so we we model", "start_timestamp": "00:55:28", "end_timestamp": "00:56:17", "start_second": 3328, "end_second": 3377, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=3328s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "everything that we can and we try to fit parameters to it based on flight data and then I think that's that's pretty different from other types of robots that are more like especially when you're dealing with contacts and more complicated things but this is not that many degrees of freedom there I think the complicated part comes from aerodynamics so once you're once you're flying fast and you're in turbulent flow and you get these types of like pitching moments and the accelerations your rotors play into things so that there is", "start_timestamp": "00:56:17", "end_timestamp": "00:56:49", "start_second": 3377, "end_second": 3409, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=3377s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": 
"https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "a lot of complexity there but it's also complexity that has been pretty well modeled in aerospace at least in terms of the structure of something that you can learn so yeah yeah uh-huh we don't I mean you can't you can't ensure anything because well it yeah I guess it's a complicated question I does not exactly sometimes people ask like how do you guarantee that it's going to be obstacle free for example and and the answer to that is we can't because of perception and so nothing is ever guaranteed but we we do our best so in terms of like if", "start_timestamp": "00:56:49", "end_timestamp": "00:57:40", "start_second": 3409, "end_second": 3460, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=3409s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "you have a nominee MVC how do you make sure that it converges well it depends a lot on how you're solving it if you there are good ways to know if you've made it worse than your initial guess for example so if you if you do that you can try to fall back to something else or you just like try to work towards robustness so have have ways of just managing what you're doing such that you at least have some path that moves you forward and is not catastrophic but the best way to I think the best way to do that is just to", "start_timestamp": "00:57:40", "end_timestamp": "00:58:16", "start_second": 3460, "end_second": 3496, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=3460s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "try to carefully design your problem there constraints in the setup of what you have to to have you know stability and 
convergence and there's a ton of work that could be put into that that is tricky but there's no fallback that's a good fallback because let's say a fallback is like you know go constant velocity well you might just run into something or like yeah but you could for example say oh I'm gonna stop caring about any of my other high-level stuff and I just want to like", "start_timestamp": "00:58:16", "end_timestamp": "00:58:51", "start_second": 3496, "end_second": 3531, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=3496s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "come to a stop with the obstacle avoidance if it's a really bad scenario yeah okay just another video of simulation and the kind of cool thing about simulation so this is actually from an SDK that we have it's kind of in a private beta SDK where we let people work on it that we think make a lot of sense to kind of develop on our drone but the cool thing is to share that simulation between like our purposes and actually being able to turn the drone into something that you can program so at a", "start_timestamp": "00:58:51", "end_timestamp": "00:59:29", "start_second": 3531, "end_second": 3569, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=3531s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "high level like make it move around make it do things but not have to care about obstacle avoidance and again the low-level stuff it's a hard problem because controlling a drone like you have to understand a lot of the internals of the system to do it so we're kind of always thinking about what is the way to try
to design an API such that it's intuitive for people to understand especially people that aren't experienced with drones or even robotics to be able to control things and get what they want and sort of understand why it", "start_timestamp": "00:59:29", "end_timestamp": "01:00:03", "start_second": 3569, "end_second": 3603, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=3569s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "might not be able to do something and why it might go do something okay cool let me show a couple of other cool things so we estimate the wind online which is really useful to have as a model to do more effective planning and aerodynamic control of the drone this is something neat it's a lot less relevant for this drone but the first drone had perimeter guards and the reason it had perimeter guards is to have cameras that can see like on the outside that's why we very carefully designed the geometry of this drone to try to get", "start_timestamp": "01:00:03", "end_timestamp": "01:00:38", "start_second": 3603, "end_second": 3638, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=3603s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "rid of the perimeter guards because they're so expensive and bulky and bad for performance while still having visibility but the cool thing was it could bump into things and not have the propellers touch it and in this case like we fly into this glass intentionally and we actually detect that collision and we add that obstacle into our map there so we won't do it again which is a pretty neat thing to do and then yeah we look a lot at sort of recovery from dynamic maneuvers so we call this inverted pizza toss
where you", "start_timestamp": "01:00:38", "end_timestamp": "01:01:12", "start_second": 3638, "end_second": 3672, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=3638s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "your initial condition is like upside down and then spinning quickly and then the drone kind of has to come down and recover this is another fun one where there's prior work on this as like its own controller but something neat that we found was if you actually basically just with parameter changes to our planning system if we shut off one rotor we find this kind of mode of flight where it's still semi stable where you're not controlling yaw anymore but you're able to sort of hover and come down to Al and this is all in simulation", "start_timestamp": "01:01:12", "end_timestamp": "01:01:47", "start_second": 3672, "end_second": 3707, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=3672s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "these so far because the question of state estimation in this in this scenarios is a lot harder but that's pretty cool that like by just spinning really fast you're able to stay hovering with three rotors and in fact what we did was sure that we could again in simulation actually follow somebody and avoid obstacles while just having three propellers and spinning like a like a madman so I don't think anyone's shown that before okay these are slides from cvpr it's a vision conference so I'm gonna skip those I wouldn't say that in fact I already", "start_timestamp": "01:01:47", "end_timestamp": "01:02:29", "start_second": 3707, "end_second": 3749, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=3707s", "title": "Lecture 
23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "talked about yeah don't work on deep learning benchmarks and try to improve them by one percent that's a sucky thing to do work on hard things and real robots and that's I think what this class is all about so that's awesome okay and I'm just gonna show one last video which is one of my favorites just of our drones following Jack here as he goes down a waterslide just in every direction and it's just a fun thing to watch all right cool yeah so we get a lot of sweet videos and again so our goal is to like bring these drones to doing useful", "start_timestamp": "01:02:29", "end_timestamp": "01:03:28", "start_second": 3749, "end_second": 3808, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=3749s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "Yizyv8MpYfg", "text": "things not just for video capture for you know content for consumers and action sports but also for inspection and mapping and construction and all these places where you need a trustworthy reliable robot that can move around anywhere and sense things basically all right that's it yeah thank you cool and then yeah if any of you guys are like looking at internships or I know a lot of you are graduate students so I'd be happy to hear from basically anybody in this class", "start_timestamp": "01:03:28", "end_timestamp": "01:04:10", "start_second": 3808, "end_second": 3850, "url": "https://www.youtube.com/watch?v=Yizyv8MpYfg&t=3808s", "title": "Lecture 23 Guest Lecture: Drones -- CS287-FA19 Advanced Robotics at UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/Yizyv8MpYfg/hqdefault.jpg"} {"video_id": "MpdbFLXOOIw", "text": "hi there today we're
looking at supervised contrastive learning by people from Google Research and MIT now this paper proposes a new loss for supervised learning and you might recognize that this is a big claim so forever now we basically use this cross-entropy loss in order to do supervised training of neural networks this paper proposes to replace that with the supervised contrastive loss and let's jump straight into the results here they say our supervised contrastive loss outperforms the cross-entropy loss", "start_timestamp": "00:00:00", "end_timestamp": "00:00:41", "start_second": 0, "end_second": 41, "url": "https://www.youtube.com/watch?v=MpdbFLXOOIw&t=0s", "title": "Supervised Contrastive Learning", "thumbnail": "https://i.ytimg.com/vi/MpdbFLXOOIw/hqdefault.jpg"} {"video_id": "MpdbFLXOOIw", "text": "with standard data augmentations such as AutoAugment and RandAugment so these are some of the previous state-of-the-art data augmentation techniques used together with the cross-entropy loss and they say their supervised contrastive loss outperforms them you can see here on ImageNet which is the biggest vision benchmark or the most famous one this new loss the supervised contrastive loss outperforms these other methods by something like a percent one percent is a big improvement on ImageNet right now so they claim it
about this pretty quickly some and you don't think they're they're dishonest or lying or anything here but it is it is sort of if you start reading you like what this", "start_timestamp": "00:01:21", "end_timestamp": "00:01:58", "start_second": 81, "end_second": 118, "url": "https://www.youtube.com/watch?v=MpdbFLXOOIw&t=81s", "title": "Supervised Contrastive Learning", "thumbnail": "https://i.ytimg.com/vi/MpdbFLXOOIw/hqdefault.jpg"} {"video_id": "MpdbFLXOOIw", "text": "is a new loss it is not it is a new way of pre training the network for a classification task and so let's look into this so if you look at what does what does mean to build a classifier in this is what you usually do this is supervised cross-entropy training you have an image and the image here is of a dog you put it through your network and you obtain a representation so the representation here R is this last layer or the second-to-last layer and you put that through a classification layer and then a soft Max and what you get as an output", "start_timestamp": "00:01:58", "end_timestamp": "00:02:40", "start_second": 118, "end_second": 160, "url": "https://www.youtube.com/watch?v=MpdbFLXOOIw&t=118s", "title": "Supervised Contrastive Learning", "thumbnail": "https://i.ytimg.com/vi/MpdbFLXOOIw/hqdefault.jpg"} {"video_id": "MpdbFLXOOIw", "text": "is basically a probability distribution and let's say you have three classes here there's dog there's cat and there's horse and let's say the network doesn't yet isn't yet trained very well so the probability for dog here is fairly low so this is basically what the network thinks of that image like which class does it belong to with what probability I also have this label right here so the labeled dog for that image what you do with that is you do a one hot vector so that would look like this so the one is at the position where the correct class", "start_timestamp": "00:02:40", "end_timestamp": "00:03:21", "start_second": 160, "end_second": 201, "url": 
"https://www.youtube.com/watch?v=MpdbFLXOOIw&t=160s", "title": "Supervised Contrastive Learning", "thumbnail": "https://i.ytimg.com/vi/MpdbFLXOOIw/hqdefault.jpg"} {"video_id": "MpdbFLXOOIw", "text": "is and then the cross-entropy loss takes all of this and does the following there's a sum over all your classes in this case you have three classes and let's call these the labels l and you want to always take the label of the class times the log probability that the network thinks belongs to this class so you can quickly see that this if the label is 0 so for all the incorrect classes that means this entire term drops away and only if the label is 1 so only the correct class that will result in the log probability of the class", "start_timestamp": "00:03:21", "end_timestamp": "00:04:07", "start_second": 201, "end_second": 247, "url": "https://www.youtube.com/watch?v=MpdbFLXOOIw&t=201s", "title": "Supervised Contrastive Learning", "thumbnail": "https://i.ytimg.com/vi/MpdbFLXOOIw/hqdefault.jpg"} {"video_id": "MpdbFLXOOIw", "text": "where the label is the correct label right so in order to make this a loss you actually have to put a negative sign in front of here because you want to this so this entire thing reduces to the log probability of the correct class this is what you want to max semi's there for you if you want to minimize something you need so you minimize the negative log probability of the correct class which means you maximize the probability a a if you've never looked at the cross entropy loss like this it is important to notice that", "start_timestamp": "00:04:07", "end_timestamp": "00:04:47", "start_second": 247, "end_second": 287, "url": "https://www.youtube.com/watch?v=MpdbFLXOOIw&t=247s", "title": "Supervised Contrastive Learning", "thumbnail": "https://i.ytimg.com/vi/MpdbFLXOOIw/hqdefault.jpg"} {"video_id": "MpdbFLXOOIw", "text": "you're gonna say hey all this does is pull this here up right and it doesn't do anything to the other ones but you have to 
realize that this softmax operation since this is a probability distribution all of this is normalized to sum up to one so implicitly you will push these down through the normalization right so what this does is it pushes the correct class up and it pushes the other classes down so this way to look at it is going to be important later because if you look at what this representation here does so again you", "start_timestamp": "00:04:47", "end_timestamp": "00:05:25", "start_second": 287, "end_second": 325, "url": "https://www.youtube.com/watch?v=MpdbFLXOOIw&t=287s", "title": "Supervised Contrastive Learning", "thumbnail": "https://i.ytimg.com/vi/MpdbFLXOOIw/hqdefault.jpg"} {"video_id": "MpdbFLXOOIw", "text": "have the network produce a representation here this is 2,000-dimensional and then it adds on top this classification layer this classification layer is simply a linear layer and then a softmax on top so how you have to imagine this is that there is a representation space this 2,000-dimensional space and the representations are made in such a way that sorry let's have three classes here the representations are made in such a way that a linear classifier can separate them correctly right so here this would be like a boundary and then this would be another boundary and this maybe would be another decision boundary so you can see that the linear classifier can separate the classes well that is the goal if you use this softmax cross-entropy loss that is implicitly what will happen in the representation space all it cares about is that the classes are on one side of the decision boundary and everything else is on the other side of a
Learning", "thumbnail": "https://i.ytimg.com/vi/MpdbFLXOOIw/hqdefault.jpg"} {"video_id": "MpdbFLXOOIw", "text": "pre training stage and a training stage so in the pre training stage this is over here su provides contrastive in the pre training stage it simply tries to learn these representations right like over like down here such that without the decision boundaries class think images of the same class are close together and images of different classes are far apart which notice the the subtle difference right to the cross-entropy loss where you just care about them being on one or the other side of a decision boundary and in stage this so", "start_timestamp": "00:08:09", "end_timestamp": "00:08:49", "start_second": 489, "end_second": 529, "url": "https://www.youtube.com/watch?v=MpdbFLXOOIw&t=489s", "title": "Supervised Contrastive Learning", "thumbnail": "https://i.ytimg.com/vi/MpdbFLXOOIw/hqdefault.jpg"} {"video_id": "MpdbFLXOOIw", "text": "this stage one and then in stage two and there is where where it comes in you basically freeze the network so you freeze these weights down here these are frozen you don't train them anymore all you train is this one classification layer so the represent actually freeze also the representation layer here you only train the classifier on top in stage two but you train it using soft Max and using the cross-entropy loss so you you train the classifier in the old cross entropy way using just normal supervised learning the difference here", "start_timestamp": "00:08:49", "end_timestamp": "00:09:32", "start_second": 529, "end_second": 572, "url": "https://www.youtube.com/watch?v=MpdbFLXOOIw&t=529s", "title": "Supervised Contrastive Learning", "thumbnail": "https://i.ytimg.com/vi/MpdbFLXOOIw/hqdefault.jpg"} {"video_id": "MpdbFLXOOIw", "text": "is that the stage one free training is is what's training the network and the cross entropy dose only trains the classifier right so let's look at how this pre training actually worked what is 
using what it's using is a method called contrastive pre-training now in contrastive pre training and they have a little diagram up here what this does is if you look at the classic way of doing contrastive pre train you have to go to the unsupervised pre-training literature people have kind of discovered that they can improve a neural network by", "start_timestamp": "00:09:32", "end_timestamp": "00:10:12", "start_second": 572, "end_second": 612, "url": "https://www.youtube.com/watch?v=MpdbFLXOOIw&t=572s", "title": "Supervised Contrastive Learning", "thumbnail": "https://i.ytimg.com/vi/MpdbFLXOOIw/hqdefault.jpg"} {"video_id": "MpdbFLXOOIw", "text": "pre-training it first in an unsupervised way and this is also called some of these methods are called self supervise so the advantage here of self supervised or unsupervised pre training is that you don't need labels what you want to do is simply to make the representation space somewhat meaningful right so you simply want the network to learn representations of images that are somehow meaningful right that are there and here's how you do it so you want to take an image like this dog here and then you want to randomly augment this", "start_timestamp": "00:10:12", "end_timestamp": "00:10:56", "start_second": 612, "end_second": 656, "url": "https://www.youtube.com/watch?v=MpdbFLXOOIw&t=612s", "title": "Supervised Contrastive Learning", "thumbnail": "https://i.ytimg.com/vi/MpdbFLXOOIw/hqdefault.jpg"} {"video_id": "MpdbFLXOOIw", "text": "image which just means you want to produce different versions of the same image in this case down here this is a random crop it's cropped about here it's still the same image but it's a different version of it in the case here you can see that it's flipped left right and the brightness is slightly increased so these are just different versions of the same image and what you also want are what's called negatives negatives are simply different images from your data set right for example this 
or this or this you don't", "start_timestamp": "00:10:56", "end_timestamp": "00:11:30", "start_second": 656, "end_second": 690, "url": "https://www.youtube.com/watch?v=MpdbFLXOOIw&t=656s", "title": "Supervised Contrastive Learning", "thumbnail": "https://i.ytimg.com/vi/MpdbFLXOOIw/hqdefault.jpg"} {"video_id": "MpdbFLXOOIw", "text": "care as long as they're different right you just sample a bunch and what you want so here is your embedding space and they make a big deal here that they are normalized and that seems to work better but this is not necessary for the idea to work the big idea here is that if you have an image right here let's say this is the dog and the blue dots here are the augmented versions of the same dog and the green dots are all the other images in the data set what you want is that all the images that come from the original", "start_timestamp": "00:11:30", "end_timestamp": "00:12:13", "start_second": 690, "end_second": 733, "url": "https://www.youtube.com/watch?v=MpdbFLXOOIw&t=690s", "title": "Supervised Contrastive Learning", "thumbnail": "https://i.ytimg.com/vi/MpdbFLXOOIw/hqdefault.jpg"} {"video_id": "MpdbFLXOOIw", "text": "same image are pulled close together and everything else is pushed apart right so that's why these are called positives and these are called negatives so the contrastive training basically means that you always want to have a set that you pull together in representation space and a set called the negatives that you push apart so the network basically learns about these random transformations that you have here the network kind of learns what it means to come from the same image it learns to be robust to these kinds of transformations", "start_timestamp": "00:12:13", "end_timestamp": "00:12:51", "start_second": 733, "end_second": 771, "url": "https://www.youtube.com/watch?v=MpdbFLXOOIw&t=733s", "title": "Supervised Contrastive Learning", "thumbnail": "https://i.ytimg.com/vi/MpdbFLXOOIw/hqdefault.jpg"}
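The pull-together/push-apart objective the transcript describes can be sketched as an InfoNCE-style contrastive loss. This is a minimal illustrative sketch, not the paper's exact formulation: the function name, the temperature value, and the single-anchor setup are assumptions for clarity; embeddings are L2-normalized first, as the transcript notes, so inner products act as cosine similarities.

```python
import numpy as np

def contrastive_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss for one anchor (illustrative sketch).

    `anchor` and `positive` are embedding vectors; `negatives` is an
    (N, d) array of embeddings of other images. Minimizing this loss
    pulls the positive toward the anchor and pushes negatives away.
    """
    def norm(v):
        # L2-normalize so inner products become cosine similarities
        return v / np.linalg.norm(v, axis=-1, keepdims=True)

    a, p, n = norm(anchor), norm(positive), norm(negatives)
    # similarity of the anchor to the positive and to every negative
    logits = np.concatenate([[a @ p], n @ a]) / temperature
    # softmax cross-entropy with the positive as the "correct" entry
    logits -= logits.max()  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])
```

In practice the loss is averaged over every anchor in a batch, with each anchor's augmented views as positives and the rest of the batch as negatives; the temperature controls how sharply hard negatives are penalized.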
{"video_id": "MpdbFLXOOIw", "text": "it learns about the data in general and how to kind of spread the data in embedding space with these transformations so this usually ends up in a pretty good representation space and people have been using this in recent years in order to gain significant improvements now the problem here if you specifically do this to pre-train a classifier is the thing they show on the right so on the left here you have a picture of a dog right but if you just do this self-supervised you do it without the labels so it can happen", "start_timestamp": "00:12:51", "end_timestamp": "00:13:30", "start_second": 771, "end_second": 810, "url": "https://www.youtube.com/watch?v=MpdbFLXOOIw&t=771s", "title": "Supervised Contrastive Learning", "thumbnail": "https://i.ytimg.com/vi/MpdbFLXOOIw/hqdefault.jpg"} {"video_id": "MpdbFLXOOIw", "text": "that this image here shows up in the negatives but it is also of a dog right and now this image here is going to end up maybe being this image here and you see what happens to it it's a green one so it's gonna get pushed apart and this is going to make the entire task for the later classifier much harder because if they are pushed apart from each other how is a linear classifier going to have them on the same side of the decision boundary while having everything else on a different side right so the task", "start_timestamp": "00:13:30", "end_timestamp": "00:14:06", "start_second": 810, "end_second": 846, "url": "https://www.youtube.com/watch?v=MpdbFLXOOIw&t=810s", "title": "Supervised Contrastive Learning", "thumbnail": "https://i.ytimg.com/vi/MpdbFLXOOIw/hqdefault.jpg"} {"video_id": "MpdbFLXOOIw", "text": "here is implicitly making the task for the later classifier harder by pushing apart samples that should be of the same class and this is not happening if you introduce labels to the pre-training objective that's what they do with the supervised contrastive objective now again what you want to do is
here we're going to draw the same embedding space and we're going to draw this original dog image and we're going to draw the augmented version of the original dog image but now we also have the following we also have these images", "start_timestamp": "00:14:06", "end_timestamp": "00:14:46", "start_second": 846, "end_second": 886, "url": "https://www.youtube.com/watch?v=MpdbFLXOOIw&t=846s", "title": "Supervised Contrastive Learning", "thumbnail": "https://i.ytimg.com/vi/MpdbFLXOOIw/hqdefault.jpg"} {"video_id": "MpdbFLXOOIw", "text": "which are images of the same class so we're going to put them in black here and let's say the augmented versions around them in smaller black dots the augmented versions of those right you can augment them as well and then you have the negative samples and the negative samples are not just any images but just images of different classes so you just go over your mini batch and everything that's of the same class becomes positives including their augmentations and everything that is not in the same class becomes negatives and", "start_timestamp": "00:14:46", "end_timestamp": "00:15:21", "start_second": 886, "end_second": 921, "url": "https://www.youtube.com/watch?v=MpdbFLXOOIw&t=886s", "title": "Supervised Contrastive Learning", "thumbnail": "https://i.ytimg.com/vi/MpdbFLXOOIw/hqdefault.jpg"} {"video_id": "MpdbFLXOOIw", "text": "also you can augment them as well so now we have a bunch of things in our embedding space and our objective is simply going to be again we want to push away all the images that are not of the same class as our original as our red original image which is called the anchor so all of this needs to be pushed away but now we want to pull together all the augmented versions of the original image but also we want to pull together all of the other images of the same class including also their augmented versions so all of this is", "start_timestamp": "00:15:21", "end_timestamp": "00:15:59", "start_second": 
921, "end_second": 959, "url": "https://www.youtube.com/watch?v=MpdbFLXOOIw&t=921s", "title": "Supervised Contrastive Learning", "thumbnail": "https://i.ytimg.com/vi/MpdbFLXOOIw/hqdefault.jpg"} {"video_id": "MpdbFLXOOIw", "text": "going to be pulled together so not only does the network learn about these augmentations which again for this idea the augmentations aren't even necessary the network learns a representation space where images of the same class are close together which again is going to make the task of later linear classifiers that need to separate this class from other classes very very easy and again the other images aren't just going to be pushed away but if they're from the same class let's say this and this image are from", "start_timestamp": "00:15:59", "end_timestamp": "00:16:30", "start_second": 959, "end_second": 990, "url": "https://www.youtube.com/watch?v=MpdbFLXOOIw&t=959s", "title": "Supervised Contrastive Learning", "thumbnail": "https://i.ytimg.com/vi/MpdbFLXOOIw/hqdefault.jpg"} {"video_id": "MpdbFLXOOIw", "text": "the same class all of those are going to be pushed apart from the red dot but by themselves being pushed together to their own cluster here of their own class I hope this makes sense and I hope the difference to the cross-entropy objective is sort of clear the cross-entropy objective simply from the beginning just cares about which side of the decision boundary you are on while this pre-training objective first cares to put things close together that are in the same class and then the classifier will have a much easier time", "start_timestamp": "00:16:30", "end_timestamp": "00:17:10", "start_second": 990, "end_second": 1030, "url": "https://www.youtube.com/watch?v=MpdbFLXOOIw&t=990s", "title": "Supervised Contrastive Learning", "thumbnail": "https://i.ytimg.com/vi/MpdbFLXOOIw/hqdefault.jpg"} {"video_id": "MpdbFLXOOIw", "text": "now why this works better it's not entirely clear from the
beginning why this should work better because it's working with the same information it's just that people have generally found that these contrastive pre-training objectives are just somewhat better at exploiting the information in the data set than if you just hammer on it with the cross-entropy loss from the beginning but it is not fully explained yet why this works better be", "start_timestamp": "00:17:10", "end_timestamp": "00:17:48", "start_second": 1030, "end_second": 1068, "url": "https://www.youtube.com/watch?v=MpdbFLXOOIw&t=1030s", "title": "Supervised Contrastive Learning", "thumbnail": "https://i.ytimg.com/vi/MpdbFLXOOIw/hqdefault.jpg"} {"video_id": "MpdbFLXOOIw", "text": "as it's working with the same data again the difference here is that the previous methods of contrastive pre-training the self-supervised ones they did not have access to the labels and the advantage of that is you can have a giant database of unlabeled additional data that you do the pre-training on whereas here we do the pre-training including the labels so here the label dog is an intrinsic part because we need to know which of these samples we need to pull together but that also means we cannot leverage the", "start_timestamp": "00:17:48", "end_timestamp": "00:18:25", "start_second": 1068, "end_second": 1105, "url": "https://www.youtube.com/watch?v=MpdbFLXOOIw&t=1068s", "title": "Supervised Contrastive Learning", "thumbnail": "https://i.ytimg.com/vi/MpdbFLXOOIw/hqdefault.jpg"} {"video_id": "MpdbFLXOOIw", "text": "fact maybe that we have more unlabeled data and unlabeled data is pretty cheap to obtain so those are the advantages and disadvantages here so this new loss they do compare here and usually in these contrastive objectives you have something like two encoders one to encode the anchor and one to encode the augmented versions and this one is like a momentum encoder with shared weights and so on all of this
isn't really important if you want to look into that look into papers like momentum contrast or I did one on curl for", "start_timestamp": "00:18:25", "end_timestamp": "00:19:04", "start_second": 1105, "end_second": 1144, "url": "https://www.youtube.com/watch?v=MpdbFLXOOIw&t=1105s", "title": "Supervised Contrastive Learning", "thumbnail": "https://i.ytimg.com/vi/MpdbFLXOOIw/hqdefault.jpg"} {"video_id": "MpdbFLXOOIw", "text": "reinforcement learning I think the general gist of it is clear so they compare the formulation of their loss to the self-supervised one usually it takes the form of things like this so the z_i is the anchor here and then the z_j would be the positive example and you see here that the inner product between the anchor and the positive example sorry about that the inner product should be high because here the loss is the negative of whatever is here so if you minimize the loss you say I want the inner product", "start_timestamp": "00:19:04", "end_timestamp": "00:19:50", "start_second": 1144, "end_second": 1190, "url": "https://www.youtube.com/watch?v=MpdbFLXOOIw&t=1144s", "title": "Supervised Contrastive Learning", "thumbnail": "https://i.ytimg.com/vi/MpdbFLXOOIw/hqdefault.jpg"} {"video_id": "MpdbFLXOOIw", "text": "between my anchor and whatever is the positive sample to be high and for everything else here which includes the thing on the top but it also includes everything else I want the inner product to be low which is exactly the thing where you pull together the positives and you push apart everything else that is the standard objective that you had before they extend this but it looks almost the same so compared to the unsupervised objective now first of all they extend this such that you can have more than", "start_timestamp": "00:19:50", "end_timestamp": "00:20:30", "start_second": 1190, "end_second": 1230, "url": "https://www.youtube.com/watch?v=MpdbFLXOOIw&t=1190s", "title": "Supervised Contrastive Learning", 
"thumbnail": "https://i.ytimg.com/vi/MpdbFLXOOIw/hqdefault.jpg"} {"video_id": "MpdbFLXOOIw", "text": "one positive sample now this is also possible in the unsupervised way so they just augment it by this and they also now this is the crucial part they include the labels into the pre-training objective so they say everywhere where i and j have the same label the inner product should be maximized so they should be pulled together while everything else is being pushed apart yes so they say we generalize to an arbitrary number of positives and they also say contrastive power increases with more negatives I think that's just a finding that they", "start_timestamp": "00:20:30", "end_timestamp": "00:21:16", "start_second": 1230, "end_second": 1276, "url": "https://www.youtube.com/watch?v=MpdbFLXOOIw&t=1230s", "title": "Supervised Contrastive Learning", "thumbnail": "https://i.ytimg.com/vi/MpdbFLXOOIw/hqdefault.jpg"} {"video_id": "MpdbFLXOOIw", "text": "have that when they add more negatives so when they increase the batch size the contrastive power increases they do analyze their gradient which I find pretty neat you can already see that if you formulate a loss of course the gradient is going to go in the negative direction but they make it clear that if you look at the gradient for the positive cases what appears is this 1 - p_ij quantity and the p_ij quantity is exactly the inner product between i and j normalized of course so the gradient is going to point into", "start_timestamp": "00:21:16", "end_timestamp": "00:21:57", "start_second": 1276, "end_second": 1317, "url": "https://www.youtube.com/watch?v=MpdbFLXOOIw&t=1276s", "title": "Supervised Contrastive Learning", "thumbnail": "https://i.ytimg.com/vi/MpdbFLXOOIw/hqdefault.jpg"} {"video_id": "MpdbFLXOOIw", "text": "the negative direction of that for the positives which means you're gonna pull them together and it's going to push into this direction for the negative classes which means you
push them apart and they also analyze what happens in relation to hardness so they say if you just look at the positive samples there are two kinds there are easy positives where the network has already learned to match them closely where the inner product is almost one if you look at them that means the p_ij quantity is large", "start_timestamp": "00:21:57", "end_timestamp": "00:22:38", "start_second": 1317, "end_second": 1358, "url": "https://www.youtube.com/watch?v=MpdbFLXOOIw&t=1317s", "title": "Supervised Contrastive Learning", "thumbnail": "https://i.ytimg.com/vi/MpdbFLXOOIw/hqdefault.jpg"} {"video_id": "MpdbFLXOOIw", "text": "right because that is basically the inner product and you look at this term this term is exactly what we saw in the gradient then you see that this here since this is one this entire thing is zero this is also high this is close to one so this entire thing is zero this is almost zero but if you have a hard positive where the network hasn't learned yet to align the inner product properly or align the representation properly then the angle between the things again these are normalized is such that they are approximately", "start_timestamp": "00:22:38", "end_timestamp": "00:23:18", "start_second": 1358, "end_second": 1398, "url": "https://www.youtube.com/watch?v=MpdbFLXOOIw&t=1358s", "title": "Supervised Contrastive Learning", "thumbnail": "https://i.ytimg.com/vi/MpdbFLXOOIw/hqdefault.jpg"} {"video_id": "MpdbFLXOOIw", "text": "orthogonal so the gradient magnitude is going to be this here is going to be approximately 0 so this is close to 1 and this here since this is also 0 is also close to 1 so this is going to be larger than 0 which means that their loss focuses on the examples that the network cannot yet represent well according to their objective which makes sense right first of all but second of all that is exactly the same thing as in the cross-entropy loss if you look at the cross
entropy loss and you have a", "start_timestamp": "00:23:18", "end_timestamp": "00:24:00", "start_second": 1398, "end_second": 1440, "url": "https://www.youtube.com/watch?v=MpdbFLXOOIw&t=1398s", "title": "Supervised Contrastive Learning", "thumbnail": "https://i.ytimg.com/vi/MpdbFLXOOIw/hqdefault.jpg"} {"video_id": "MpdbFLXOOIw", "text": "situation where the network is really good already for a given sample so it already puts a dog into the dog class then the gradient will not be pulling much for that sample it mainly focuses on where you're still wrong so it is like I appreciate the analysis but it is not a notable difference I think what they want to show is that their loss if you do gradient descent really does what it is supposed to do namely first of all it does this pulling together and pushing apart of inner products for the positive and negative samples and it", "start_timestamp": "00:24:00", "end_timestamp": "00:24:41", "start_second": 1440, "end_second": 1481, "url": "https://www.youtube.com/watch?v=MpdbFLXOOIw&t=1440s", "title": "Supervised Contrastive Learning", "thumbnail": "https://i.ytimg.com/vi/MpdbFLXOOIw/hqdefault.jpg"} {"video_id": "MpdbFLXOOIw", "text": "mainly focuses on samples where you have not yet found a good representation to align them with others it focuses on pairs that are not yet correctly close together or far apart they also connect this to the triplet loss where they can show after some approximation that if their loss only has one positive and one negative sample it is going to be proportional to the triplet loss the triplet loss is basically where you have an image and you find one positive which I think is going to be of the same class right here and you find one", "start_timestamp": "00:24:41", "end_timestamp": "00:25:22", "start_second": 1481, "end_second": 1522, "url": "https://www.youtube.com/watch?v=MpdbFLXOOIw&t=1481s", "title": "Supervised Contrastive Learning", "thumbnail": 
"https://i.ytimg.com/vi/MpdbFLXOOIw/hqdefault.jpg"} {"video_id": "MpdbFLXOOIw", "text": "negative of a different class and you try to push those apart while pulling those together the problem here they say is the problem of hard negative sampling in order for this to make sense you need the negative sample to be what's called a hard negative sample so this is called hard negative mining because you only have one negative sample you better make this something the network can learn from right and if it's too easy the network can't learn anything and thereby you have the problem of hard negative mining where", "start_timestamp": "00:25:22", "end_timestamp": "00:25:59", "start_second": 1522, "end_second": 1559, "url": "https://www.youtube.com/watch?v=MpdbFLXOOIw&t=1522s", "title": "Supervised Contrastive Learning", "thumbnail": "https://i.ytimg.com/vi/MpdbFLXOOIw/hqdefault.jpg"} {"video_id": "MpdbFLXOOIw", "text": "you often have to filter through your mini batch or even through your data set to find a good negative sample to go along with this pair of positive samples but I don't really see how their method differs except that you know it has a bunch of positives and negative samples and except for that which I guess you could also apply to the triplet loss there's not really a difference here again if your method is a contrastive method you do have the problem that if you simply sample at random your negative samples are going to become easier", "start_timestamp": "00:25:59", "end_timestamp": "00:26:35", "start_second": 1559, "end_second": 1595, "url": "https://www.youtube.com/watch?v=MpdbFLXOOIw&t=1559s", "title": "Supervised Contrastive Learning", "thumbnail": "https://i.ytimg.com/vi/MpdbFLXOOIw/hqdefault.jpg"} {"video_id": "MpdbFLXOOIw", "text": "and easier over the course of training and you get the problem that at some point you're gonna have to actively sample hard negatives I think this paper just gets around it by
having huge batch sizes so yeah but again they do get state-of-the-art on ImageNet for these types of networks and augmentation strategies and they do look at how their loss appears to be more hyperparameter stable so if they change out the augmentation if they change the optimizer or the learning rate you can see here that the spread in accuracy is", "start_timestamp": "00:26:35", "end_timestamp": "00:27:16", "start_second": 1595, "end_second": 1636, "url": "https://www.youtube.com/watch?v=MpdbFLXOOIw&t=1595s", "title": "Supervised Contrastive Learning", "thumbnail": "https://i.ytimg.com/vi/MpdbFLXOOIw/hqdefault.jpg"} {"video_id": "MpdbFLXOOIw", "text": "much smaller than for the cross-entropy loss except here but it is hard to compare variances of things that don't have the same means in terms of accuracy so take this on the right here with a grain of salt they also evaluate this on corrupted ImageNet so there's an ImageNet data set that has several levels of corruption of the data set and you can see your accuracy goes down but the accuracy for the cross-entropy loss goes down faster than for the supervised contrastive loss you see they start", "start_timestamp": "00:27:16", "end_timestamp": "00:27:57", "start_second": 1636, "end_second": 1677, "url": "https://www.youtube.com/watch?v=MpdbFLXOOIw&t=1636s", "title": "Supervised Contrastive Learning", "thumbnail": "https://i.ytimg.com/vi/MpdbFLXOOIw/hqdefault.jpg"} {"video_id": "MpdbFLXOOIw", "text": "together like this and they go further apart now it is not clear to me whether that's just an effect like if you just trained a supervised contrastive loss also to this level whether it would fall off at the same speed or whether because it is the supervised contrastive loss it would kind of match that curve it's not clear whether that's really an effect of the difference of the losses or just an effect of the fact that they aren't at the same accuracy to begin with again this kind of shifting you
can't really", "start_timestamp": "00:27:57", "end_timestamp": "00:28:32", "start_second": 1677, "end_second": 1712, "url": "https://www.youtube.com/watch?v=MpdbFLXOOIw&t=1677s", "title": "Supervised Contrastive Learning", "thumbnail": "https://i.ytimg.com/vi/MpdbFLXOOIw/hqdefault.jpg"} {"video_id": "MpdbFLXOOIw", "text": "compare things that have different means in the first place but it is an interesting finding that their method is more stable to these corruptions I just want to point out at the end their training details and just highlight they train for up to seven hundred epochs during the pre-training stage which is I think standard but a lot and they trained models with batch sizes up to 8192 so you need like a super TPU cluster to run these kinds of things and I am never exactly trusting of numbers like this even though it's", "start_timestamp": "00:28:32", "end_timestamp": "00:29:13", "start_second": 1712, "end_second": 1753, "url": "https://www.youtube.com/watch?v=MpdbFLXOOIw&t=1712s", "title": "Supervised Contrastive Learning", "thumbnail": "https://i.ytimg.com/vi/MpdbFLXOOIw/hqdefault.jpg"} {"video_id": "MpdbFLXOOIw", "text": "it's kind of a good improvement it is still like a 1% improvement and in these small numbers I just feel there might be a big effect from things like batch sizes and how much compute you put into it and what else you're doing there might be so much influence of that that I first want to see this replicated multiple times across the entire field before I'm going to really trust that this is a good thing to do alright so I hope you like this if you're still here", "start_timestamp": "00:29:13", "end_timestamp": "00:29:58", "start_second": 1753, "end_second": 1798, "url": "https://www.youtube.com/watch?v=MpdbFLXOOIw&t=1753s", "title": "Supervised Contrastive Learning", "thumbnail": "https://i.ytimg.com/vi/MpdbFLXOOIw/hqdefault.jpg"} {"video_id": "a-VQfQqIMrE", 
"text": "hi there today we'll look at mixup: Beyond Empirical Risk Minimization by Hongyi Zhang, Moustapha Cissé, Yann Dauphin and David Lopez-Paz so this paper is actually pretty simple but it introduces a technique that apparently helps with training classifiers and I have seen it used in practice so there must be at least something to it it is ultimately very simple so usually you input a data point X into your neural network in deep learning so f of X that's your neural network your neural network has parameters", "start_timestamp": "00:00:00", "end_timestamp": "00:00:45", "start_second": 0, "end_second": 45, "url": "https://www.youtube.com/watch?v=a-VQfQqIMrE&t=0s", "title": "mixup: Beyond Empirical Risk Minimization (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a-VQfQqIMrE/maxresdefault.jpg"} {"video_id": "a-VQfQqIMrE", "text": "theta you get some output Y hat and along with the X you also have a Y the true label and then you have a loss function that compares what you output with your true label and then you just try to make that loss smaller you want to adjust your parameters so next time you see data point X its output will be a little closer to the true label Y and we call this empirical risk minimization because what you think is that your X comes from some distribution from some data distribution D like the space of", "start_timestamp": "00:00:45", "end_timestamp": "00:01:31", "start_second": 45, "end_second": 91, "url": "https://www.youtube.com/watch?v=a-VQfQqIMrE&t=45s", "title": "mixup: Beyond Empirical Risk Minimization (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a-VQfQqIMrE/maxresdefault.jpg"} {"video_id": "a-VQfQqIMrE", "text": "all natural images or the space of all of language but what you actually have is a data set of a finite amount of data that you can sample x and y from and so instead of minimizing your true risk you minimize your
empirical risk the empirical risk minimization right here now what's the problem with that the problem is that you can get overly confident about your data points and nothing else and that will hurt your generalization so if you have a data point let's say right here and another", "start_timestamp": "00:01:31", "end_timestamp": "00:02:10", "start_second": 91, "end_second": 130, "url": "https://www.youtube.com/watch?v=a-VQfQqIMrE&t=91s", "title": "mixup: Beyond Empirical Risk Minimization (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a-VQfQqIMrE/maxresdefault.jpg"} {"video_id": "a-VQfQqIMrE", "text": "one right here so this is class 1 this is class 2 your network is going to maybe make decision boundaries like this and like this where it says ok here is class 1 and here is class 2 but it could you know it's very conceivable that here it says ah here is class 4 and over here is class 7 and right here is class 9 and by the way here class 4 again so the empirical risk minimization leaves everything in between the data points open now what this paper proposes is that we should", "start_timestamp": "00:02:10", "end_timestamp": "00:02:55", "start_second": 130, "end_second": 175, "url": "https://www.youtube.com/watch?v=a-VQfQqIMrE&t=130s", "title": "mixup: Beyond Empirical Risk Minimization (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a-VQfQqIMrE/maxresdefault.jpg"} {"video_id": "a-VQfQqIMrE", "text": "not only train our classifier on these data points but on all the data points sort of in between the two and this is the mixup of data points so this data point here might be constructed if this is A and this is B from 0.1 times B plus 0.9 times A because it's mostly A and it's a little bit B and now you think what are the labels here if A belongs to class one and B belongs to class two then of course the label of this data point is 0.1 times the class of B which is
2 plus 0.9 times the class of a", "start_timestamp": "00:02:55", "end_timestamp": "00:03:45", "start_second": 175, "end_second": 225, "url": "https://www.youtube.com/watch?v=a-VQfQqIMrE&t=175s", "title": "mixup: Beyond Empirical Risk Minimization (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a-VQfQqIMrE/maxresdefault.jpg"} {"video_id": "a-VQfQqIMrE", "text": "which is 1 ultimately because what you do is you input a class like class number two if you want to input this into a machine learning model you don't just say it's class number two what you input is a distribution that basically has zeros everywhere so zero zero zero one zero and this here is at class number two so this would be class number one class number two class number three right you input a distribution like this if you want to express class number two now in our sample right here what we", "start_timestamp": "00:03:45", "end_timestamp": "00:04:23", "start_second": 225, "end_second": 263, "url": "https://www.youtube.com/watch?v=a-VQfQqIMrE&t=225s", "title": "mixup: Beyond Empirical Risk Minimization (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a-VQfQqIMrE/maxresdefault.jpg"} {"video_id": "a-VQfQqIMrE", "text": "would input as a label is simply a mix between classes so 0.9 of class 1 0.1 of class 2 and then zero everywhere else so this would be our label for the data point that we construct right here this will be our sorry the top one would be our data point formally you take two data points and you mix them using this lambda mixing factor that'll give you a new data point that's in between the other data points and you take the two corresponding labels and you mix them accordingly as well and that will give you the label for that data point and", "start_timestamp": "00:04:23", "end_timestamp": "00:05:07", "start_second": 263, "end_second": 307, "url": "https://www.youtube.com/watch?v=a-VQfQqIMrE&t=263s", "title": "mixup: Beyond Empirical 
Risk Minimization (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a-VQfQqIMrE/maxresdefault.jpg"} {"video_id": "a-VQfQqIMrE", "text": "now your model will learn to basically smoothly interpolate so you will teach your model the thing on the left here is class number one right that's class number one the thing on the right is class number two this here is half of class one and half of class two so the model basically learns a smooth interpolation where the situation that's here on top is probably not going to happen anymore but what it would do is it would sort of create these iso-lines around class two and then around class one where it's sort of smoothly getting", "start_timestamp": "00:05:07", "end_timestamp": "00:05:47", "start_second": 307, "end_second": 347, "url": "https://www.youtube.com/watch?v=a-VQfQqIMrE&t=307s", "title": "mixup: Beyond Empirical Risk Minimization (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a-VQfQqIMrE/maxresdefault.jpg"} {"video_id": "a-VQfQqIMrE", "text": "less and less sure about the class of the data points but on the way it is always either class 1 or class 2 and they say that can help the generalization performance and it's plausible why right the only thing that's not clear from the beginning is whether this kind of interpolation actually makes sense because this means we sort of linearly interpolate between two images so if we have two images we just take half of one and half of the other and that will not be a natural image it will
things but in any case in practice it actually seems to help probably because interpolations of two images linear interpolations are still much more like something like a natural image then any random noise you could come up with so they say it isn't code right here code is pretty simple simply want to mix the two things and the mixing factor this lambda here comes from a beta distribution and they use a beta I", "start_timestamp": "00:06:17", "end_timestamp": "00:06:56", "start_second": 377, "end_second": 416, "url": "https://www.youtube.com/watch?v=a-VQfQqIMrE&t=377s", "title": "mixup: Beyond Empirical Risk Minimization (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a-VQfQqIMrE/maxresdefault.jpg"} {"video_id": "a-VQfQqIMrE", "text": "believe of 0.4 or something just want to quickly show you this is the red line here so the red line as you can see mostly most of the time they're going to either sample the thing on the very left or the thing on the very right that means the either sample the first or the second data point but some of the time they actually sample something in the middle and it's it's fairly uniform in the middle so it appears like a good distribution to sample from if you want to sample these mixing coefficients and by adjusting the the actual number of", "start_timestamp": "00:06:56", "end_timestamp": "00:07:34", "start_second": 416, "end_second": 454, "url": "https://www.youtube.com/watch?v=a-VQfQqIMrE&t=416s", "title": "mixup: Beyond Empirical Risk Minimization (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a-VQfQqIMrE/maxresdefault.jpg"} {"video_id": "a-VQfQqIMrE", "text": "alpha and beta here you can determine how many times you sample the original data points versus how many times you sample something in the middle okay on this toy data set right here they showcase what mix up can do so in a classic model you have the orange and the green data points and blue is basically where the classifier believes its class one you see 
this very hard border here it's quite a hard border now you only have two classes here and so the hard border is sort of a problem in itself because if you think of for", "start_timestamp": "00:07:34", "end_timestamp": "00:08:12", "start_second": 454, "end_second": 492, "url": "https://www.youtube.com/watch?v=a-VQfQqIMrE&t=454s", "title": "mixup: Beyond Empirical Risk Minimization (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a-VQfQqIMrE/maxresdefault.jpg"} {"video_id": "a-VQfQqIMrE", "text": "example adversarial examples all they have to do is basically get over that one inch and the classifier is already super duper sure it's the orange class right whereas if you use mixup your border is much much more fuzzy it's like yeah it's only really sure here and out here everywhere but in the middle it's sort of like meh I don't know and so that's kind of a more desirable situation and of course this here works particularly well in this linear 2D setting but as we can see the same reasoning applies to sort of higher", "start_timestamp": "00:08:12", "end_timestamp": "00:08:55", "start_second": 492, "end_second": 535, "url": "https://www.youtube.com/watch?v=a-VQfQqIMrE&t=492s", "title": "mixup: Beyond Empirical Risk Minimization (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a-VQfQqIMrE/maxresdefault.jpg"} {"video_id": "a-VQfQqIMrE", "text": "higher layers and higher dimensionality data points right I seem to have lost the ability to zoom oh no that's back okay and that's basically it for this paper this is all they do they propose this method and then they test it they say something interesting here that mixup converges to the classical method as alpha approaches zero so that would push your beta distribution basically in the middle all the way down and you would only sample from the very left or the very right so you can smoothly interpolate between this mixing", "start_timestamp": "00:08:55", "end_timestamp": "00:09:34", 
"start_second": 535, "end_second": 574, "url": "https://www.youtube.com/watch?v=a-VQfQqIMrE&t=535s", "title": "mixup: Beyond Empirical Risk Minimization (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a-VQfQqIMrE/maxresdefault.jpg"} {"video_id": "a-VQfQqIMrE", "text": "and the classic method so their main results are we apply this to classifiers and what I like is since a GAN is also a classifier the discriminator is a classifier they also apply it to GANs and they outperform and stabilize the classic training on GANs they show that it's more robust towards adversarial attacks because it's not so sure about intermediate things and they generally outperform other methods but also they do this nice investigation here where they measure the prediction error of in-between data", "start_timestamp": "00:09:34", "end_timestamp": "00:10:16", "start_second": 574, "end_second": 616, "url": "https://www.youtube.com/watch?v=a-VQfQqIMrE&t=574s", "title": "mixup: Beyond Empirical Risk Minimization (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a-VQfQqIMrE/maxresdefault.jpg"} {"video_id": "a-VQfQqIMrE", "text": "and what it means is they say a prediction is counted as a miss if it does not belong to y_i or y_j so you have a sample right here x_i and a sample right here x_j and you look at what the classifier says in between the two data points so you just interpolate the two data points and just measure what the classifier says and whenever the classifier says either y_i or y_j either a label of those two data points you count it as correct and you only count it as incorrect if it says something else and you can see here if", "start_timestamp": "00:10:16", "end_timestamp": "00:10:53", "start_second": 616, "end_second": 653, "url": "https://www.youtube.com/watch?v=a-VQfQqIMrE&t=616s", "title": "mixup: Beyond Empirical Risk Minimization (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a-VQfQqIMrE/maxresdefault.jpg"} {"video_id": 
"a-VQfQqIMrE", "text": "you train with the classic method ERM these errors happen much more often that's exactly the situation I pointed out at the beginning where in high dimensions it can you know occur that all sorts of decision boundaries sneak in here between the two data points and by interpolating between them during training you reduce that effect a lot now they also say the same thing happens for the norm of the gradients of the model with respect to the input in between training data the norm", "start_timestamp": "00:10:53", "end_timestamp": "00:11:37", "start_second": 653, "end_second": 697, "url": "https://www.youtube.com/watch?v=a-VQfQqIMrE&t=653s", "title": "mixup: Beyond Empirical Risk Minimization (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a-VQfQqIMrE/maxresdefault.jpg"} {"video_id": "a-VQfQqIMrE", "text": "of the gradients in the middle is also much much lower and this investigation I find pretty cool I have to say I have seen mixup in practice so it might be useful I've read a paper I believe it was the Big Transfer paper where they basically say it is useful for example if you have little data and a big model so you can sort of regularize the model and it is also useful to know that they did test this with dropout so they compared it with dropout and the conclusion is basically that this is", "start_timestamp": "00:11:37", "end_timestamp": "00:12:12", "start_second": 697, "end_second": 732, "url": "https://www.youtube.com/watch?v=a-VQfQqIMrE&t=697s", "title": "mixup: Beyond Empirical Risk Minimization (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a-VQfQqIMrE/maxresdefault.jpg"} {"video_id": "a-VQfQqIMrE", "text": "something else than dropout so it's not doing the same thing dropout of course means you drop out some of the intermediate activations and that sort of gives you a noisy version of the data point
this here can actually be combined with dropout which means that it gives you an additional benefit you see right here most of the best numbers happen when you use mixup plus dropout so it seems to be just an additional regularization on top of dropout pretty cool investigation awesome alright so if you", "start_timestamp": "00:12:12", "end_timestamp": "00:12:51", "start_second": 732, "end_second": 771, "url": "https://www.youtube.com/watch?v=a-VQfQqIMrE&t=732s", "title": "mixup: Beyond Empirical Risk Minimization (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a-VQfQqIMrE/maxresdefault.jpg"} {"video_id": "arbbhHyRP90", "text": "we are in the car today and we are going to talk about AI machine learning and deep learning the reason why is because AI is everywhere in the news the government loves to put money on AI projects investors love to put money on machine-learning-powered companies and everybody's talking about deep learning algorithms we are going to define what is AI what is machine learning what is deep learning and at the end I'm going to tell you how a full-stack programmer like me that doesn't like math and doesn't really", "start_timestamp": "00:00:00", "end_timestamp": "00:00:38", "start_second": 0, "end_second": 38, "url": "https://www.youtube.com/watch?v=arbbhHyRP90&t=0s", "title": "\uba38\uc2e0\ub7ec\ub2dd vs \ub525\ub7ec\ub2dd vs \uc778\uacf5\uc9c0\ub2a5? A.I. 
\uac1c\ub150\uc815\ub9ac!", "thumbnail": "https://i.ytimg.com/vi/arbbhHyRP90/maxresdefault.jpg"} {"video_id": "arbbhHyRP90", "text": "like calculus and all that can get started working with artificial intelligence and machine learning AI is divided in two categories narrow AI and general AI Hollywood and Netflix and all the movies are about general AI general AI is machines that can do everything that humans do and better a general-purpose AI that can do anything so it can talk it can learn to play games it can communicate it can make a judgment it can just be exactly like a human that is general AI right now in the world in the industry we are in narrow AI", "start_timestamp": "00:00:38", "end_timestamp": "00:01:19", "start_second": 38, "end_second": 79, "url": "https://www.youtube.com/watch?v=arbbhHyRP90&t=38s", "title": "\uba38\uc2e0\ub7ec\ub2dd vs \ub525\ub7ec\ub2dd vs \uc778\uacf5\uc9c0\ub2a5? A.I. \uac1c\ub150\uc815\ub9ac!", "thumbnail": "https://i.ytimg.com/vi/arbbhHyRP90/maxresdefault.jpg"} {"video_id": "arbbhHyRP90", "text": "narrow AI are machines that can only do one thing and one thing very well where the focus of the machine is narrow the applications are narrow the machine can do one thing well and one thing only an example of narrow AI would be the Facebook AI that finds faces in photos but that's it this AI is not able to learn how to find dogs in photos because it is narrow it is not general it is narrow AI it can only do one thing well and one thing only that's where we are right now but now we need to understand how we teach those", "start_timestamp": "00:01:19", "end_timestamp": "00:01:58", "start_second": 79, "end_second": 118, "url": "https://www.youtube.com/watch?v=arbbhHyRP90&t=79s", "title": "\uba38\uc2e0\ub7ec\ub2dd vs \ub525\ub7ec\ub2dd vs \uc778\uacf5\uc9c0\ub2a5? A.I. 
\uac1c\ub150\uc815\ub9ac!", "thumbnail": "https://i.ytimg.com/vi/arbbhHyRP90/maxresdefault.jpg"} {"video_id": "arbbhHyRP90", "text": "hey guys here is where machine learning comes in machine learning is the way to accomplish AI so how do the machines learn there are many categories of machine learning but the most famous ones are two unsupervised learning and supervised learning let's say that we are going to make an application that detects if a food is a hot dog or not a hot dog if we did it in a supervised way what we will do is that we will label what a hot dog is okay a hot dog is a sausage a hot dog is long a hot dog has some sauce on top a hot dog is between a bun that is a", "start_timestamp": "00:01:58", "end_timestamp": "00:02:53", "start_second": 118, "end_second": 173, "url": "https://www.youtube.com/watch?v=arbbhHyRP90&t=118s", "title": "\uba38\uc2e0\ub7ec\ub2dd vs \ub525\ub7ec\ub2dd vs \uc778\uacf5\uc9c0\ub2a5? A.I. \uac1c\ub150\uc815\ub9ac!", "thumbnail": "https://i.ytimg.com/vi/arbbhHyRP90/maxresdefault.jpg"} {"video_id": "arbbhHyRP90", "text": "label we're going to label what a hot dog is we're going to tell the machine what a hot dog is right and then we are going to get millions of photos of any kind of food we're going to put that into the machine and the machine based on our labels is going to say ok this photo has a 60% chance of being a hot dog so the machine is not thinking the machine is just telling us in a probabilistic way with statistics and mathematics it just tells us this has a 95% chance of being a hot dog based on the labels that the humans gave to the", "start_timestamp": "00:02:53", "end_timestamp": "00:03:31", "start_second": 173, "end_second": 211, "url": "https://www.youtube.com/watch?v=arbbhHyRP90&t=173s", "title": "\uba38\uc2e0\ub7ec\ub2dd vs \ub525\ub7ec\ub2dd vs \uc778\uacf5\uc9c0\ub2a5? A.I. 
\uac1c\ub150\uc815\ub9ac!", "thumbnail": "https://i.ytimg.com/vi/arbbhHyRP90/maxresdefault.jpg"} {"video_id": "arbbhHyRP90", "text": "machine one example of supervised machine learning would be a music recommendation system the user us will label the songs that we like so we will tell the machine these are the songs that I enjoy this is the kind of beat that I enjoy this is the kind of artists that I like next time a new song comes the machine now has labeled data on what is a song that Nicolas likes and then it will know if Nicolas has a 90% chance of liking this new song that's coming up that is supervised learning humans label the data now in", "start_timestamp": "00:03:31", "end_timestamp": "00:04:09", "start_second": 211, "end_second": 249, "url": "https://www.youtube.com/watch?v=arbbhHyRP90&t=211s", "title": "\uba38\uc2e0\ub7ec\ub2dd vs \ub525\ub7ec\ub2dd vs \uc778\uacf5\uc9c0\ub2a5? A.I. \uac1c\ub150\uc815\ub9ac!", "thumbnail": "https://i.ytimg.com/vi/arbbhHyRP90/maxresdefault.jpg"} {"video_id": "arbbhHyRP90", "text": "unsupervised learning the humans do not label the data so for example to do this hot dog app what we are going to do is that we are going to give to the machine millions of photos of only hot dogs we're going to give that to the machine we're not gonna give the machine any label and we are going to let the machine by itself figure out what makes a hot dog so we're not going to tell it any description of a hot dog we're just gonna give the machine a lot of hot dog photos and the machine will figure it out by itself after a lot of time and a lot", "start_timestamp": "00:04:09", "end_timestamp": "00:04:43", "start_second": 249, "end_second": 283, "url": "https://www.youtube.com/watch?v=arbbhHyRP90&t=249s", "title": "\uba38\uc2e0\ub7ec\ub2dd vs \ub525\ub7ec\ub2dd vs \uc778\uacf5\uc9c0\ub2a5? A.I. 
\uac1c\ub150\uc815\ub9ac!", "thumbnail": "https://i.ytimg.com/vi/arbbhHyRP90/maxresdefault.jpg"} {"video_id": "arbbhHyRP90", "text": "of processing power and a lot of data all right now of course I know this is just a very simple explanation if you like this topic we can talk more about it in future videos but now I need to move on to the last topic which is deep learning deep learning is just a way to accomplish machine learning machine learning is a way to accomplish AI deep learning is called deep learning because it makes use of something called neural networks very smart scientists very smart mathematicians and computer scientists", "start_timestamp": "00:04:43", "end_timestamp": "00:05:19", "start_second": 283, "end_second": 319, "url": "https://www.youtube.com/watch?v=arbbhHyRP90&t=283s", "title": "\uba38\uc2e0\ub7ec\ub2dd vs \ub525\ub7ec\ub2dd vs \uc778\uacf5\uc9c0\ub2a5? A.I. \uac1c\ub150\uc815\ub9ac!", "thumbnail": "https://i.ytimg.com/vi/arbbhHyRP90/maxresdefault.jpg"} {"video_id": "arbbhHyRP90", "text": "and all that they came up with this algorithm that works like a brain you need a lot of data to train it and you need a lot of processing power because it's a very long and compute-intensive process but that's it deep learning is a way to accomplish machine learning machine learning is a way to accomplish AI all right now deep learning is being used by companies like Google or Tesla for example because they can process and have massive amounts of data and they have massive amounts of money so how does a", "start_timestamp": "00:05:19", "end_timestamp": "00:05:56", "start_second": 319, "end_second": 356, "url": "https://www.youtube.com/watch?v=arbbhHyRP90&t=319s", "title": "\uba38\uc2e0\ub7ec\ub2dd vs \ub525\ub7ec\ub2dd vs \uc778\uacf5\uc9c0\ub2a5? A.I. 
\uac1c\ub150\uc815\ub9ac!", "thumbnail": "https://i.ytimg.com/vi/arbbhHyRP90/maxresdefault.jpg"} {"video_id": "arbbhHyRP90", "text": "full-stack programmer come up and start working with machine learning and deep learning well if you want to get started in machine learning you have to learn Python that is like the best way to get started if you know Python then you can move on and look into something called TensorFlow thankfully you don't have to do all these things by yourself manually you don't have to create neural networks by yourself the community has already built tons and tons of things the most popular framework for artificial intelligence is", "start_timestamp": "00:05:56", "end_timestamp": "00:06:24", "start_second": 356, "end_second": 384, "url": "https://www.youtube.com/watch?v=arbbhHyRP90&t=356s", "title": "\uba38\uc2e0\ub7ec\ub2dd vs \ub525\ub7ec\ub2dd vs \uc778\uacf5\uc9c0\ub2a5? A.I. \uac1c\ub150\uc815\ub9ac!", "thumbnail": "https://i.ytimg.com/vi/arbbhHyRP90/maxresdefault.jpg"} {"video_id": "arbbhHyRP90", "text": "called TensorFlow TensorFlow is in JavaScript and Python so those two languages are super easy to get started with and you can get started tomorrow if you wanted to also there is this thing called Brain.js in Brain.js they already have neural networks deep learning algorithms activation functions all of the things already done for you to work with deep learning in Node.js so if you ask me it's an amazing thing like I said the community has already built so many things so if you're scared of math", "start_timestamp": "00:06:24", "end_timestamp": "00:06:54", "start_second": 384, "end_second": 414, "url": "https://www.youtube.com/watch?v=arbbhHyRP90&t=384s", "title": "\uba38\uc2e0\ub7ec\ub2dd vs \ub525\ub7ec\ub2dd vs \uc778\uacf5\uc9c0\ub2a5? A.I. 
\uac1c\ub150\uc815\ub9ac!", "thumbnail": "https://i.ytimg.com/vi/arbbhHyRP90/maxresdefault.jpg"} {"video_id": "arbbhHyRP90", "text": "calculus and all that you can go and start playing around with TensorFlow or with Brain.js and that will just give you an introduction of what machine learning is without having to take care of all the math and the calculus Python and JavaScript those two languages are gonna be big on machine learning I think Python already is it's massive and JavaScript is starting in there because it's on the web thank you for watching I hope that you enjoyed this video let me know what you think let me know in the comments", "start_timestamp": "00:06:54", "end_timestamp": "00:07:21", "start_second": 414, "end_second": 441, "url": "https://www.youtube.com/watch?v=arbbhHyRP90&t=414s", "title": "\uba38\uc2e0\ub7ec\ub2dd vs \ub525\ub7ec\ub2dd vs \uc778\uacf5\uc9c0\ub2a5? A.I. \uac1c\ub150\uc815\ub9ac!", "thumbnail": "https://i.ytimg.com/vi/arbbhHyRP90/maxresdefault.jpg"} {"video_id": "IRVdiHu1VCc", "text": "The \"Dirty Jobs\" crew and I were called to a little town in Colorado, called Craig. It's only a couple dozen square miles. It's in the Rockies. And the job in question was sheep rancher. My role on the show, for those of you who haven't seen it -- it's pretty simple. I'm an apprentice, and I work with the people who do the jobs in question. And my responsibilities are to simply try and keep up, and give an honest account of what it's like to be these people for one day in their life. The job in question: herding sheep.", "start_timestamp": "00:00:00", "end_timestamp": "00:00:45", "start_second": 0, "end_second": 45, "url": "https://www.youtube.com/watch?v=IRVdiHu1VCc&t=0s", "title": "Learning from dirty jobs | Mike Rowe", "thumbnail": "https://i.ytimg.com/vi/IRVdiHu1VCc/maxresdefault.jpg"} {"video_id": "IRVdiHu1VCc", "text": "Great. 
We go to Craig and we check into a hotel, and I realize the next day that castration is going to be an absolute part of this work. Normally, I never do any research at all. But this is a touchy subject, and I work for the Discovery Channel, and we want to portray accurately whatever it is we do. And we certainly want to do it with a lot of respect for the animals. So I call the Humane Society and I say, \"Look, I'm going to be castrating some lambs. Can you tell me the deal?\" And they're like, \"Yeah, it's pretty straightforward.\"", "start_timestamp": "00:00:45", "end_timestamp": "00:01:20", "start_second": 45, "end_second": 80, "url": "https://www.youtube.com/watch?v=IRVdiHu1VCc&t=45s", "title": "Learning from dirty jobs | Mike Rowe", "thumbnail": "https://i.ytimg.com/vi/IRVdiHu1VCc/maxresdefault.jpg"} {"video_id": "IRVdiHu1VCc", "text": "They use a band, basically, a rubber band, like this, only a little smaller. This one was actually around the playing cards I got yesterday -- (Laughter) But it had a certain familiarity to it. And I said, \"Well, what exactly is the process?\" And they said, \"The band is applied to the tail, tightly. And then another band is applied to the scrotum, tightly. Blood flow is slowly retarded; a week later the parts in question fall off. \"Great -- got it.\" OK, I call the SPCA to confirm this. They confirm it. I also call PETA just for fun,", "start_timestamp": "00:01:20", "end_timestamp": "00:01:53", "start_second": 80, "end_second": 113, "url": "https://www.youtube.com/watch?v=IRVdiHu1VCc&t=80s", "title": "Learning from dirty jobs | Mike Rowe", "thumbnail": "https://i.ytimg.com/vi/IRVdiHu1VCc/maxresdefault.jpg"} {"video_id": "IRVdiHu1VCc", "text": "and they don't like it, but they confirm it. OK, that's basically how you do it. So the next day I go out. And I'm given a horse and we go get the lambs and we take them to a pen that we built, and we go about the business of animal husbandry. Melanie is the wife of Albert. 
Albert is the shepherd in question. Melanie picks up the lamb, one hand on both legs on the right, likewise on the left. Lamb goes on the post, she opens it up. Alright. Great. Albert goes in, I follow Albert, the crew is around. I always watch the process done the first time before I try it.", "start_timestamp": "00:01:53", "end_timestamp": "00:02:25", "start_second": 113, "end_second": 145, "url": "https://www.youtube.com/watch?v=IRVdiHu1VCc&t=113s", "title": "Learning from dirty jobs | Mike Rowe", "thumbnail": "https://i.ytimg.com/vi/IRVdiHu1VCc/maxresdefault.jpg"} {"video_id": "IRVdiHu1VCc", "text": "Being an apprentice, you know, you do that. Albert reaches in his pocket to pull out, you know, this black rubber band, but what comes out instead is a knife. And I'm like, \"Hmm, that's not rubber at all,\" you know? (Laughter) And he kind of flicked it open in a way that caught the sun that was just coming over the Rockies, it was very -- (Laughter) It was ... it was impressive. In the space of about two seconds, Albert had the knife between the cartilage of the tail, right next to the butt of the lamb, and very quickly, the tail was gone and in the bucket that I was holding.", "start_timestamp": "00:02:25", "end_timestamp": "00:03:00", "start_second": 145, "end_second": 180, "url": "https://www.youtube.com/watch?v=IRVdiHu1VCc&t=145s", "title": "Learning from dirty jobs | Mike Rowe", "thumbnail": "https://i.ytimg.com/vi/IRVdiHu1VCc/maxresdefault.jpg"} {"video_id": "IRVdiHu1VCc", "text": "A second later, with a big thumb and a well-calloused forefinger, he had the scrotum firmly in his grasp. And he pulled it toward him, like so, and he took the knife and he put it on the tip. \"Now, you think you know what's coming, Michael, You don't, OK?\" (Laughter) He snips it, throws the tip over his shoulder, and then grabs the scrotum and pushes it upward, and then his head dips down, obscuring my view. 
But what I hear is a slurping sound, and a noise that sounds like Velcro being yanked off a sticky wall,", "start_timestamp": "00:03:00", "end_timestamp": "00:03:30", "start_second": 180, "end_second": 210, "url": "https://www.youtube.com/watch?v=IRVdiHu1VCc&t=180s", "title": "Learning from dirty jobs | Mike Rowe", "thumbnail": "https://i.ytimg.com/vi/IRVdiHu1VCc/maxresdefault.jpg"} {"video_id": "IRVdiHu1VCc", "text": "and I am not even kidding. Can we roll the video? No, I'm kidding, we don't -- (Laughter) I thought it best to talk in pictures. I do something now I've never, ever done on a \"Dirty Jobs\" shoot, ever. I say, \"Time out. Stop.\" You guys know the show, we use take one; we don't do take two. There's no writing, there's no scripting, there's no nonsense. We don't fool around, we don't rehearse -- we shoot what we get! I said, \"Stop. This is nuts.\" I mean -- (Laughter) \"This is crazy. We can't do this.\" And Albert's like, \"What?\"", "start_timestamp": "00:03:30", "end_timestamp": "00:04:06", "start_second": 210, "end_second": 246, "url": "https://www.youtube.com/watch?v=IRVdiHu1VCc&t=210s", "title": "Learning from dirty jobs | Mike Rowe", "thumbnail": "https://i.ytimg.com/vi/IRVdiHu1VCc/maxresdefault.jpg"} {"video_id": "IRVdiHu1VCc", "text": "And I'm like, \"I don't know what just happened, but there are testicles in this bucket, and that's not how we do it.\" He said \"Well, that's how we do it.\" I said, \"Why would you do it this way?\" And before I even let him explain, I said, \"I want to do it the right way, with the rubber bands.\" And he says, \"Like the Humane Society?\" I said, \"Yes, like the Humane Society. Let's do something that doesn't make the lamb squeal and bleed. We're on in five continents, dude! 
We're on twice a day on the Discovery -- we can't do this.\"", "start_timestamp": "00:04:06", "end_timestamp": "00:04:32", "start_second": 246, "end_second": 272, "url": "https://www.youtube.com/watch?v=IRVdiHu1VCc&t=246s", "title": "Learning from dirty jobs | Mike Rowe", "thumbnail": "https://i.ytimg.com/vi/IRVdiHu1VCc/maxresdefault.jpg"} {"video_id": "IRVdiHu1VCc", "text": "He says, \"OK.\" He goes to his box and pulls out a bag of these little rubber bands. Melanie picks up another lamb, puts it on the post, band goes on the tail, band goes on the scrotum. Lamb goes on the ground, lamb takes two steps, falls down, gets up, shakes a little, takes another couple steps, falls down. I'm like, this is not a good sign for this lamb, at all. Gets up, walks to the corner. It's quivering, and it lies down and it's in obvious distress. And I'm looking at the lamb and I say, \"Albert, how long?", "start_timestamp": "00:04:32", "end_timestamp": "00:05:04", "start_second": 272, "end_second": 304, "url": "https://www.youtube.com/watch?v=IRVdiHu1VCc&t=272s", "title": "Learning from dirty jobs | Mike Rowe", "thumbnail": "https://i.ytimg.com/vi/IRVdiHu1VCc/maxresdefault.jpg"} {"video_id": "IRVdiHu1VCc", "text": "When does he get up?\" He's like, \"A day?\" I said, \"A day! How long does it take them to fall off?\" \"A week.\" Meanwhile, the lamb that he had just done his little procedure on is, you know, he's just prancing around, bleeding stopped. He's, you know, nibbling on some grass, frolicking. And I was just so blown away at how completely wrong I was, in that second. And I was reminded how utterly wrong I am, so much of the time. 
(Laughter) And I was especially reminded of what a ridiculously short straw I had that day,", "start_timestamp": "00:05:04", "end_timestamp": "00:05:41", "start_second": 304, "end_second": 341, "url": "https://www.youtube.com/watch?v=IRVdiHu1VCc&t=304s", "title": "Learning from dirty jobs | Mike Rowe", "thumbnail": "https://i.ytimg.com/vi/IRVdiHu1VCc/maxresdefault.jpg"} {"video_id": "IRVdiHu1VCc", "text": "because now I had to do what Albert had just done, and there are like 100 of these lambs in the pen. And suddenly, this whole thing's starting to feel like a German porno, and I'm like -- (Laughter) Melanie picks up the lamb, puts it on the post, opens it up. Albert hands me the knife. I go in, tail comes off. I go in, I grab the scrotum, tip comes off. Albert instructs, \"Push it way up there.\" I do. \"Push it further.\" I do. The testicles emerge. They look like thumbs, coming right at you. And he says, \"Bite 'em.", "start_timestamp": "00:05:41", "end_timestamp": "00:06:15", "start_second": 341, "end_second": 375, "url": "https://www.youtube.com/watch?v=IRVdiHu1VCc&t=341s", "title": "Learning from dirty jobs | Mike Rowe", "thumbnail": "https://i.ytimg.com/vi/IRVdiHu1VCc/maxresdefault.jpg"} {"video_id": "IRVdiHu1VCc", "text": "Just bite 'em off.\" (Laughter) And I heard him, I heard all the words -- (Laughter) Like, how did I get here? How did -- I mean -- how did I get here? It's just -- it's one of those moments where the brain goes off on its own, and suddenly, I'm standing there in the Rockies, and all I can think of is the Aristotelian definition of a tragedy. You know, Aristotle says a tragedy is that moment when the hero comes face to face with his true identity. (Laughter) And I'm like, \"What is this jacked-up metaphor? 
I don't like what I'm thinking right now.\"", "start_timestamp": "00:06:15", "end_timestamp": "00:06:57", "start_second": 375, "end_second": 417, "url": "https://www.youtube.com/watch?v=IRVdiHu1VCc&t=375s", "title": "Learning from dirty jobs | Mike Rowe", "thumbnail": "https://i.ytimg.com/vi/IRVdiHu1VCc/maxresdefault.jpg"} {"video_id": "IRVdiHu1VCc", "text": "And I can't get this thought out of my head, and I can't get that vision out of my sight, so I did what I had to do. I went in and I took them. I took them like this, and I yanked my head back. And I'm standing there with two testicles on my chin. (Laughter) And now I can't get -- I can't shake the metaphor. I'm still in \"Poetics,\" in Aristotle, and I'm thinking -- out of nowhere, two terms come crashing into my head, that I hadn't heard since my classics professor in college drilled them there. And they are \"anagnorisis\" and \"peripeteia.\"", "start_timestamp": "00:06:57", "end_timestamp": "00:07:31", "start_second": 417, "end_second": 451, "url": "https://www.youtube.com/watch?v=IRVdiHu1VCc&t=417s", "title": "Learning from dirty jobs | Mike Rowe", "thumbnail": "https://i.ytimg.com/vi/IRVdiHu1VCc/maxresdefault.jpg"} {"video_id": "IRVdiHu1VCc", "text": "Anagnorisis and peripeteia. Anagnorisis is the Greek word for discovery. Literally, the transition from ignorance to knowledge is anagnorisis. It's what our network does; it's what \"Dirty Jobs\" is. And I'm up to my neck in anagnorises every single day. Great. The other word, peripeteia, that's the moment in the great tragedies -- Euripides and Sophocles. That's the moment where Oedipus has his moment, where he suddenly realizes that hot chick he's been sleeping with and having babies with is his mother. 
That's peripety, or peripeteia.", "start_timestamp": "00:07:31", "end_timestamp": "00:08:15", "start_second": 451, "end_second": 495, "url": "https://www.youtube.com/watch?v=IRVdiHu1VCc&t=451s", "title": "Learning from dirty jobs | Mike Rowe", "thumbnail": "https://i.ytimg.com/vi/IRVdiHu1VCc/maxresdefault.jpg"} {"video_id": "IRVdiHu1VCc", "text": "And this metaphor in my head -- I've got anagnorisis and peripeteia on my chin -- (Laughter) I've got to tell you, it's such a great device, though. When you start to look for peripeteia, you find it everywhere. I mean, Bruce Willis in \"The Sixth Sense,\" right? Spends the whole movie trying to help the little kid who sees dead people, and then -- boom! -- \"Oh, I'm dead.\" Peripeteia. You know? It's crushing when the audience sees it the right way. Neo in \"The Matrix,\" you know? \"Oh, I'm living in a computer program.", "start_timestamp": "00:08:15", "end_timestamp": "00:08:47", "start_second": 495, "end_second": 527, "url": "https://www.youtube.com/watch?v=IRVdiHu1VCc&t=495s", "title": "Learning from dirty jobs | Mike Rowe", "thumbnail": "https://i.ytimg.com/vi/IRVdiHu1VCc/maxresdefault.jpg"} {"video_id": "IRVdiHu1VCc", "text": "That's weird.\" These discoveries that lead to sudden realizations. And I've been having them, over 200 dirty jobs, I have them all the time, but that one -- that one drilled something home in a way that I just wasn't prepared for. 
And, as I stood there, looking at the happy lamb that I had just defiled -- but it looked OK; looking at that poor other little thing that I'd done it the right way on, and I just was struck by -- if I'm wrong about that, and if I'm wrong so often, in a literal way, what other peripatetic misconceptions might I be able to comment upon?", "start_timestamp": "00:08:47", "end_timestamp": "00:09:29", "start_second": 527, "end_second": 569, "url": "https://www.youtube.com/watch?v=IRVdiHu1VCc&t=527s", "title": "Learning from dirty jobs | Mike Rowe", "thumbnail": "https://i.ytimg.com/vi/IRVdiHu1VCc/maxresdefault.jpg"} {"video_id": "IRVdiHu1VCc", "text": "Because, look -- I'm not a social anthropologist, but I have a friend who is. And I talk to him. (Laughter) And he says, \"Hey Mike, look. I don't know if your brain is interested in this sort of thing or not, but do you realize you've shot in every state? You've worked in mining, you've worked in fishing, you've worked in steel, you've worked in every major industry. You've had your back shoulder to shoulder with these guys that our politicians are desperate to relate to every four years, right?\" I can still see Hillary doing the shots of rye,", "start_timestamp": "00:09:29", "end_timestamp": "00:10:00", "start_second": 569, "end_second": 600, "url": "https://www.youtube.com/watch?v=IRVdiHu1VCc&t=569s", "title": "Learning from dirty jobs | Mike Rowe", "thumbnail": "https://i.ytimg.com/vi/IRVdiHu1VCc/maxresdefault.jpg"} {"video_id": "IRVdiHu1VCc", "text": "dribbling down her chin, with the steel workers. I mean, these are the people that I work with every single day. \"And if you have something to say about their thoughts, collectively, it might be time to think about it. Because, dude, you know, four years.\" So, that's in my head, testicles are on my chin, thoughts are bouncing around. And, after that shoot, \"Dirty Jobs\" really didn't change, in terms of what the show is, but it changed for me, personally. 
And now, when I talk about the show, I no longer just tell the story you heard and 190 like it.", "start_timestamp": "00:10:00", "end_timestamp": "00:10:39", "start_second": 600, "end_second": 639, "url": "https://www.youtube.com/watch?v=IRVdiHu1VCc&t=600s", "title": "Learning from dirty jobs | Mike Rowe", "thumbnail": "https://i.ytimg.com/vi/IRVdiHu1VCc/maxresdefault.jpg"} {"video_id": "IRVdiHu1VCc", "text": "I do, but I also start to talk about some of the other things I got wrong; some of the other notions of work that I've just been assuming are sacrosanct, and they're not. People with dirty jobs are happier than you think. As a group, they're the happiest people I know. And I don't want to start whistling \"Look for the Union Label,\" and all that happy-worker crap. I'm just telling you that these are balanced people who do unthinkable work. Roadkill picker-uppers whistle while they work, I swear to God -- I did it with them.", "start_timestamp": "00:10:39", "end_timestamp": "00:11:09", "start_second": 639, "end_second": 669, "url": "https://www.youtube.com/watch?v=IRVdiHu1VCc&t=639s", "title": "Learning from dirty jobs | Mike Rowe", "thumbnail": "https://i.ytimg.com/vi/IRVdiHu1VCc/maxresdefault.jpg"} {"video_id": "IRVdiHu1VCc", "text": "They've got this amazing sort of symmetry to their life. And I see it over and over and over again. So I started to wonder what would happen if we challenged some of these sacred cows? Follow your passion -- we've been talking about it here for the last 36 hours. Follow your passion -- what could possibly be wrong with that? It's probably the worst advice I ever got. (Laughter) Follow your dreams and go broke, right? I mean, that's all I heard growing up. 
I didn't know what to do with my life, but I was told if you follow your passion, it's going to work out.", "start_timestamp": "00:11:09", "end_timestamp": "00:11:41", "start_second": 669, "end_second": 701, "url": "https://www.youtube.com/watch?v=IRVdiHu1VCc&t=669s", "title": "Learning from dirty jobs | Mike Rowe", "thumbnail": "https://i.ytimg.com/vi/IRVdiHu1VCc/maxresdefault.jpg"} {"video_id": "IRVdiHu1VCc", "text": "I can give you 30 examples right now. Bob Combs, the pig farmer in Las Vegas who collects the uneaten scraps of food from the casinos and feeds them to his swine. Why? Because there's so much protein in the stuff we don't eat, his pigs grow at twice the normal speed, and he's one rich pig farmer. He's good for the environment, he spends his days doing this incredible service, and he smells like hell, but God bless him. He's making a great living. You ask him, \"Did you follow your passion here?\" and he'd laugh at you.", "start_timestamp": "00:11:41", "end_timestamp": "00:12:08", "start_second": 701, "end_second": 728, "url": "https://www.youtube.com/watch?v=IRVdiHu1VCc&t=701s", "title": "Learning from dirty jobs | Mike Rowe", "thumbnail": "https://i.ytimg.com/vi/IRVdiHu1VCc/maxresdefault.jpg"} {"video_id": "IRVdiHu1VCc", "text": "The guy's worth -- he just got offered like 60 million dollars for his farm and turned it down, outside of Vegas. He didn't follow his passion. He stepped back and he watched where everybody was going, and he went the other way. And I hear that story over and over. Matt Freund, a dairy farmer in New Canaan, Connecticut, who woke up one day and realized the crap from his cows was worth more than their milk, if he could use it to make these biodegradable flowerpots. Now he's selling them to Walmart, right? Follow his passion? 
The guy's -- come on.", "start_timestamp": "00:12:08", "end_timestamp": "00:12:41", "start_second": 728, "end_second": 761, "url": "https://www.youtube.com/watch?v=IRVdiHu1VCc&t=728s", "title": "Learning from dirty jobs | Mike Rowe", "thumbnail": "https://i.ytimg.com/vi/IRVdiHu1VCc/maxresdefault.jpg"} {"video_id": "IRVdiHu1VCc", "text": "So I started to look at passion, I started to look at efficiency vs. effectiveness. As Tim talked about earlier, that's a huge distinction. I started to look at teamwork and determination. And basically, all those platitudes they call \"successories\" that hang with that schmaltzy art in boardrooms around the world right now, that stuff -- it's suddenly all been turned on its head. Safety. Safety first is ... Going back to OSHA and PETA and the Humane Society: What if OSHA got it wrong? I mean -- this is heresy, what I'm about to say --", "start_timestamp": "00:12:41", "end_timestamp": "00:13:17", "start_second": 761, "end_second": 797, "url": "https://www.youtube.com/watch?v=IRVdiHu1VCc&t=761s", "title": "Learning from dirty jobs | Mike Rowe", "thumbnail": "https://i.ytimg.com/vi/IRVdiHu1VCc/maxresdefault.jpg"} {"video_id": "IRVdiHu1VCc", "text": "but what if it's really safety third? Right? (Laughter) No, I mean, really. What I mean to say is: I value my safety on these crazy jobs as much as the people that I'm working with, but the ones who really get it done -- they're not out there talking about safety first. They know that other things come first -- the business of doing the work comes first, the business of getting it done. 
And I'll never forget, up in the Bering Sea, I was on a crab boat with the \"Deadliest Catch\" guys -- which I also work on in the first season.", "start_timestamp": "00:13:17", "end_timestamp": "00:13:51", "start_second": 797, "end_second": 831, "url": "https://www.youtube.com/watch?v=IRVdiHu1VCc&t=797s", "title": "Learning from dirty jobs | Mike Rowe", "thumbnail": "https://i.ytimg.com/vi/IRVdiHu1VCc/maxresdefault.jpg"} {"video_id": "IRVdiHu1VCc", "text": "We were about 100 miles off the coast of Russia: 50-foot seas, big waves, green water coming over the wheelhouse, right? Most hazardous environment I'd ever seen, and I was back with a guy, lashing the pots down. So I'm 40 feet off the deck, which is like looking down at the top of your shoe, you know, and it's doing this in the ocean. Unspeakably dangerous. I scamper down, I go into the wheelhouse and I say, with some level of incredulity, \"Captain -- OSHA?\" And he says, \"OSHA? Ocean.\" And he points out there.", "start_timestamp": "00:13:51", "end_timestamp": "00:14:23", "start_second": 831, "end_second": 863, "url": "https://www.youtube.com/watch?v=IRVdiHu1VCc&t=831s", "title": "Learning from dirty jobs | Mike Rowe", "thumbnail": "https://i.ytimg.com/vi/IRVdiHu1VCc/maxresdefault.jpg"} {"video_id": "IRVdiHu1VCc", "text": "(Laughter) But in that moment, what he said next can't be repeated in the Lower 48. It can't be repeated on any factory floor or any construction site. But he looked at me and said, \"Son,\" -- he's my age, by the way, he calls me \"son,\" I love that -- he says, \"Son, I'm the captain of a crab boat. My responsibility is not to get you home alive. My responsibility is to get you home rich.\" (Laughter) You want to get home alive, that's on you.\" And for the rest of that day -- safety first. 
I mean, I was like -- So, the idea that we create this sense of complacency", "start_timestamp": "00:14:23", "end_timestamp": "00:15:01", "start_second": 863, "end_second": 901, "url": "https://www.youtube.com/watch?v=IRVdiHu1VCc&t=863s", "title": "Learning from dirty jobs | Mike Rowe", "thumbnail": "https://i.ytimg.com/vi/IRVdiHu1VCc/maxresdefault.jpg"} {"video_id": "IRVdiHu1VCc", "text": "when all we do is talk about somebody else's responsibility as though it's our own, and vice versa. Anyhow, a whole lot of things. I could talk at length about the many little distinctions we made and the endless list of ways that I got it wrong. But what it all comes down to is this: I've formed a theory, and I'm going to share it now in my remaining 2 minutes and 30 seconds. It goes like this: we've declared war on work, as a society -- all of us. It's a civil war. It's a cold war, really. We didn't set out to do it", "start_timestamp": "00:15:01", "end_timestamp": "00:15:36", "start_second": 901, "end_second": 936, "url": "https://www.youtube.com/watch?v=IRVdiHu1VCc&t=901s", "title": "Learning from dirty jobs | Mike Rowe", "thumbnail": "https://i.ytimg.com/vi/IRVdiHu1VCc/maxresdefault.jpg"} {"video_id": "IRVdiHu1VCc", "text": "and we didn't twist our mustache in some Machiavellian way, but we've done it. And we've waged this war on at least four fronts, certainly in Hollywood. The way we portray working people on TV -- it's laughable. If there's a plumber, he's 300 pounds and he's got a giant butt crack, admit it. You see him all the time. That's what plumbers look like, right? We turn them into heroes, or we turn them into punch lines. That's what TV does. 
We try hard on \"Dirty Jobs\" not to do that, which is why I do the work and I don't cheat.", "start_timestamp": "00:15:36", "end_timestamp": "00:16:08", "start_second": 936, "end_second": 968, "url": "https://www.youtube.com/watch?v=IRVdiHu1VCc&t=936s", "title": "Learning from dirty jobs | Mike Rowe", "thumbnail": "https://i.ytimg.com/vi/IRVdiHu1VCc/maxresdefault.jpg"} {"video_id": "IRVdiHu1VCc", "text": "But, we've waged this war on Madison Avenue. So many of the commercials that come out there in the way of a message -- what's really being said? \"Your life would be better if you could work a little less, didn't have to work so hard, got home a little earlier, could retire a little faster, punch out a little sooner.\" It's all in there, over and over, again and again. Washington? I can't even begin to talk about the deals and policies in place that affect the bottom-line reality of the available jobs, because I don't really know; I just know that that's a front in this war.", "start_timestamp": "00:16:08", "end_timestamp": "00:16:39", "start_second": 968, "end_second": 999, "url": "https://www.youtube.com/watch?v=IRVdiHu1VCc&t=968s", "title": "Learning from dirty jobs | Mike Rowe", "thumbnail": "https://i.ytimg.com/vi/IRVdiHu1VCc/maxresdefault.jpg"} {"video_id": "IRVdiHu1VCc", "text": "And right here, guys -- Silicon Valley. I mean -- how many people have an iPhone on them right now? How many people have their BlackBerry? We're plugged in; we're connected. I would never suggest for a second that something bad has come out of the tech revolution. Good grief, not to this crowd. (Laughter) But I would suggest that innovation without imitation is a complete waste of time. And nobody celebrates imitation the way \"Dirty Jobs\" guys know it has to be done. 
Your iPhone without those people making the same interface,", "start_timestamp": "00:16:39", "end_timestamp": "00:17:13", "start_second": 999, "end_second": 1033, "url": "https://www.youtube.com/watch?v=IRVdiHu1VCc&t=999s", "title": "Learning from dirty jobs | Mike Rowe", "thumbnail": "https://i.ytimg.com/vi/IRVdiHu1VCc/maxresdefault.jpg"} {"video_id": "IRVdiHu1VCc", "text": "the same circuitry, the same board, over and over -- all of that -- that's what makes it equally as possible as the genius that goes inside of it. So, we've got this new toolbox. You know? Our tools today don't look like shovels and picks. They look like the stuff we walk around with. And so the collective effect of all of that has been this marginalization of lots and lots of jobs. And I realized, probably too late in this game -- I hope not, because I don't know if I can do 200 more of these things -- but we're going to do as many as we can.", "start_timestamp": "00:17:13", "end_timestamp": "00:17:51", "start_second": 1033, "end_second": 1071, "url": "https://www.youtube.com/watch?v=IRVdiHu1VCc&t=1033s", "title": "Learning from dirty jobs | Mike Rowe", "thumbnail": "https://i.ytimg.com/vi/IRVdiHu1VCc/maxresdefault.jpg"} {"video_id": "IRVdiHu1VCc", "text": "And to me, the most important thing to know and to really come face to face with, is that fact that I got it wrong about a lot of things, not just the testicles on my chin. I got a lot wrong. So, we're thinking -- by \"we,\" I mean me -- (Laughter) that the thing to do is to talk about a PR campaign for work -- manual labor, skilled labor. Somebody needs to be out there, talking about the forgotten benefits. 
I'm talking about grandfather stuff, the stuff a lot us probably grew up with but we've kind of -- you know, kind of lost a little.", "start_timestamp": "00:17:51", "end_timestamp": "00:18:31", "start_second": 1071, "end_second": 1111, "url": "https://www.youtube.com/watch?v=IRVdiHu1VCc&t=1071s", "title": "Learning from dirty jobs | Mike Rowe", "thumbnail": "https://i.ytimg.com/vi/IRVdiHu1VCc/maxresdefault.jpg"} {"video_id": "IRVdiHu1VCc", "text": "Barack wants to create two and a half million jobs. The infrastructure is a huge deal. This war on work that I suppose exists, has casualties like any other war. The infrastructure is the first one, declining trade school enrollments are the second one. Every single year, fewer electricians, fewer carpenters, fewer plumbers, fewer welders, fewer pipe fitters, fewer steam fitters. The infrastructure jobs that everybody is talking about creating are those guys -- the ones that have been in decline, over and over.", "start_timestamp": "00:18:31", "end_timestamp": "00:19:02", "start_second": 1111, "end_second": 1142, "url": "https://www.youtube.com/watch?v=IRVdiHu1VCc&t=1111s", "title": "Learning from dirty jobs | Mike Rowe", "thumbnail": "https://i.ytimg.com/vi/IRVdiHu1VCc/maxresdefault.jpg"} {"video_id": "IRVdiHu1VCc", "text": "Meanwhile, we've got two trillion dollars, at a minimum, according to the American Society of Civil Engineers, that we need to expend to even make a dent in the infrastructure, which is currently rated at a D minus. So, if I were running for anything -- and I'm not -- I would simply say that the jobs we hope to make and the jobs we hope to create aren't going to stick unless they're jobs that people want. 
And I know the point of this conference is to celebrate things that are near and dear to us, but I also know that clean and dirty aren't opposites.", "start_timestamp": "00:19:02", "end_timestamp": "00:19:34", "start_second": 1142, "end_second": 1174, "url": "https://www.youtube.com/watch?v=IRVdiHu1VCc&t=1142s", "title": "Learning from dirty jobs | Mike Rowe", "thumbnail": "https://i.ytimg.com/vi/IRVdiHu1VCc/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "transformers are quickly coming for your favorite models yesterday they replaced lstms in nlp they used to be good at nlp but blah we now have transformers think again today we're going to see that maybe in the near future transformers will replace convolutions in image processing so this paper is a step towards this direction you just wonder what is it going to be tomorrow maybe linear regression is going to be replaced just by giant transformers trained on 5 000 tpus uh who knows we'll see in any case we're looking at", "start_timestamp": "00:00:00", "end_timestamp": "00:00:40", "start_second": 0, "end_second": 40, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=0s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "axial deep lab stand-alone axial attention for panoptic segmentation by huiyu wang yukun zhu bradley green hartwig adam alan yuille and liang-chieh chen of johns hopkins university and google research so this paper combines a bunch of techniques that have been introduced recently uh to deal with attention in problems where you would traditionally use a convolution so in this particular case they deal with this problem of panoptic segmentation which basically you'll see you'll get an image and there's a bunch of stuff on the
"url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=40s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "image like a cat here and a house right here and you're supposed to color the pixels of the same object the same so you see you see all these pixels here are house and then all these pixels these pixels right here are cat and so on and then there's also the background so all these pixels right here i know beautiful beautiful beautiful are background so for this problem um it's kind of important that there you you you're very precise first of all so you can look at you know pixels or clusters of pixels and also that you", "start_timestamp": "00:01:19", "end_timestamp": "00:02:02", "start_second": 79, "end_second": 122, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=79s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "take long-range dependencies into account because if you for example recognize that this is a house and you recognize that here's a wall right here um you might be able to much better classify what is wall over here and what isn't okay so the kind of long-range dependencies play a role in these problems across images and usually attention mechanisms are pretty good for these long-range dependencies but they're also expensive and that's what this paper deals with so they use this axial attention that has been", "start_timestamp": "00:02:02", "end_timestamp": "00:02:39", "start_second": 122, "end_second": 159, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=122s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail":
"https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "introduced for exactly resolving this problem in types of data like images or higher order tensors and they also combine this together with learned positional encodings which we've also seen um time and time again throughout the kind of transformer and attention literature so the combination of axial attention these learned positional embeddings allows them to replace the resnet backbone that usually is found in panoptic segmentation models with the with a standalone attention so they build models that are partial", "start_timestamp": "00:02:39", "end_timestamp": "00:03:18", "start_second": 159, "end_second": 198, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=159s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "replace the convolutions with attention modules or replace them entirely so the entire model is going to be just an attention model so no more convolutions in it and they perform pretty well in classic tasks like they they test on imagenet classification they perform pretty well and they achieve state-of-the-art on some of these segmentation tasks so we'll go through the model right here this is a very very extensive paper in terms of experimental evaluation what i want to get into is mainly how the method works", "start_timestamp": "00:03:18", "end_timestamp": "00:03:53", "start_second": 198, "end_second": 233, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=198s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "um and show you what their model looks like so we'll go through it and as always let me know what you think in the comments and tell
me if you liked it or not uh share it out if you did all right so they go over a very long list of prior work which is you know pretty pretty cool and here they say their contributions so their contributions are four fold first of all the proposed method is the first attempt to build standalone attention models with a large or a global receptive field and we'll see what that means", "start_timestamp": "00:03:53", "end_timestamp": "00:04:33", "start_second": 233, "end_second": 273, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=233s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "we propose position sensitive attention layer that makes better use of positional information without adding much computational cost we show that axial attention works well not only as a standalone model on image classification but also as a backbone on panoptic segmentation instance segmentation and semantic segmentation maybe what i did before described before was instance or semantic segmentation and not panoptic segmentation excuse me if that's the case as you can see it can be used for various various image tasks", "start_timestamp": "00:04:33", "end_timestamp": "00:05:10", "start_second": 273, "end_second": 310, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=273s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "lastly our axial deep lab improves significantly over bottom-up state-of-the-art on coco achieving comparable performance of two-stage methods we also surpassed the previous state-of-the-art methods on mapillary vistas and cityscapes so these are various tasks as i said and also what they don't mention here is that they perform fairly well on
imagenet in fact in the abstract they formulate this as um in particular our model outperforms all existing standalone self attention models on imagenet like that's you know that's a way to phrase it uh", "start_timestamp": "00:05:10", "end_timestamp": "00:05:47", "start_second": 310, "end_second": 347, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=310s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "you just exclude all of the other models until you're the best outperforms all existing standalone self-attention models on imagenet yeah i mean that's good i i mean there's something to be said of comparing apples to apples but you can also you can also go overboard if you want to make your work look as good as possible of course you know everyone everyone does that and there's no particular shame in it okay so if a we're going to build up our model right here and the basic element of this um model is going to be this self-attention", "start_timestamp": "00:05:47", "end_timestamp": "00:06:31", "start_second": 347, "end_second": 391, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=347s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "mechanism now quickly because i know you all know what it is but very quickly you want to perform this action right here over a region right here so there is always a query and now the subscripts here are going to be important in this paper okay so the query is at a given position position o and you can see that's the o right here that's the i'm going to call it the output i guess that's what they said as well so the output position you want to go over all of the input positions and you want to aggregate data from 
all", "start_timestamp": "00:06:31", "end_timestamp": "00:07:14", "start_second": 391, "end_second": 434, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=391s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "of the input positions so that's right here and how do you aggregate data by this softmax operator right here and you can see the key also has a p right here and the softmax is over the axis of p so in particular case of the images what does that mean if you have an image right here it's made into pixels okay so you have pixels now a transformer or gen in generally these attention models what you can imagine is they always transform a data point into a data point of the same dimensions now this doesn't have to be", "start_timestamp": "00:07:14", "end_timestamp": "00:07:51", "start_second": 434, "end_second": 471, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=434s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "actually and i think one of the developments that is going to come in coming years or months or weeks maybe someone's already doing it is in fact to play more with this with this arbitrary constraint that we're imposing on ourselves because it's not really clear that this is the best thing but for now an attention layer is always transforming a data point here a four by four image into a data point of the same size also a four by four image right here now this is as i said this is quite simplified but it is true", "start_timestamp": "00:07:51", "end_timestamp": "00:08:30", "start_second": 471, "end_second": 510, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=471s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for 
Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "in nlp where we always transform our whatever 512 sequence token sequence into a 512 token sequence and it is true here now the output is is going to be here on the right and the question always is okay so i'll go over these um these these pixels right here and for every pixel let's say for this pixel i'm going to ask what data goes there what's the output of the layer at that particular pixel and the output of the layer is going to be somehow dependent on on the input right here now if you know classic convolutional", "start_timestamp": "00:08:30", "end_timestamp": "00:09:09", "start_second": 510, "end_second": 549, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=510s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "models what the classic convolutional model says the output of this is going to be dependent on this region right here if it's like a three by three filter okay so you have this convolutional filter and that means that blue dot on the right is going to pay attention to you know its own location in the input plus everything around it okay and then every single uh data point here is going to do that so for example this green data point is going to pay attention to this region right here now there's a border um so there's maybe some padding", "start_timestamp": "00:09:09", "end_timestamp": "00:09:48", "start_second": 549, "end_second": 588, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=549s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "but the question is always where does the 
information come from and how is it aggregated okay in a convolutional layer what happens in a convolution layer in a convolution layer you simply have your filter right you have your filter and the filter has numbers in it like three and five and eight and so on and what you're going to do is you're going to take this region right here this blue region of the lower layer and that's maybe that's also you know filled with numbers like seven what's a good number zero zero is a", "start_timestamp": "00:09:48", "end_timestamp": "00:10:20", "start_second": 588, "end_second": 620, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=588s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "purple is a good nice number and you're going to multiply those and then you're going to sum them up and then you're going to put that on where the blue dot is okay so where does the information come from in the convolution from around the location from around the output location but in the input okay so you go to the input at the same location as where you want the output to be you take the neighborhood and there is a fixed a fixed scheme of aggregating the neighborhood okay and then you sum you multiply and you sum across it in", "start_timestamp": "00:10:20", "end_timestamp": "00:10:56", "start_second": 620, "end_second": 656, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=620s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "contrast to this in a uh fully attentional model where does the information come from let's again look at the blue dot and let's consider it fully attentional okay where does the information come from everywhere anywhere anywhere at all okay the 
information comes from everywhere now how how do i know um how to aggregate the information so it's no longer in a neighborhood how do i know how to aggregate the information that's also different so two things are different um now in a convolution i would have another four by four", "start_timestamp": "00:10:56", "end_timestamp": "00:11:43", "start_second": 656, "end_second": 703, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=656s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "grid here that's pre-specified but in the attention model this here is basically all filled with question marks question mark question mark where how what what number goes here how do i in the end i also do this multiply and i sum it up and i put it right here okay but how do these numbers come to be well these numbers also come these are dynamically computed also from um from the input it's a bit special but this is how attention works okay so every pixel gets to decide where information comes from and how it is aggregated it basically it", "start_timestamp": "00:11:43", "end_timestamp": "00:12:31", "start_second": 703, "end_second": 751, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=703s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "comes from anywhere and how it is aggregated is dynamic depending on the pixel if you still don't understand it maybe it pays to watch a video on attention itself i happen to have made one but you can watch any one when you understand that you will understand the um the extension here to the image is the exact same thing as with the sequence except uh the pixels are basically one long sequence in the image okay so this would
be a fully attentional model down here now what's the problem here the problem is", "start_timestamp": "00:12:31", "end_timestamp": "00:13:14", "start_second": 751, "end_second": 794, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=751s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "that pictures are pretty large so even even something like mnist which is like 28 by 28 is like 700 pixels plus i don't remember exactly but it's like about 700 pixels and our big transformers now so bert a very famous transformer takes inputs uh that are like 512 in length and you already need pretty decent hardware to run this and the requirements on memory and compute scale quadratically with the input length so already with mnist you're in pretty pretty shady territory um if you go up to something like imagenet which is like", "start_timestamp": "00:13:14", "end_timestamp": "00:13:59", "start_second": 794, "end_second": 839, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=794s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "225 by 225 you're that's bad right that's not good um so you have to come up with something else so people have been playing around the reason why i introduced it this way is people have been playing around a bit with sort of coming up with an intermediate with a compromise between the two so the compromise that this paper here focuses on is going to be is going to be a compromise where we you remember when i said where does the information for a given pixel come from and we said okay it can come from anywhere", "start_timestamp": "00:13:59", "end_timestamp": "00:14:40", "start_second": 839, "end_second": 880, "url": 
"https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=839s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "in the attention framework and that's good because that allows us to make super long-range connections so any pixel can aggregate information from any other pixel and not even in a fixed way but in a dynamic way so depending on the pixel value itself and the other values it can it decide how it wants to aggregate information that turns out to be expensive right every pixel together with every pixel well that's quadratic okay so what do we do we make a third method that's going to be a compromise and the compromise is going to be the", "start_timestamp": "00:14:40", "end_timestamp": "00:15:15", "start_second": 880, "end_second": 915, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=880s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "following the compromise is going to be all right we still do the dynamic aggregation which means that we still do the attention thing however however we're going to restrict back to this neighborhood region of the convolution so in this model where this information for the blue dot come from it again comes from this neighborhood right here and this number the size here is going to be called m so it still comes from that m by m neighborhood so a pixel can only aggregate information from its neighbors but contrary to a convolution", "start_timestamp": "00:15:15", "end_timestamp": "00:15:56", "start_second": 915, "end_second": 956, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=915s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": 
"https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "how it aggregates the information like this what in convolution would be a kernel the kernel is made dynamically by the attention module and it's made dynamically on a case-by-case basis okay so we restrict it to a neighborhood multiply sum it up and then put it into the output and we do that for every pixel now it resembles much more a convolution simply a convolution with this dynamic with this dynamic matrix right here and that's the starting point for this paper so this paper does two things to this it says okay", "start_timestamp": "00:15:56", "end_timestamp": "00:16:36", "start_second": 956, "end_second": 996, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=956s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "um we can augment this by so-called positional embeddings uh a positional embedding you might know from the sequence transformers so if i have a sequence my cat is tall i don't even know what that means for a cat but okay what in a positional encoding so if you use a transformer and you transform this as we said into a sequence of equal length and then transform is basically information routing the transformer simply sees the lower layer sequence as a set not as a sequence it has no notion of what's neighboring to", "start_timestamp": "00:16:36", "end_timestamp": "00:17:18", "start_second": 996, "end_second": 1038, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=996s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "what what comes from where so it pays to tell the transformer by the way this is word one this is word two this is word three 
this is word four there are various ways to do it transformers usually have fairly complicated sine wave based positional encodings that bring many advantages with them in this case they say well it might pay off to learn where these things actually are in this neighborhood so they experiment with relative positional encoding which means they annotate this neighborhood with", "start_timestamp": "00:17:18", "end_timestamp": "00:17:57", "start_second": 1038, "end_second": 1077, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=1038s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "something like look here in the middle it's a zero zero here it's a zero one here it's a zero negative one a negative one zero and so on so they annotate it with these positional encodings now this would be the easy way what they actually do is they simply give the model a matrix like this and they learn that matrix by heart let's say so the positional encodings are relative positional encodings and they are learned okay so you can do that you can learn positional encodings so if you don't want to do the one two three", "start_timestamp": "00:17:57", "end_timestamp": "00:18:42", "start_second": 1077, "end_second": 1122, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=1077s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "four right here you simply say well here is a vector here is a vector here is a vector and here is also a vector now model you're already learning all the ways to make this thing here happen and you're already learning your output weights up here using back propagation so why don't you learn yourself what you
would like for position one like what kind of information you would like to have there using back propagation right so you always provide the model the same vector so this is the same vector", "start_timestamp": "00:18:42", "end_timestamp": "00:19:17", "start_second": 1122, "end_second": 1157, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=1122s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "for position one and you have a different vector for position two and you have a different vector for position three right so across all of the data points these vectors are going to be the same so vector one is always going to be that same vector for all of the data points so the model somehow must learn independent of the data point what it means to be in position one so the model must learn how it wants to fill that vector that's called learned positional embeddings we've seen this in many models so far it usually works", "start_timestamp": "00:19:17", "end_timestamp": "00:19:50", "start_second": 1157, "end_second": 1190, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=1157s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "pretty well and i guess here it works especially well if you have these relative positional encodings and so this thing here is not going to be an actual matrix filled with these numbers it's going to be a trainable matrix that the network is allowed to fill with numbers like three five eight and you might notice that we've seen this before right so ultimately the information in this blue thing right here is going to depend on this dynamically
created aggregating of", "start_timestamp": "00:19:50", "end_timestamp": "00:20:36", "start_second": 1190, "end_second": 1236, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=1190s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "information through the neighborhood and this statically learned aggregation of information throughout the neighborhood which is a con which is sort of a convolution right um because in the convolution you've already seen here this is a statically learned map of how to aggregate information from the neighborhood of a pixel so i think even though there are slight differences um they for example say this these are the same across attention heads and so on um however i suspect that you you can think of these learned positional embeddings", "start_timestamp": "00:20:36", "end_timestamp": "00:21:20", "start_second": 1236, "end_second": 1280, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=1236s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "to be um to be kind of like what you learn in a convolution not exactly so um no i i think i made a mistake and we'll see it in the formula we'll see it in the formula yeah okay so here they introduce these positional embeddings okay so you see that we previously we had the soft max previously we had this and this okay so this is the lower layer this is the information that comes into the layer and now it's it's transformed into values by a linear matrix but essentially this is the lower layer and for each of", "start_timestamp": "00:21:20", "end_timestamp": "00:22:07", "start_second": 1280, "end_second": 1327, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=1280s", "title": 
"Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "the output locations you want to know how should i aggregate information from that lower layer and you do this by this thing here this thing here is this dynamically constructed attention matrix using also the softmax okay so how should you aggregate information this comes from this query at the output position and the keys at the input position and now you add to that this method this thing right here which is again an inner product between the carrier query and the positional encodings okay so the positional encodings are going to", "start_timestamp": "00:22:07", "end_timestamp": "00:22:44", "start_second": 1327, "end_second": 1364, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=1327s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "be learned and hard coded but they still are modified by the queries so the query can still pay attention the difference is the keys depend on the input while the positional encoding does not depend on the input so the queries can decide i want to gather information from this and this and this type of information so that would be the key or it can decide i would like very much to look at pixels that are somehow on the bottom right of the pixel that i am now that would be the um positional encodings and that's that's the mistake", "start_timestamp": "00:22:44", "end_timestamp": "00:23:24", "start_second": 1364, "end_second": 1404, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=1364s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": 
"hv3UO3G0Ofo", "text": "i made when i said it's equivalent to a convolution it is not because the query can still it's still modulated by that query vector um of how to aggregate information otherwise you would have this to be a standalone multiplied by the input right here but it sort of pays off to think of it like what you do in the convolution so in the convolution you learn how to aggregate information basically based on on position um relative position to the position that you want to output and here you do a similar thing you", "start_timestamp": "00:23:24", "end_timestamp": "00:24:01", "start_second": 1404, "end_second": 1441, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=1404s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "learn static position embeddings that you then can attend to with your queries all right so these are the position embeddings and they make use of those position embeddings in fact they attend them to the following in this work we enable the output to retrieve relative positions beside the content based on query key affinities formally so the problem up here is that okay you have these position embeddings um and here are the outputs but if you do this in multiple layers right if you do let's let's go with 1d", "start_timestamp": "00:24:01", "end_timestamp": "00:24:43", "start_second": 1441, "end_second": 1483, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=1441s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "sequences if you do this in multiple layers and here you annotate the position let's just go one two three four um and okay this layer can make use of that right we gather stuff from here but then when 
this layer gathers information from here where the information comes from in the layer below is somehow getting lost right so it cannot kind of pull through this information to here or at least it's very complicated this model extends these positional embeddings in order to pull through that", "start_timestamp": "00:24:43", "end_timestamp": "00:25:25", "start_second": 1483, "end_second": 1525, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=1483s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "information so as you can see there are two new things right here the most important new thing is right here so here is how we aggregate information okay and here is the information that we aggregate over now you can see previously this was just this value vector and now it is extended with learned positional embeddings okay so with this you're able to route the positional embeddings to the output and also here you can see the attention gets fairly complex", "start_timestamp": "00:25:25", "end_timestamp": "00:26:12", "start_second": 1525, "end_second": 1572, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=1525s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "so you have query key attention which is classic attention the queries can attend to positional encodings but also the keys can attend to positional encodings so not only can the node on top say i would like to attend to position three position three can also say well together with me positions two and four are fairly important i guess that's what that is maybe i'm
mistaken here but you can see right here there is an interaction between the keys and the positional encoding right here", "start_timestamp": "00:26:12", "end_timestamp": "00:26:56", "start_second": 1572, "end_second": 1616, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=1572s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "now these position encodings they are different for the queries keys and values but ultimately it doesn't make too much of a difference so here is a contrast between what a traditional attention layer would do and what they would do so a traditional attention layer gets the input x and transforms it by means of these linear transformations right here into the queries these are the queries let's call them q into the keys and into the values okay then it does a matrix multiplication with the keys and the queries and puts", "start_timestamp": "00:26:56", "end_timestamp": "00:27:42", "start_second": 1616, "end_second": 1662, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=1616s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "that through a softmax so this here is going to be our attention matrix and the attention matrix is multiplied here by the values and that determines our output okay again the attention matrix defines how we aggregate information and the values are what information we aggregate for the output in contrast when we introduce these positional encodings you can see right here again we have query key and value now it gets a little bit more complex right here namely we do this", "start_timestamp": "00:27:42", "end_timestamp": "00:28:31",
"start_second": 1662, "end_second": 1711, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=1662s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "query key multiplication right here but we also multiply the query by these uh positional embeddings for q we also multiply the keys by the positional embeddings for k and all of this together so this is a big plus right here all of this together is routed through the softmax okay and now the diagram is a little bit complicated uh now you can see the softmax aggregates information from here and from this learn position embeddings i would rather have they would just use it like they did in the formula uh do v plus r and", "start_timestamp": "00:28:31", "end_timestamp": "00:29:14", "start_second": 1711, "end_second": 1754, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=1711s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "say that's going to be the information that we are aggregating and the soft max here the output of the softmax is going to be how we aggregate information this is the attention all right i hope that's sort of clear you introduce these positional embeddings for queries keys and values and that allows the model to have a sense of where the information is coming from basically what positions which if you drop the convolutions so the convolution had this intrinsically because in your convolutional kernel right uh can i", "start_timestamp": "00:29:14", "end_timestamp": "00:29:56", "start_second": 1754, "end_second": 1796, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=1754s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", 
"thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "i'm i'm dumb if in your convolutional kernel the number right here if there was a seven right here that meant that wherever you are whatever is on the bottom right is seven important okay so that's that was the the convolution had this intrinsically here if you just do attention we as humans we see it in a in this kind of grid form but the machine doesn't the machine simply sees a set of pixels it simply sees you can this is to the attention mechanism this is exactly the same as a long list of pixels or a discontinued set", "start_timestamp": "00:29:56", "end_timestamp": "00:30:38", "start_second": 1796, "end_second": 1838, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=1796s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "it doesn't matter to the machine so it's like the problems a feed forward network has so we need to annotate it we have to give it positional information and learned positional information seems to work very well right here though you could think of static positional information okay this is the first thing the positional embeddings um that now help the attention mechanism see where the information is coming from that's really important in pictures uh so we add that the second thing they do is this so-called axial", "start_timestamp": "00:30:38", "end_timestamp": "00:31:16", "start_second": 1838, "end_second": 1876, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=1838s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "attention now axial attention is a sort of a let's say a trick in order to reduce the [Music] load on a 
the load on an attention mechanism so what does it mean we've already seen in sequences right if i have a sequence and a sequence layer that's going to be n squared connections between the two now there are various ways to restrict that so instead of having all of these connections let's say from one node we've already seen what if we just restrict it to only this thing right here only this", "start_timestamp": "00:31:16", "end_timestamp": "00:31:57", "start_second": 1876, "end_second": 1917, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=1876s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "stuff that is lower in complexity and in this case it would be just a neighborhood so that's what we've done that's this m thing right here however we can also do it in different ways since this is a set anyway we can simply say maybe we should just always skip one we could do attention like this and that would be just fine too right that would also leave out some of the information but you gain in computational efficiency there are various trade-offs now in a picture you have", "start_timestamp": "00:31:57", "end_timestamp": "00:32:37", "start_second": 1917, "end_second": 1957, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=1917s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "the same options right so you can do the neighborhood thing as we did or you can say where should the green pixel pay attention to axial attention says the green pixel should pay attention to only the row it is in okay it should ignore the rest of the input it should only pay attention to that row
where it is in and then in the next layer we'll flip it then the same green pixel will pay attention to only the column it is in okay so that's called axial attention but note that", "start_timestamp": "00:32:37", "end_timestamp": "00:33:21", "start_second": 1957, "end_second": 2001, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=1957s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "there is nothing special about this being an axis or whatnot it would not be called axial attention but it makes the same sense to say well that green pixel just depends on this diagonal right here in this layer it just does this diagonal and then in the next layer it does the anti-diagonal or you can say i just choose five random pixels in this layer and five random pixels in the next layer and that would work as well we've already seen this in this paper called", "start_timestamp": "00:33:21", "end_timestamp": "00:33:59", "start_second": 2001, "end_second": 2039, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2001s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "big bird so big bird explicitly used random connections in the attention mechanism and their argument was well if we use different random connections in each layer then information can travel pretty fast through the network so what's the problem with neighborhood attention like this the problem is that you break the long range dependencies so let's see what happens if information needs to go from this
pixel to this pixel or this node to this", "start_timestamp": "00:33:59", "end_timestamp": "00:34:45", "start_second": 2039, "end_second": 2085, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2039s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "node but if information needs to travel from this node to this node in a classic attention mechanism everything's connected to everything so that node in the next layer can simply aggregate information from here well that's not possible if you do this kind of neighborhood attention as we've done here if i do neighborhood attention then because the neighborhood is three long at most this node right here can aggregate information from this node and then again it's three long in the next step so now", "start_timestamp": "00:34:45", "end_timestamp": "00:35:16", "start_second": 2085, "end_second": 2116, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2085s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "this node can aggregate information from this node okay because the neighborhood is three long and you can only attend within your neighborhood this means that if i want to send information to something that's really far away i need to go many many layers right layer by layer and this has been well known this has already been a property of convolutional neural networks so convolutions specifically traded off the fully connectedness of fully connected layers", "start_timestamp": "00:35:16", "end_timestamp": "00:35:57", "start_second": 2116, "end_second": 2157, "url":
"https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2116s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "two local connections convolutions but that means that you have to go very deep in order to make long range connections you can't just make them in one step the same problem right here now this paper big bird argued that if you have random connections instead of neighborhood connections just the property of random graphs mean that um you you are pretty fast in sending information around so because in a random graph of size n you on average all two nodes are connected by path lengths of log n this is much faster", "start_timestamp": "00:35:57", "end_timestamp": "00:36:38", "start_second": 2157, "end_second": 2198, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2157s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "because in this neighborhood thing two nodes are connected in a path length of order of n right you can you can pretty easily see that if i make the sequence longer i need that many more steps in order to send it around in fact it's like something like n divided by m this neighborhood size in a random graph it's log n and in this axial attention that's why i introduced it it's two okay every uh every two nodes are connected by two steps if if node if this node right here needs to send information to this node right here", "start_timestamp": "00:36:38", "end_timestamp": "00:37:21", "start_second": 2198, "end_second": 2241, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2198s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": 
"https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "in a classic attention mechanism you could do some one step because every pixel attends to every other pixel however right now we have to um we have to see so this node attends in this layer sorry i have to think so how do we send information between the two we select this node right here in the first layer this node pays attention to this row okay which includes the red dot so the red dot can send information to the x in this layer in the next layer we select this node right here which is our target node where the information", "start_timestamp": "00:37:21", "end_timestamp": "00:38:03", "start_second": 2241, "end_second": 2283, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2241s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "should go to it pays attention to all of this column which includes that x that before right this this x right here where we send information to so it takes two layers two steps to send information from any node to any other node that's pretty good so this um axial attention if you stack them on top of each other you sacrifice a little bit of uh of being able to send information from anywhere to anywhere for the pleasure of not having this quadratic attention anymore as you can see your attention mechanism is now as long", "start_timestamp": "00:38:03", "end_timestamp": "00:38:45", "start_second": 2283, "end_second": 2325, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2283s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "or as big as your column or is wide or your row is high again this isn't this isn't specific to rows or 
columns you could do this as i said with these kinds of diagonals you could do it with any other sort of sub pattern where you can sort of guarantee that the overlap between the layers is enough so you can send information around pretty efficiently and they use this right here so this axial attention you can see the formula is exactly the same the only change from before is this part right here you can see that", "start_timestamp": "00:38:45", "end_timestamp": "00:39:28", "start_second": 2325, "end_second": 2368, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2325s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "the neighborhood that they aggregate over is no longer m by m it is now 1 by m so we've seen them going from if this is the full input image and you wanna see where to attend what this paper does is it says a convolutional neural network would be attending to some sub part right this is convolution a pure attention mechanism would attend to everything right this is attention then what other people were doing was restricting this attention", "start_timestamp": "00:39:28", "end_timestamp": "00:40:21", "start_second": 2368, "end_second": 2421, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2368s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "to a subpart this kind of neighborhood attention okay but that was still o of m squared because of the attention mechanism now what we are doing is we are going even lower we're actually going one by m okay this is with axial attention so in
general it's one by m and then in the next layer we can go one by m in this direction and have that property um and because it's so cheap now right because it's now o of m to compute this we might as well make", "start_timestamp": "00:40:21", "end_timestamp": "00:41:04", "start_second": 2421, "end_second": 2464, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2421s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "m as long as the row itself okay so their last step is going to be to say okay we have one by m right here and that's going to be the row itself now you can see right here that they say axial attention reduces the complexity to hwm this enables global receptive field which is achieved by setting the span m directly to the whole input features optionally one could also use a fixed m value in order to reduce memory footprint on huge feature maps which is something that they're going to do later on imagenet i believe so when they", "start_timestamp": "00:41:04", "end_timestamp": "00:41:46", "start_second": 2464, "end_second": 2506, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2464s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "have big inputs or big outputs they actually do use a smaller m what you can see right here is that i wasn't really that wasn't really correct of me to say that it's now o of m because you you still have the entire query space so you multiply query by by keys now even if you make the keys to be 1 by m yes you reduce definitely you reduce this from height times width to times height times width to this but then you can see on this thing right here if you take it and let's say we have this kind of row pattern 
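The 1-by-m span described here can be sketched in a few lines of numpy. This is an illustrative single-head sketch under stated assumptions, not the paper's implementation: the name `axial_attention` is made up, the learned relative positional terms and gating of Axial-DeepLab are omitted. The point it shows is that every position attends only within its own row or column, so each attention matrix is span-by-span instead of (h·w)-by-(h·w), giving o(h·w·m) overall.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def axial_attention(x, Wq, Wk, Wv, axis):
    """Single-head attention restricted to one axis (hypothetical helper).

    x: (H, W, C) feature map; Wq/Wk/Wv: (C, C) projections.
    axis=0 attends within each column, axis=1 within each row, so the
    attention matrix per position is 1 x H (or 1 x W), not 1 x (H*W).
    """
    if axis == 0:                        # attend along height: work per column
        x = x.transpose(1, 0, 2)         # (W, H, C)
    q, k, v = x @ Wq, x @ Wk, x @ Wv     # each (rows, span, C)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(x.shape[-1])  # (rows, span, span)
    out = softmax(scores) @ v            # (rows, span, C)
    if axis == 0:
        out = out.transpose(1, 0, 2)     # back to (H, W, C)
    return out
```

Stacking one height-axis pass and one width-axis pass then recovers the two-step any-to-any information routing described above.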
and we replace m", "start_timestamp": "00:41:46", "end_timestamp": "00:42:33", "start_second": 2506, "end_second": 2553, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2506s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "by the width then we have width squared so again the square appears however it's smaller than the original attention the original attention was h squared w squared right because hw is the image and you need that squared in order to do the attention mechanism now we've basically reduced one of the factors it is still an attention mechanism so there's still a tension going but we've basically transformed the the image we've reduced it to one column now the one column is still a tension so this is still a tension like here so this", "start_timestamp": "00:42:33", "end_timestamp": "00:43:14", "start_second": 2553, "end_second": 2594, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2553s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "now reduces to the attention that you see in a in a single sequence okay if you see the image as a long stretch of pixels what this does is basically it's up it simply subdivides that into neighborhoods so we're back to neighborhoods basically um but we shift the neighborhoods from layer to layer so in the next layer the neighborhoods are going to be just alternating right the neighborhoods is going to be this is one neighborhood connected to this neighborhood connected to this neighborhood i hope this makes sense", "start_timestamp": "00:43:14", "end_timestamp": "00:43:56", "start_second": 2594, "end_second": 2636, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2594s", "title": 
"Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "so it's basically a mix between if you were to do this in convolution you could do one layer where it's neighborhood convolution and then one layer where it's like convolution with holes in it i think they're called atrous convolutions or something like this with like giant holes in it that is exactly the anti-pattern of the neighborhood convolution from before that's what this is so you see their axial attention block right here their axial attention block replaces the resnet block so if you know", "start_timestamp": "00:43:56", "end_timestamp": "00:44:34", "start_second": 2636, "end_second": 2674, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2636s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "resnet i've done a paper on resnet resnet basically takes the input pipes it through straight and adds to it whatever comes out of this operation okay that's a residual block now usually this thing here would be convolutions and convolutions and they are now replaced by this multi-head axial attention you can see there is a multi-head attention in the height and there is a multi-head attention in the width and that gives us the property that every node can send around information to every other node in two steps", "start_timestamp": "00:44:34", "end_timestamp": "00:45:12", "start_second": 2674, "end_second": 2712, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2674s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "i
don't like the fact that there are only two because i guess this gives a significant bias to one or the other direction depending on the order that you do them in if i had done this i maybe would have used three of them because it depends on how you want to aggregate information right like here you train the network specifically to aggregate information first in this direction and then in this direction which might work and it will give you that sending around information anywhere so maybe they've actually tried and it", "start_timestamp": "00:45:12", "end_timestamp": "00:45:49", "start_second": 2712, "end_second": 2749, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2712s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "it just performed the same so i just might have a dumb suggestion right here in any case we've come a long way right we've gone to like neighborhoods and blah blah blah ultimately they simply take a resnet replace the convolutions with the height-axis attention and the width-axis attention and we're good and then we come to results so that's it you have these positional embeddings you have the axial attention and it turns out that on imagenet they perform fairly well so you can see that models", "start_timestamp": "00:45:49", "end_timestamp": "00:46:28", "start_second": 2749, "end_second": 2788, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2749s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "like a resnet 50 model will get a 76.9 on imagenet which is not state of the art but it's also not bad right the resnet 50 is a pretty good model you can see the full axial attention
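The residual block just described (the two convolutions of a resnet block swapped for a height-axis attention followed by a width-axis attention) can be sketched as below. Again this is a hedged single-head numpy illustration with made-up helper names (`attend_rows`, `axial_block`), leaving out the multi-head split, normalization, and the positional terms of the actual model.

```python
import numpy as np

def softmax(s):
    e = np.exp(s - s.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attend_rows(x, Wq, Wk, Wv):
    # x: (rows, span, C); every position attends within its own row only,
    # so each attention matrix is (span x span), not (H*W x H*W).
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    return softmax(q @ k.transpose(0, 2, 1) / np.sqrt(x.shape[-1])) @ v

def axial_block(x, hp, wp):
    """Residual block sketch: height-axis attention, then width-axis
    attention, added back onto the input. hp and wp are (Wq, Wk, Wv)
    projection triples for the two passes."""
    h = attend_rows(x.transpose(1, 0, 2), *hp).transpose(1, 0, 2)  # along height
    w = attend_rows(h, *wp)                                        # along width
    return x + w                                                   # residual add
```

Because the height pass spreads a pixel's information over its column and the width pass then spreads each row, perturbing a single input pixel changes the output everywhere after one block, which is the two-step routing property discussed above.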
right here achieves a 78.1 also not state-of-the-art but still pretty good and as they say it's the best fully attentional or standalone attention model on imagenet so where this model really shines is where you really have to make long-range connections between pixels and that's these kind of segmentation", "start_timestamp": "00:46:28", "end_timestamp": "00:47:12", "start_second": 2788, "end_second": 2832, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2788s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "tasks and i want to skip the tables right here yeah they're best at everything and go to the appendix where they have some examples of this so here you can see specifically this is the original image you have a ground truth and you have the differences between their model this axial deep lab and the panoptic deep lab that is a baseline for them and you can see that the failure cases here pretty much show how the axial deep lab is better i don't know if they are cherry picked or not but", "start_timestamp": "00:47:12", "end_timestamp": "00:47:54", "start_second": 2832, "end_second": 2874, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2832s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "at least you can see that at some point so it handles occlusions better it handles instances better so here you see that the ground truth separates the person from the tie and the axial attention is able to do this but the baseline is not able to do this correctly because it labels part of that white shirt also as and you can see why there's kind of a delimiter line here here here here but if you
have long range dependencies right if you have long range dependencies in the model the model will recognize wait that must be the same", "start_timestamp": "00:47:54", "end_timestamp": "00:48:34", "start_second": 2874, "end_second": 2914, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2874s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "thing as this thing here and this thing here and this thing here so that must be the same object it's simply that the shirt was occluded by the tie and goes beneath it and now appears again it's not part of the tie and it's not part of a different object it's actually part of the shirt so the long range attention you can see at these examples sometimes here okay this might not be an instance of super duper long range dependencies this is simply where the model performs better so you can see", "start_timestamp": "00:48:34", "end_timestamp": "00:49:13", "start_second": 2914, "end_second": 2953, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2914s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "here the ground truth has that surfboard segmented and the baseline does not though this can also just be you know there are a lot of tricks to make this work of course and you throw a lot of compute at it and sometimes you just get better numbers or part of the better numbers because of the additional compute right here what do we have so you can see occlusions it appears to handle occlusions in a better way and this might be due to this axial attention it might be due to the positional embeddings but you can see that the", "start_timestamp": "00:49:13",
"end_timestamp": "00:49:51", "start_second": 2953, "end_second": 2991, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2953s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "ground truth here has the laptop between the person's hands segmented the baseline cannot do that but the axial attention does do that and i don't know what this is honestly you can see though the axial attention also misses the fact that it should segment this in the background and this occlusion handling you can see best in this example where the person in the back reappears on both sides of that person so you can see that the axial attention manages to segment that where that is just a mutant person right", "start_timestamp": "00:49:51", "end_timestamp": "00:50:33", "start_second": 2991, "end_second": 3033, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2991s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "here though the ground truth is equally shaky i think there might be some ambiguity of how you can segment these images obviously but you can see the fact that there are long range dependencies probably helped with this saying that wait in this image there's this white stuff right here and there's this white stuff right here and connecting these two regions with attention probably helped in segmenting these to be the same object even though you can see there is a break in the object so there is a break", "start_timestamp": "00:50:33", "end_timestamp": "00:51:09", "start_second": 3033, "end_second": 3069, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=3033s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic
Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "no at no point is the object on the left uh touching or the segment on the left touching the segment on the right and still the model manages to put those into the same label category there is the last um last thing where they they want to research what their heads learn and usually you can do this right you can kind of visualize what the attention has learned so in this case right here in the column heads the way you have to read this is that this particular head right here um aggregates information from its column so everywhere where it lights", "start_timestamp": "00:51:09", "end_timestamp": "00:51:52", "start_second": 3069, "end_second": 3112, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=3069s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "up it there's a lot of information being routed you can see specifically in this here uh the heads of the people or the heads of the persons in the picture light up fairly well so for example this head right here is probably aggregating information a lot from this position right here and this head here is aggregating information from this position so you can deduce that that particular attention head probably deals with people's faces uh whereas that particular attention head probably deals you can see the attention is mostly on the grass right", "start_timestamp": "00:51:52", "end_timestamp": "00:52:32", "start_second": 3112, "end_second": 3152, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=3112s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "here and you 
can see the same for the row heads now their description here is that we notice that column head one corresponds to human heads while column head four correlates with the field only which you know you can interpret as this this seemed pretty clear but then they say something like row head six focuses on relatively large relatively local regions where column head five pools all over the image so row head six which is this thing right here you can see that okay it maybe focuses on small regions", "start_timestamp": "00:52:32", "end_timestamp": "00:53:13", "start_second": 3152, "end_second": 3193, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=3152s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "though you can see okay what like here you can get it that's a person but in other places i don't know where column head five pools over the whole image and this i don't know maybe they just needed something more to say because they put these pictures here they were like oh okay the column heads are really nice because this one's really nice because it you know just pays attention to the people and this one looks really nice because it pays attention to the field but we can't really put the column head", "start_timestamp": "00:53:13", "end_timestamp": "00:53:46", "start_second": 3193, "end_second": 3226, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=3193s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "attention without putting the row head attention but then none of the row heads really are like super distinctive on the particular thing in the image so we need to come up with something that we can say
and then he's like ah this one this is there's not a lot of attention so we need to contrast this with something then you would think that they contrast it with another row head but then there's no row head that does this whole image so there's like ah column at five yeah i'm i'm not sure if there's there's a", "start_timestamp": "00:53:46", "end_timestamp": "00:54:19", "start_second": 3226, "end_second": 3259, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=3226s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "bit of there's a bit of uh tactical writing going on here i suspect i mean it's still you know it's doing something uh cool but yeah there's there's a definitely an element of sales in when you do when you write research papers and just um not to this data but just props to the lines in front of the histograms makes it so much easier to read how big the stupid bars are why does everyone put the lines behind the histogram i probably do that myself and now i'm just i'm realizing how much easier that is all right there is a big", "start_timestamp": "00:54:19", "end_timestamp": "00:54:59", "start_second": 3259, "end_second": 3299, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=3259s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "hv3UO3G0Ofo", "text": "big big experimental section right here and there's a big appendix where you can read up all of the different numbers comparisons ablations what not um ultimately i just wanted to go over the method basically putting this into context with other things like putting this into context with stuff like big bird axial attention other positional encodings uh how it co how it relates to convolutions how it relates to 
feed forward networks and what convolutions did to feed forward networks and so on i hope you at least", "start_timestamp": "00:54:59", "end_timestamp": "00:55:34", "start_second": 3299, "end_second": 3334, "url": "https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=3299s", "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/hv3UO3G0Ofo/maxresdefault.jpg"} {"video_id": "9cHAjRWI2oQ", "text": "TilinGNN learning to tile with self-supervised graph neural network many problems in computer graphics face combinatorial optimizations which are typically solved by approximation algorithms or heuristic search methods in this work we explore whether a learning-based approach can solve a classical combinatorial geometric problem tiling our problem specifically we focus on tiling the interior of an arbitrary 2d shape using a given tile set while avoiding holes and tile overlaps our trained network can help produce", "start_timestamp": "00:00:00", "end_timestamp": "00:00:49", "start_second": 0, "end_second": 49, "url": "https://www.youtube.com/watch?v=9cHAjRWI2oQ&t=0s", "title": "TilinGNN: Learning to Tile with Self-Supervised Graph Neural Network (SIGGRAPH 2020)", "thumbnail": "https://i.ytimg.com/vi/9cHAjRWI2oQ/maxresdefault.jpg"} {"video_id": "9cHAjRWI2oQ", "text": "tilings in time roughly linear to the number of candidate tile locations significantly outperforming traditional combinatorial search our learn-to-tile approach given an input tileset we first enumerate candidate tile locations then we generate random shapes to crop and locate candidate tile locations after that we create a graph to describe each tile placement and train our network to predict tile placements with our self-supervised loss at test time we apply the trained network to predict tile locations for", "start_timestamp": "00:00:49", "end_timestamp": "00:01:37", "start_second": 49, "end_second": 97, "url":
"https://www.youtube.com/watch?v=9cHAjRWI2oQ&t=49s", "title": "TilinGNN: Learning to Tile with Self-Supervised Graph Neural Network (SIGGRAPH 2020)", "thumbnail": "https://i.ytimg.com/vi/9cHAjRWI2oQ/maxresdefault.jpg"} {"video_id": "9cHAjRWI2oQ", "text": "arbitrary shapes where we first locate tile placements and progressively fill the shape with the help of our network overall this work has three technical contributions first we model this tiling problem as an instance of graph learning second we design a graph convolutional neural network to predict tile placements via graph convolutions here we call our network TilinGNN third we define loss terms directly on the network output so TilinGNN can be trained with self-supervision interactive design interface we provide", "start_timestamp": "00:01:37", "end_timestamp": "00:02:23", "start_second": 97, "end_second": 143, "url": "https://www.youtube.com/watch?v=9cHAjRWI2oQ&t=97s", "title": "TilinGNN: Learning to Tile with Self-Supervised Graph Neural Network (SIGGRAPH 2020)", "thumbnail": "https://i.ytimg.com/vi/9cHAjRWI2oQ/maxresdefault.jpg"} {"video_id": "bM9BXu9lPZ0", "text": "everything we as human beings have created on this planet was essentially first created in our minds all that you see which is human work on this planet first found expression in the mind then it got manifested in the outside world so one thing we need to understand is the wonderful things that we have done on this planet and the horrible things that we have done on this planet both have come from the human mind so if we are concerned as to what we create in this world it's extremely important that first of all we learn to create the right things", "start_timestamp": "00:00:00", "end_timestamp": "00:00:52", "start_second": 0, "end_second": 52, "url": "https://www.youtube.com/watch?v=bM9BXu9lPZ0&t=0s", "title": "Law of Attraction simplified by Sadhguru", "thumbnail": "https://i.ytimg.com/vi/bM9BXu9lPZ0/maxresdefault.jpg"} {"video_id":
"bM9BXu9lPZ0", "text": "in our mind how we keep our minds if we do not have the power to keep our minds the way we want it what we create in the world is also going to be very accidental and haphazard so learning to create our minds the way we want is the basis of creating the world the way we want there is a wonderful story in the yogic lore on a certain day a man took a walk he went for a long walk accidentally unaware he walked into paradise fortunate isn't he he just took a walk and he landed up in paradise after this long walk he felt", "start_timestamp": "00:00:52", "end_timestamp": "00:01:36", "start_second": 52, "end_second": 96, "url": "https://www.youtube.com/watch?v=bM9BXu9lPZ0&t=52s", "title": "Law of Attraction simplified by Sadhguru", "thumbnail": "https://i.ytimg.com/vi/bM9BXu9lPZ0/maxresdefault.jpg"} {"video_id": "bM9BXu9lPZ0", "text": "a little tired so he thought oh i'm tired i wish i could rest somewhere he looked around there there was a nice tree underneath which there was very cushiony grass so it was inviting he went and put his head down there and slept after a few hours he woke up well rested and he thought oh i'm well rested but i'm feeling hungry i wish i had something to eat and he thought about all the nice things that he ever wanted to eat in his life and instantly all those things appeared in front of him you need to understand there are the", "start_timestamp": "00:01:36", "end_timestamp": "00:02:21", "start_second": 96, "end_second": 141, "url": "https://www.youtube.com/watch?v=bM9BXu9lPZ0&t=96s", "title": "Law of Attraction simplified by Sadhguru", "thumbnail": "https://i.ytimg.com/vi/bM9BXu9lPZ0/maxresdefault.jpg"} {"video_id": "bM9BXu9lPZ0", "text": "services like that hungry people don't ask questions food came and he ate stomach became full then he thought oh my stomach is full i wish i had something to drink all the nice things that he ever wanted to drink he thought about it and all of them just appeared in front of him 
drinking people also don't ask questions so he drank now with a little bit of alcohol in him you know charles darwin told you all of you were monkeys your tail fell away not me charles darwin told you that you were all monkeys and your tail fell away and then you", "start_timestamp": "00:02:21", "end_timestamp": "00:03:08", "start_second": 141, "end_second": 188, "url": "https://www.youtube.com/watch?v=bM9BXu9lPZ0&t=141s", "title": "Law of Attraction simplified by Sadhguru", "thumbnail": "https://i.ytimg.com/vi/bM9BXu9lPZ0/maxresdefault.jpg"} {"video_id": "bM9BXu9lPZ0", "text": "became human yes definitely the tail fell away but the monkey in yoga we always refer to an unestablished mind as markata which means a monkey why we are referring to the mind as a monkey is what are the qualities of a monkey one thing about a monkey is its unnecessary movement and another thing about the monkey is if i say you're monkeying somebody what does it mean imitation monkey and imitation have become synonymous so these two essential qualities of a monkey are very much the qualities of an unestablished mind", "start_timestamp": "00:03:08", "end_timestamp": "00:03:53", "start_second": 188, "end_second": 233, "url": "https://www.youtube.com/watch?v=bM9BXu9lPZ0&t=188s", "title": "Law of Attraction simplified by Sadhguru", "thumbnail": "https://i.ytimg.com/vi/bM9BXu9lPZ0/maxresdefault.jpg"} {"video_id": "bM9BXu9lPZ0", "text": "unnecessary movement you don't have to learn it from the monkey you can teach it to the monkey and imitation is the full-time job of the mind so when these two qualities are on a mind it is referred to as a monkey so this monkey became active within him he just looked around thought what the hell is happening here i asked for food food came i asked for drink drink came there must be ghosts around here and ghosts came oh the ghosts have come they're going to surround me and torture me he thought immediately the ghost surrounded him and started torturing him",
"start_timestamp": "00:03:53", "end_timestamp": "00:04:40", "start_second": 233, "end_second": 280, "url": "https://www.youtube.com/watch?v=bM9BXu9lPZ0&t=233s", "title": "Law of Attraction simplified by Sadhguru", "thumbnail": "https://i.ytimg.com/vi/bM9BXu9lPZ0/maxresdefault.jpg"} {"video_id": "bM9BXu9lPZ0", "text": "then he started screaming in pain and said oh they're going to kill me and he died just now he said he's a fortunate being the problem is he was sitting under a kalpavriksha or a wishing tree he asked for food food came he asked for drink drink came he asked for ghost ghost came he asked for torture torture came he asked for death death happened now don't go looking for these kalpavrikshas in the forest you can barely find a tree these days a well established mind a mind which is in a state of samyukti is referred to as a kalpavriksha", "start_timestamp": "00:04:40", "end_timestamp": "00:05:25", "start_second": 280, "end_second": 325, "url": "https://www.youtube.com/watch?v=bM9BXu9lPZ0&t=280s", "title": "Law of Attraction simplified by Sadhguru", "thumbnail": "https://i.ytimg.com/vi/bM9BXu9lPZ0/maxresdefault.jpg"} {"video_id": "bM9BXu9lPZ0", "text": "if you organize your mind to a certain level of organization it in turn organizes the whole system your body your emotion your energies everything gets organized in that direction once all these four dimensions of you your physical body your mind your emotion and the fundamental life energies are organized in one direction once you are like this anything that you wish happens without even lifting a little finger actually it would help to assist it with activity but even without doing any activity you can still manifest what you want", "start_timestamp": "00:05:25", "end_timestamp": "00:06:04", "start_second": 325, "end_second": 364, "url": "https://www.youtube.com/watch?v=bM9BXu9lPZ0&t=364s", "title": "Law of Attraction simplified by Sadhguru", "thumbnail":
"https://i.ytimg.com/vi/bM9BXu9lPZ0/maxresdefault.jpg"} {"video_id": "bM9BXu9lPZ0", "text": "if you organize these four dimensions in one direction and keep it unwavering in that direction for a certain period of time right now the problem with your mind is every moment it is changing its direction it is like you want to travel somewhere and every two steps if you keep changing your direction the question of you reaching the destination is very remote unless it happens by chance so organizing our minds and in turn organizing the whole system and these four basic dimensions of who you are right now in one direction if you do this you are", "start_timestamp": "00:06:04", "end_timestamp": "00:06:45", "start_second": 364, "end_second": 405, "url": "https://www.youtube.com/watch?v=bM9BXu9lPZ0&t=364s", "title": "Law of Attraction simplified by Sadhguru", "thumbnail": "https://i.ytimg.com/vi/bM9BXu9lPZ0/maxresdefault.jpg"} {"video_id": "bM9BXu9lPZ0", "text": "a kalpavriksha yourself anything that you wish will happen but right now if you look at your lives everything that you have wished for till now if it happens you're finished everything and everybody that you have desired for if all of that lands up in your house today could you live with that so if you want to become empowered it is also important that you become responsible as to what you ask for and what you don't right now the world situation is just this we are hugely empowered with technology today it doesn't take six billion", "start_timestamp": "00:06:45", "end_timestamp": "00:07:23", "start_second": 405, "end_second": 443, "url": "https://www.youtube.com/watch?v=bM9BXu9lPZ0&t=405s", "title": "Law of Attraction simplified by Sadhguru", "thumbnail": "https://i.ytimg.com/vi/bM9BXu9lPZ0/maxresdefault.jpg"} {"video_id": "bM9BXu9lPZ0", "text": "people to destroy this planet one man by pressing the wrong button can destroy the whole planet when we are empowered like this it's very important that our physical
action emotional action mental action and energy actions are controlled and properly directed if it is not so we become destructive self-destructive right now that is our problem the technology which is supposed to make our life beautiful and easy has become the source of all the problems in that we are destroying the very basis of our life which is the planet", "start_timestamp": "00:07:23", "end_timestamp": "00:08:00", "start_second": 443, "end_second": 480, "url": "https://www.youtube.com/watch?v=bM9BXu9lPZ0&t=443s", "title": "Law of Attraction simplified by Sadhguru", "thumbnail": "https://i.ytimg.com/vi/bM9BXu9lPZ0/maxresdefault.jpg"} {"video_id": "bM9BXu9lPZ0", "text": "so what should have been a boon we are making a curse out of it what has brought incredible levels of comfort and convenience to us in the last 100 years or so has also become a threat to our life simply because we are not in conscious action we are in a compulsive state of action so organizing our minds fundamentally means moving from a compulsive state of activity to a conscious state of activity you might have heard of people for whom they asked for something and beyond all expectations it came true for them generally this happens to
someone who has some faith in a god or in a temple or whatever who is how simple-minded faith works", "start_timestamp": "00:08:43", "end_timestamp": "00:09:21", "start_second": 523, "end_second": 561, "url": "https://www.youtube.com/watch?v=bM9BXu9lPZ0&t=523s", "title": "Law of Attraction simplified by Sadhguru", "thumbnail": "https://i.ytimg.com/vi/bM9BXu9lPZ0/maxresdefault.jpg"} {"video_id": "bM9BXu9lPZ0", "text": "only for those people who are simple-minded thinking people people who are too much thinking for them it never works a child-like person who has a simple faith in his god or his temple or whatever he goes to the temple and says shiva i want a house i don't know how you must make it for me now in his mind there are no negative thoughts will it happen will it not happen is it possible is it not possible these things are completely removed by the simple act of faith now he believes shiva will do it for him and it will happen", "start_timestamp": "00:09:21", "end_timestamp": "00:09:58", "start_second": 561, "end_second": 598, "url": "https://www.youtube.com/watch?v=bM9BXu9lPZ0&t=561s", "title": "Law of Attraction simplified by Sadhguru", "thumbnail": "https://i.ytimg.com/vi/bM9BXu9lPZ0/maxresdefault.jpg"} {"video_id": "bM9BXu9lPZ0", "text": "so is shiva going to come and build your house no i want you to understand god will not lift his little finger for you what you refer to as god is a source of creation as a creator he has done a phenomenal job there's no question about it could you think of a better creation than this is it in anybody's imagination to think anything better than what is there right now so as a creator he has done his job wonderfully well but if you want life to happen the way you want it because right now the very crux of your happiness and your", "start_timestamp": "00:09:58", "end_timestamp": "00:10:34", "start_second": 598, "end_second": 634, "url": "https://www.youtube.com/watch?v=bM9BXu9lPZ0&t=598s", "title": "Law of Attraction 
simplified by Sadhguru", "thumbnail": "https://i.ytimg.com/vi/bM9BXu9lPZ0/maxresdefault.jpg"} {"video_id": "bM9BXu9lPZ0", "text": "well-being is this if at all if you're unhappy the only and only reason why you're unhappy is life is not happening the way you think it should happen that's all it is so if life is not happening the way you think it should happen you're unhappy if life happens the way you think it should happen you are happy it's as simple as that so if life has to happen the way you think it should happen first of all how you think with how much focus you think how much stability is there in your thought and how much reverberance is there in the", "start_timestamp": "00:10:34", "end_timestamp": "00:11:14", "start_second": 634, "end_second": 674, "url": "https://www.youtube.com/watch?v=bM9BXu9lPZ0&t=634s", "title": "Law of Attraction simplified by Sadhguru", "thumbnail": "https://i.ytimg.com/vi/bM9BXu9lPZ0/maxresdefault.jpg"} {"video_id": "bM9BXu9lPZ0", "text": "thought process will determine whether your thought will become a reality or is it just an empty thought or how you do not create any impediments for your thought by creating negative thought process is something possible or not possible is destroying humanity what is possible and not possible is not your business it's nature's business your business is just to strive but what you want right now you're sitting here if i ask you two simple questions i want you to just look at this and answer this right now from where you're", "start_timestamp": "00:11:14", "end_timestamp": "00:11:55", "start_second": 674, "end_second": 715, "url": "https://www.youtube.com/watch?v=bM9BXu9lPZ0&t=674s", "title": "Law of Attraction simplified by Sadhguru", "thumbnail": "https://i.ytimg.com/vi/bM9BXu9lPZ0/maxresdefault.jpg"} {"video_id": "bM9BXu9lPZ0", "text": "sitting can you just fly off you say no right now from where you're sitting can you get up and walk you'll say yes what is the basis of this why you say no to 
flying and yes to walking because past experience of life many times you've gotten up and walked never did you fly off or in other words you're using the past experience of life as a basis for deciding whether something is possible or not possible or in other words you have decided that what has not happened till now cannot happen in your life in future", "start_timestamp": "00:11:55", "end_timestamp": "00:12:34", "start_second": 715, "end_second": 754, "url": "https://www.youtube.com/watch?v=bM9BXu9lPZ0&t=715s", "title": "Law of Attraction simplified by Sadhguru", "thumbnail": "https://i.ytimg.com/vi/bM9BXu9lPZ0/maxresdefault.jpg"} {"video_id": "bM9BXu9lPZ0", "text": "this is a disgrace to humanity and the human spirit what has not happened till now on this planet can happen tomorrow human beings are capable of making it happen tomorrow so what is possible and what is not possible is not your business that is nature's business nature will decide that you just see what is it that you really want and strive for that and if your thought is created in a powerful way without any negativity without any negative thoughts bringing down the intensity of the thought process it will definitely manifest the whole", "start_timestamp": "00:12:34", "end_timestamp": "00:13:10", "start_second": 754, "end_second": 790, "url": "https://www.youtube.com/watch?v=bM9BXu9lPZ0&t=754s", "title": "Law of Attraction simplified by Sadhguru", "thumbnail": "https://i.ytimg.com/vi/bM9BXu9lPZ0/maxresdefault.jpg"} {"video_id": "bM9BXu9lPZ0", "text": "existence today modern science is proving is just a reverberation of energy it is a vibration similarly your thought is also a vibration if you generate a powerful thought and let it out it will always manifest itself so generally people are using faith as a means to remove the negative thought today once you have become thinking human beings your faith is not too deep it doesn't matter how much faith you think you have somewhere doubts always crop up 
right now the way your minds are made this moment if god appears right here", "start_timestamp": "00:13:10", "end_timestamp": "00:13:55", "start_second": 790, "end_second": 835, "url": "https://www.youtube.com/watch?v=bM9BXu9lPZ0&t=790s", "title": "Law of Attraction simplified by Sadhguru", "thumbnail": "https://i.ytimg.com/vi/bM9BXu9lPZ0/maxresdefault.jpg"} {"video_id": "bM9BXu9lPZ0", "text": "you will not surrender to him you will want an investigation whether he is really god or not with this kind of mind you should not waste your time on faith so there is an alternative which is commitment if you simply commit yourself to creating what you really care for now once again your thought gets organized in such a way there is no such thing as whether it's possible or not possible there is no hurdle in your thought process your thought flows freely towards what you want once this happens making it happen will", "start_timestamp": "00:13:55", "end_timestamp": "00:14:33", "start_second": 835, "end_second": 873, "url": "https://www.youtube.com/watch?v=bM9BXu9lPZ0&t=835s", "title": "Law of Attraction simplified by Sadhguru", "thumbnail": "https://i.ytimg.com/vi/bM9BXu9lPZ0/maxresdefault.jpg"} {"video_id": "bM9BXu9lPZ0", "text": "also naturally follow so to create what you really care for first and foremost thing is that what you want must be well manifested in your mind that this is what i want is that what you really want you must look at it because any number of things in your life you have thought this is it the moment you reach there you realize that's not it it's the next one and the next one and the next one so what is it that one really wants is one thing first of all we must explore once that is clear and we are committed to creating it now", "start_timestamp": "00:14:33", "end_timestamp": "00:15:07", "start_second": 873, "end_second": 907, "url": "https://www.youtube.com/watch?v=bM9BXu9lPZ0&t=873s", "title": "Law of Attraction simplified by Sadhguru", 
"thumbnail": "https://i.ytimg.com/vi/bM9BXu9lPZ0/maxresdefault.jpg"} {"video_id": "bM9BXu9lPZ0", "text": "there is a continuous process of thought in that direction once you can maintain a steady stream of thought without changing direction definitely this is going to happen in your life or it will definitely manifest as a reality in your life so either you make this human form into your kalpavriksha or you make it into one big mess which is happening all over one reason why we have not created the kind of world that all of us would want to live in is too many people are busy looking up too many people are interested in other", "start_timestamp": "00:15:07", "end_timestamp": "00:15:53", "start_second": 907, "end_second": 953, "url": "https://www.youtube.com/watch?v=bM9BXu9lPZ0&t=907s", "title": "Law of Attraction simplified by Sadhguru", "thumbnail": "https://i.ytimg.com/vi/bM9BXu9lPZ0/maxresdefault.jpg"} {"video_id": "bM9BXu9lPZ0", "text": "planets they are not interested in this planet but every aspect of their life they are checking out with other planets and further too many people are in service of heaven they are not in service of this earth i think right now we need people who are in service of this earth not heaven i believe if heaven is closer to divine than where we are if that is so i believe they are little more organized than us they don't need help from us this has been a major problem on the planet anything that is valuable for a human being", "start_timestamp": "00:15:53", "end_timestamp": "00:16:35", "start_second": 953, "end_second": 995, "url": "https://www.youtube.com/watch?v=bM9BXu9lPZ0&t=953s", "title": "Law of Attraction simplified by Sadhguru", "thumbnail": "https://i.ytimg.com/vi/bM9BXu9lPZ0/maxresdefault.jpg"} {"video_id": "bM9BXu9lPZ0", "text": "anything that is good the highest aspects of human life have unfortunately been exported to heaven for example if you say love people say god is loving we do not know whether god is loving
or not human beings are capable of love it is very very important that people understand human beings are capable of love human beings are capable of compassion human beings are capable of joy and blissfulness all the good things that are possible for a human being unfortunately have been exported to heaven if we want to create the kind of world", "start_timestamp": "00:16:35", "end_timestamp": "00:17:16", "start_second": 995, "end_second": 1036, "url": "https://www.youtube.com/watch?v=bM9BXu9lPZ0&t=995s", "title": "Law of Attraction simplified by Sadhguru", "thumbnail": "https://i.ytimg.com/vi/bM9BXu9lPZ0/maxresdefault.jpg"} {"video_id": "bM9BXu9lPZ0", "text": "that we want to we need to understand whatever you refer to as god the idea of god has entered our mind only because we have seen creation around us because there is creation we have assumed a creator god is a great creator what you refer to as god is a source of creation that source of creation has not failed us has done a fantastic job but now the question is about management if you want to leave the management in the hands of the creator he will manage it in his own way according to his agenda but that's not", "start_timestamp": "00:17:16", "end_timestamp": "00:17:56", "start_second": 1036, "end_second": 1076, "url": "https://www.youtube.com/watch?v=bM9BXu9lPZ0&t=1036s", "title": "Law of Attraction simplified by Sadhguru", "thumbnail": "https://i.ytimg.com/vi/bM9BXu9lPZ0/maxresdefault.jpg"} {"video_id": "bM9BXu9lPZ0", "text": "what you want you want life to happen the way you want it now for example let's say all of you here are the india soccer team for the next world cup and i am the coach so these next four years everything that you need to know about football is taught to you everything that i know about football is poured into you in so many ways now the time to play the match has come you're on the field and the ball comes near your foot but you look at me and it's no good you've seen those coaches 
sitting there and boiling nothing happens", "start_timestamp": "00:17:56", "end_timestamp": "00:18:36", "start_second": 1076, "end_second": 1116, "url": "https://www.youtube.com/watch?v=bM9BXu9lPZ0&t=1076s", "title": "Law of Attraction simplified by Sadhguru", "thumbnail": "https://i.ytimg.com/vi/bM9BXu9lPZ0/maxresdefault.jpg"} {"video_id": "bM9BXu9lPZ0", "text": "because now once you're on the field it's your job so this is the same thing the creator has done a fantastic job now you're here it is for you and me to see how to manage this how we want it how to keep this world how in what condition would all of us enjoy best is something that we have to look at so at every stage in our life we tend to think this is it if this one thing happens everything will be fine with my life you reach there and you realize that's not it and you postpone it to something else and something else this is going on", "start_timestamp": "00:18:36", "end_timestamp": "00:19:12", "start_second": 1116, "end_second": 1152, "url": "https://www.youtube.com/watch?v=bM9BXu9lPZ0&t=1116s", "title": "Law of Attraction simplified by Sadhguru", "thumbnail": "https://i.ytimg.com/vi/bM9BXu9lPZ0/maxresdefault.jpg"} {"video_id": "bM9BXu9lPZ0", "text": "the first and foremost thing is you must be clear what is it that you really want if you do not know what you want the question of creating it doesn't arise if you look at what you really want what every human being wants is he wants to live joyfully he wants to live peacefully in terms of his relationships he wants it to be loving and affectionate or in other words all that any human being is seeking for is pleasantness within himself pleasantness around him if this pleasantness if it happens in our body we call this health and pleasure if it", "start_timestamp": "00:19:12", "end_timestamp": "00:19:56", "start_second": 1152, "end_second": 1196, "url": "https://www.youtube.com/watch?v=bM9BXu9lPZ0&t=1152s", "title": "Law of Attraction simplified by 
Sadhguru", "thumbnail": "https://i.ytimg.com/vi/bM9BXu9lPZ0/maxresdefault.jpg"} {"video_id": "bM9BXu9lPZ0", "text": "happens in our mind we call this peace and joy if it happens in our emotion we call this love and compassion if it happens in our energy we call this blissfulness and ecstasy this is all that a human being is looking for whether he is going to his office to work he wants to make money build a career build a family he sits in the bar sits in the temple he's still looking for the same thing pleasantness within pleasantness around if this is what we want to create i think it's time we addressed it directly and commit ourselves to", "start_timestamp": "00:19:56", "end_timestamp": "00:20:33", "start_second": 1196, "end_second": 1233, "url": "https://www.youtube.com/watch?v=bM9BXu9lPZ0&t=1196s", "title": "Law of Attraction simplified by Sadhguru", "thumbnail": "https://i.ytimg.com/vi/bM9BXu9lPZ0/maxresdefault.jpg"} {"video_id": "bM9BXu9lPZ0", "text": "creating it so you want to create yourself as a peaceful human being joyful human being loving human being a pleasant human being on all levels and do you also want a world like this a peaceful world a loving world a joyful world no no i want greenery i want food when we say a joyful world that means everything that you want has happened so this is all that you're looking for so all that you need to do is commit yourself to creating it to create a peaceful joyful and loving world both for yourself and everybody around", "start_timestamp": "00:20:33", "end_timestamp": "00:21:11", "start_second": 1233, "end_second": 1271, "url": "https://www.youtube.com/watch?v=bM9BXu9lPZ0&t=1233s", "title": "Law of Attraction simplified by Sadhguru", "thumbnail": "https://i.ytimg.com/vi/bM9BXu9lPZ0/maxresdefault.jpg"} {"video_id": "bM9BXu9lPZ0", "text": "you every day in the morning if you start your day with this simple thought in your mind that today wherever i go i will create a peaceful loving and joyful world if you fall 
down 100 times in the day what does it matter for a committed man there is no such thing as failure if you fall down 100 times 100 lessons to be learned if you commit yourself like this to creating what you really care for now your mind gets organized once your mind gets organized the way you think is the way you feel your emotion will get organized once", "start_timestamp": "00:21:11", "end_timestamp": "00:21:46", "start_second": 1271, "end_second": 1306, "url": "https://www.youtube.com/watch?v=bM9BXu9lPZ0&t=1271s", "title": "Law of Attraction simplified by Sadhguru", "thumbnail": "https://i.ytimg.com/vi/bM9BXu9lPZ0/maxresdefault.jpg"} {"video_id": "bM9BXu9lPZ0", "text": "your thought and emotion is organized your energies will get organized in the same direction once your thought emotion and energies are organized your very body will get organized once all these four are organized in one direction your ability to create and manifest what you want is phenomenal you are the creator in many ways why i am saying you are the creator is i want you to look at the nature of your life right now if you eat a banana in four hours time this banana becomes a human being there is something within you a life", "start_timestamp": "00:21:46", "end_timestamp": "00:22:22", "start_second": 1306, "end_second": 1342, "url": "https://www.youtube.com/watch?v=bM9BXu9lPZ0&t=1306s", "title": "Law of Attraction simplified by Sadhguru", "thumbnail": "https://i.ytimg.com/vi/bM9BXu9lPZ0/maxresdefault.jpg"} {"video_id": "bM9BXu9lPZ0", "text": "creating process a process which builds this body the manufacturer of this body is within you give him a banana he makes a human being out of that banana transforming a banana into a human being is not a small thing it is a phenomena it is just that this phenomena is happening within you unconsciously if you could only consciously manifest this making a banana into a human being you are the creator you are nothing less than that as the theory of evolution 
goes to make a monkey into a human being it took millions of years over an", "start_timestamp": "00:22:22", "end_timestamp": "00:23:01", "start_second": 1342, "end_second": 1381, "url": "https://www.youtube.com/watch?v=bM9BXu9lPZ0&t=1342s", "title": "Law of Attraction simplified by Sadhguru", "thumbnail": "https://i.ytimg.com/vi/bM9BXu9lPZ0/maxresdefault.jpg"} {"video_id": "bM9BXu9lPZ0", "text": "afternoon you can make a banana into a human being or whatever else a piece of bread that you eat into a human being so the very source of creation is functioning within you if you organize these four dimensions of mind emotion body and energy in one direction the source of creation is with you you are the creator what you want to create will happen to you effortlessly once you're organized like this now you are not a mess you are a kalpavriksha you have the power to create what you want there are tools and technologies as to how to", "start_timestamp": "00:23:01", "end_timestamp": "00:23:42", "start_second": 1381, "end_second": 1422, "url": "https://www.youtube.com/watch?v=bM9BXu9lPZ0&t=1381s", "title": "Law of Attraction simplified by Sadhguru", "thumbnail": "https://i.ytimg.com/vi/bM9BXu9lPZ0/maxresdefault.jpg"} {"video_id": "bM9BXu9lPZ0", "text": "organize the system in such a way that instead of being a psychological mess you can make yourself into your kalpavriksha this culture these traditions the whole technology of yoga is just about this transforming yourself from being just a piece of creation to the creator himself this is not in search of god this is in search of becoming a god this is not in search of divine this is in search of becoming divine because that which you call as divine that which is the source of existence is throbbing within you every moment of
by Sadhguru", "thumbnail": "https://i.ytimg.com/vi/bM9BXu9lPZ0/maxresdefault.jpg"} {"video_id": "bM9BXu9lPZ0", "text": "your life otherwise a piece of bread cannot become a human being in the course of an afternoon so shifting from being just a piece of flesh and blood to becoming a creator there is a whole science and technology for this there are tools to make this happen that which is the source of creation is functioning within you every moment of your life it is just that have you kept access to that dimension or not organizing the four basic elements of your life will give you that access there are tools and technologies to do this", "start_timestamp": "00:24:20", "end_timestamp": "00:24:57", "start_second": 1460, "end_second": 1497, "url": "https://www.youtube.com/watch?v=bM9BXu9lPZ0&t=1460s", "title": "Law of Attraction simplified by Sadhguru", "thumbnail": "https://i.ytimg.com/vi/bM9BXu9lPZ0/maxresdefault.jpg"} {"video_id": "bM9BXu9lPZ0", "text": "the whole science of yoga the whole technology that we refer to as yoga is just about this transforming yourself from being just a piece of creation to become a creator for example 100 years ago if i picked up something like this and i start speaking to someone who is across in another part of the world you would think it's some kind of a miracle either i must be a messenger or a son or maybe god himself but today this is just another gadget that every one of us carry and use today sitting here without", "start_timestamp": "00:24:57", "end_timestamp": "00:25:40", "start_second": 1497, "end_second": 1540, "url": "https://www.youtube.com/watch?v=bM9BXu9lPZ0&t=1497s", "title": "Law of Attraction simplified by Sadhguru", "thumbnail": "https://i.ytimg.com/vi/bM9BXu9lPZ0/maxresdefault.jpg"} {"video_id": "bM9BXu9lPZ0", "text": "using this instrument if i speak to someone in another part of the world it is still a miracle for you so this instrument happened because of the human mind wanting it to happen 100
years ago nobody thought this was possible but today it is just a common thing similarly many many many things which are not in our perception yet can be brought into our perception and our ability to create our lives can be greatly enhanced so first and foremost thing is to organize the mind and to organize your emotions body and", "start_timestamp": "00:25:40", "end_timestamp": "00:26:22", "start_second": 1540, "end_second": 1582, "url": "https://www.youtube.com/watch?v=bM9BXu9lPZ0&t=1540s", "title": "Law of Attraction simplified by Sadhguru", "thumbnail": "https://i.ytimg.com/vi/bM9BXu9lPZ0/maxresdefault.jpg"} {"video_id": "bM9BXu9lPZ0", "text": "energy in that line once this happens you're in touch with the fundamental life creating process within you once you're in touch with it once you have access to that power you have the power to create you have the power to create your life and your surroundings the way you want it because we have lost our power to create we are making a mess out of ourselves and the world around us if we operated as the true creator as it is operating within us creating this body for us if we could create our lives with the", "start_timestamp": "00:26:22", "end_timestamp": "00:26:57", "start_second": 1582, "end_second": 1617, "url": "https://www.youtube.com/watch?v=bM9BXu9lPZ0&t=1582s", "title": "Law of Attraction simplified by Sadhguru", "thumbnail": "https://i.ytimg.com/vi/bM9BXu9lPZ0/maxresdefault.jpg"} {"video_id": "F5aaXrIMWyU", "text": "all right today we're going to find out why ai is much better at governing people why poor people really should pay more taxes and how donald trump is just a normal human all right we'll dive into it we're looking at the ai economist by salesforce research now salesforce research has kind of created a simulated world environment where they can place agents in it and the agents they can move around they can collect resources they can trade those resources and they can use those resources to build
houses and that will earn them coins", "start_timestamp": "00:00:00", "end_timestamp": "00:00:41", "start_second": 0, "end_second": 41, "url": "https://www.youtube.com/watch?v=F5aaXrIMWyU&t=0s", "title": "The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/F5aaXrIMWyU/hqdefault.jpg"} {"video_id": "F5aaXrIMWyU", "text": "and each agent wants to maximize its own coins but also there's the government and the government can set taxes so they collect money from everyone and they redistribute it and the goal now is going to be that the ai handles both the agents and the taxes and we want to maximize the social welfare of the entire population all right that's the goal so the paper here is called the ai economist improving equality and productivity with ai driven tax policies by stephan zheng and alexander trott and other people from salesforce", "start_timestamp": "00:00:41", "end_timestamp": "00:01:21", "start_second": 41, "end_second": 81, "url": "https://www.youtube.com/watch?v=F5aaXrIMWyU&t=41s", "title": "The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/F5aaXrIMWyU/hqdefault.jpg"} {"video_id": "F5aaXrIMWyU", "text": "research and harvard university so as i said this is a simulated environment and the simulated environment works like this there is a 2d plane kind of like a game playing field and in this game there are agents here you can see the agents there are always four agents where oh down here what are you what are you doing in the corner come on be productive um the agents are in this world and they can do certain things they have certain actions at their disposal so first of all they can move around they can move down left right and so on", "start_timestamp": "00:01:21", "end_timestamp": "00:02:08", "start_second": 81, "end_second": 128, "url":
"https://www.youtube.com/watch?v=F5aaXrIMWyU&t=81s", "title": "The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/F5aaXrIMWyU/hqdefault.jpg"} {"video_id": "F5aaXrIMWyU", "text": "whenever they walk past a resource tile they collect the resource this is stone and this is wood so there are two kinds of resources and then the last actions the agents have is building a house one wood and one stone will create one house and the house gives you coins so this is a house and that will give you coins but how many coins you get is different from agent to agent and this represents the agent's different skill levels this is an abstraction and the kind of economic theory behind it is that the income inequality in people", "start_timestamp": "00:02:08", "end_timestamp": "00:02:48", "start_second": 128, "end_second": 168, "url": "https://www.youtube.com/watch?v=F5aaXrIMWyU&t=128s", "title": "The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/F5aaXrIMWyU/hqdefault.jpg"} {"video_id": "F5aaXrIMWyU", "text": "one of the main drivers of it is that they are skilled differently and therefore are able to convert one unit of labor into more money than another lower skilled worker so this is here represented by the fact that maybe if this agent here builds the house they'll get 50 coins but if this agent here would build the same house they'll only get 10 coins so we'll call this here a high skilled worker and this here a low skilled worker now the last thing sorry i thought last thing before but the very last thing the", "start_timestamp": "00:02:48", "end_timestamp": "00:03:26", "start_second": 168, "end_second": 206, "url": "https://www.youtube.com/watch?v=F5aaXrIMWyU&t=168s", "title": "The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)",
"thumbnail": "https://i.ytimg.com/vi/F5aaXrIMWyU/hqdefault.jpg"} {"video_id": "F5aaXrIMWyU", "text": "agents can do is they can trade so if one agent has too many resources and the other one has not enough they can trade those resources among each other for those coins so once you build a house you collect some coins you can then either go and collect more resources or you can use those coins in order to buy resources off of other people this guy this is unlucky no coins no houses and no resources look at them oh yeah so you also can't move across the water here um you can only move on the grass you can also not", "start_timestamp": "00:03:26", "end_timestamp": "00:04:06", "start_second": 206, "end_second": 246, "url": "https://www.youtube.com/watch?v=F5aaXrIMWyU&t=206s", "title": "The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/F5aaXrIMWyU/hqdefault.jpg"} {"video_id": "F5aaXrIMWyU", "text": "move through a house which gives you some interesting abilities because you can just build a house right here and um yes so and you can't move over other players but these are so the rules are pretty simple and the goal here is for the agents to maximize the number of coins they get in a thousand steps so the number h here is one thousand which is the number of steps that the agents can take before the game is over and it restarts again so each agent is using reinforcement learning in order to learn how to achieve the maximum number of", "start_timestamp": "00:04:06", "end_timestamp": "00:04:45", "start_second": 246, "end_second": 285, "url": "https://www.youtube.com/watch?v=F5aaXrIMWyU&t=246s", "title": "The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/F5aaXrIMWyU/hqdefault.jpg"} {"video_id": "F5aaXrIMWyU", "text": "coins now the policies of course going to be different depending on whether that is 
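The world rules described so far (agents collect wood and stone, one of each builds a house, a house pays out skill-dependent coins, and an episode lasts H = 1000 steps) can be sketched in a few lines of Python. This is an illustrative toy only, not Salesforce's actual environment code; the `Agent` class and all of its names are assumptions made for the sketch, and movement/trading are left out.

```python
import random

# Minimal sketch of the gather-and-build rules described above.
# Skill is modeled as coins earned per house (e.g. 50 vs 10).

H = 1000  # episode length: number of steps before the game resets


class Agent:
    def __init__(self, skill):
        self.skill = skill          # coins earned per house built
        self.wood = 0
        self.stone = 0
        self.coins = 0

    def collect(self, resource):
        # walking over a resource tile picks the resource up
        if resource == "wood":
            self.wood += 1
        elif resource == "stone":
            self.stone += 1

    def build_house(self):
        # one wood + one stone -> one house -> skill-dependent coins
        if self.wood >= 1 and self.stone >= 1:
            self.wood -= 1
            self.stone -= 1
            self.coins += self.skill
            return True
        return False


high, low = Agent(skill=50), Agent(skill=10)
for _ in range(H):
    for agent in (high, low):
        agent.collect(random.choice(["wood", "stone"]))
        agent.build_house()

# the high-skill agent converts the same labor into more coins
```

In the paper each agent's policy is learned with RL rather than scripted like this; the sketch only pins down the reward mechanics the video walks through.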
a high or a low skilled worker the catch here is that outside of this there is the government the government here let's draw this big house with the flag of our fictitious nation which is like this that's the flag and the government will observe what's happening here and they will issue a tax um taxes so it will issue a tax distribution now how do you imagine that so if you imagine the government says something like this for the first ten coins you earn", "start_timestamp": "00:04:45", "end_timestamp": "00:05:29", "start_second": 285, "end_second": 329, "url": "https://www.youtube.com/watch?v=F5aaXrIMWyU&t=285s", "title": "The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/F5aaXrIMWyU/hqdefault.jpg"} {"video_id": "F5aaXrIMWyU", "text": "you owe us five percent of that um for the next 10 coins so from 10 to 20 you earn you owe us 10 and so on so if you earn even more you owe us more and more percent of those extra coins this is what you might know as a progressive tax schedule the more you earn the more percentage-wise you pay on that extra earned money this is what you might be used to but there are other tax schedules and the exact histogram you see or the exact how many percent for which amount of coins that is the action of the government so", "start_timestamp": "00:05:29", "end_timestamp": "00:06:09", "start_second": 329, "end_second": 369, "url": "https://www.youtube.com/watch?v=F5aaXrIMWyU&t=329s", "title": "The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/F5aaXrIMWyU/hqdefault.jpg"} {"video_id": "F5aaXrIMWyU", "text": "the government decides on the taxes and the taxes are just collected from the income so if an agent earns these coins then it has to pay taxes to the government and the government will redistribute all the taxes it has collected equally among the population so if
you pay a lot you might lose through this process and if you just pay a little taxes you might gain through this process so that's it that is the basic premise of the game the agents are using reinforcement learning and i believe the newness of this paper is also that the", "start_timestamp": "00:06:09", "end_timestamp": "00:06:49", "start_second": 369, "end_second": 409, "url": "https://www.youtube.com/watch?v=F5aaXrIMWyU&t=369s", "title": "The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/F5aaXrIMWyU/hqdefault.jpg"} {"video_id": "F5aaXrIMWyU", "text": "government now is using reinforcement learning in order to determine the optimal tax policy there is kind of this inner loop here and there is this outer game where the government also uses rl and what does the government try to maximize good question it is a measure that's called social welfare now social welfare consists of two things and they have this here way down in the paper social welfare in this paper consists of two things first of all economic productivity which basically just means", "start_timestamp": "00:06:49", "end_timestamp": "00:07:26", "start_second": 409, "end_second": 446, "url": "https://www.youtube.com/watch?v=F5aaXrIMWyU&t=409s", "title": "The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/F5aaXrIMWyU/hqdefault.jpg"} {"video_id": "F5aaXrIMWyU", "text": "how many coins has anyone produced it doesn't matter who but just the total amount of coins produced the second one is income equality and this is related to the gini index so if you plot the cumulative distribution of wealth a fully equal society would be a straight line because 50 percent of the people would have 50 percent of the money and so on but almost all real societies have something like this where fifty percent of the people might have
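The bracketed tax schedule plus equal redistribution described above can be sketched directly. The first two bracket edges and the 5%/10% rates follow the example given in the video; the third bracket and its 20% rate are invented for illustration, since the exact schedule is precisely what the government agent chooses.

```python
# Illustrative marginal tax brackets: (lower edge, upper edge, rate).
# Only the 5% and 10% brackets come from the video's example.
BRACKETS = [(0, 10, 0.05), (10, 20, 0.10), (20, float("inf"), 0.20)]


def tax_owed(income):
    """Marginal tax: each rate applies only to the coins inside its bracket."""
    owed = 0.0
    for low, high, rate in BRACKETS:
        if income > low:
            owed += (min(income, high) - low) * rate
    return owed


def redistribute(incomes):
    """Collect taxes, then split the total pot equally among everyone."""
    taxes = [tax_owed(x) for x in incomes]
    rebate = sum(taxes) / len(incomes)
    return [x - t + rebate for x, t in zip(incomes, taxes)]


# a high earner pays more than the rebate and loses on net,
# a low earner pays less than the rebate and gains on net
print(redistribute([50, 10]))  # → [46.5, 13.5]
```

Note that the redistribution conserves the total number of coins; only the split between agents changes, which is exactly why the free-market-versus-taxes trade-off discussed next is about incentives rather than about the pot itself.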
ten percent of the money and the rest fifty percent of the people has the", "start_timestamp": "00:07:26", "end_timestamp": "00:08:04", "start_second": 446, "end_second": 484, "url": "https://www.youtube.com/watch?v=F5aaXrIMWyU&t=446s", "title": "The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/F5aaXrIMWyU/hqdefault.jpg"} {"video_id": "F5aaXrIMWyU", "text": "other ninety percent and the measure of inequality is this area here this is called the Gini index and one minus this area is what this paper has as an equality measure so the higher this number the more equal is the society in terms of their income distribution now what is actually optimized for here is this thing equality times productivity so you want both to be high your income equality and your productivity there's a trade-off here of course but you can have multiple ways to trade that off and that will give you", "start_timestamp": "00:08:04", "end_timestamp": "00:08:46", "start_second": 484, "end_second": 526, "url": "https://www.youtube.com/watch?v=F5aaXrIMWyU&t=484s", "title": "The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/F5aaXrIMWyU/hqdefault.jpg"} {"video_id": "F5aaXrIMWyU", "text": "the different outcomes they call this the social welfare function and that's the thing that the government rl agent optimizes for so you can see here already the free market even though it's the most productive produces the most coins because a free market means no taxes if you have no taxes then people are basically encouraged to earn more money because they don't have to pay taxes on them right as soon as you tax them they're less encouraged to earn more money and therefore if you have no taxes the", "start_timestamp": "00:08:46", "end_timestamp": "00:09:23", "start_second": 526, "end_second": 563, "url": 
"https://www.youtube.com/watch?v=F5aaXrIMWyU&t=526s", "title": "The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/F5aaXrIMWyU/hqdefault.jpg"} {"video_id": "F5aaXrIMWyU", "text": "most coins will be earned in total but the equality suffers so the equality is the lowest among these things considered if you compare that to the ai economist the ai economist achieves the highest social welfare it achieves the highest equality but it doesn't suffer as much in productivity as other systems here and the baseline systems are first of all the u.s federal system this is not particularly tied to the u.s this is basically every system or most of the systems that you have currently in the world", "start_timestamp": "00:09:23", "end_timestamp": "00:10:02", "start_second": 563, "end_second": 602, "url": "https://www.youtube.com/watch?v=F5aaXrIMWyU&t=563s", "title": "The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/F5aaXrIMWyU/hqdefault.jpg"} {"video_id": "F5aaXrIMWyU", "text": "is the progressive tax system and the Saez formula which i believe is an economic-theory-based system with a regressive tax schedule you can see them down here where the u.s federal is progressive meaning the more you earn the more percentage-wise you pay while the Saez formula is regressive which generally means the more you earn the less you pay i believe this was derived under some assumptions to be the optimal tax distribution and the ai economist we will come to this in a second", "start_timestamp": "00:10:02", "end_timestamp": "00:10:42", "start_second": 602, "end_second": 642, "url": "https://www.youtube.com/watch?v=F5aaXrIMWyU&t=602s", "title": "The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)", "thumbnail": 
"https://i.ytimg.com/vi/F5aaXrIMWyU/hqdefault.jpg"} {"video_id": "F5aaXrIMWyU", "text": "let's actually just look at one of these things first one of these games how this plays out the cool thing here is that they have pretty flashy animations so you can look how does one of these games turn out now this is a free market game and you can see the agents moving around collecting things building houses and you might notice that one of the agents namely agent one is just building all of the houses and generally just kind of being a dick being in everyone's face and kind of building things everywhere", "start_timestamp": "00:10:42", "end_timestamp": "00:11:16", "start_second": 642, "end_second": 676, "url": "https://www.youtube.com/watch?v=F5aaXrIMWyU&t=642s", "title": "The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/F5aaXrIMWyU/hqdefault.jpg"} {"video_id": "F5aaXrIMWyU", "text": "and the other ones don't and or or just very few like the light blue on the on the bottom left build some houses on the right you can see how the distribution of wealth is is structured and you see agent one ends up with most of the wealth now the size of the circle i think is the total productivity so you can see this grows over time mainly because agent one becomes so rich and if you analyze this if you analyze what's happening here then you'll see that agent one and i might be yeah they have a graph up here so so it", "start_timestamp": "00:11:16", "end_timestamp": "00:12:02", "start_second": 676, "end_second": 722, "url": "https://www.youtube.com/watch?v=F5aaXrIMWyU&t=676s", "title": "The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/F5aaXrIMWyU/hqdefault.jpg"} {"video_id": "F5aaXrIMWyU", "text": "is very interesting what happens this is kind of the same game so agent one here is this orange dot and agents two 
three and four are these dots here and this graph here is coin from trading so how much money they win or lose from trading now you the green bars are trading wood and the the brown bars are trading stone so you see agent number four which is the lowest skilled um the skill is just determined at the beginning of the episode it will just make all of its coins basically by selling wood and agent 3 will make all of its", "start_timestamp": "00:12:02", "end_timestamp": "00:12:47", "start_second": 722, "end_second": 767, "url": "https://www.youtube.com/watch?v=F5aaXrIMWyU&t=722s", "title": "The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/F5aaXrIMWyU/hqdefault.jpg"} {"video_id": "F5aaXrIMWyU", "text": "coins by selling stone and agent 2 will collect both and sell both and agent one will just spend money in trading so you'll have a specialization here agent one which is the highest skill one right here will buy resources in order to build more houses because it clearly profits from building lots and lots and lots and lots of houses so it will use that money to buy more resources rather than go in collecting them while all the other ones basically forego building houses in favor of they just collect the resources and they just trade them way", "start_timestamp": "00:12:47", "end_timestamp": "00:13:27", "start_second": 767, "end_second": 807, "url": "https://www.youtube.com/watch?v=F5aaXrIMWyU&t=767s", "title": "The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/F5aaXrIMWyU/hqdefault.jpg"} {"video_id": "F5aaXrIMWyU", "text": "to the agent one that's more profitable for them than building houses themselves so you see this kind of specialization emerging in these games which i find i find this to be pretty cool that you see something like this like a really stark division of labor emerging just from 
this very small set of rules and you can analyze this game in different ways they have a few more plots where this becomes quite apparent that these agents specialize so you see here resources collected", "start_timestamp": "00:13:27", "end_timestamp": "00:14:09", "start_second": 807, "end_second": 849, "url": "https://www.youtube.com/watch?v=F5aaXrIMWyU&t=807s", "title": "The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/F5aaXrIMWyU/hqdefault.jpg"} {"video_id": "F5aaXrIMWyU", "text": "if you have the lowest skill and the highest skill laborers the lowest skills mainly collect resources while the highest skill labor mainly goes for building things it doesn't collect resources but net income from building is really high while everyone else just doesn't build at all all right so we have a division of labor emerging now this was a free market let's actually compare the different algorithms so if you look at social welfare this is this thing", "start_timestamp": "00:14:09", "end_timestamp": "00:14:53", "start_second": 849, "end_second": 893, "url": "https://www.youtube.com/watch?v=F5aaXrIMWyU&t=849s", "title": "The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/F5aaXrIMWyU/hqdefault.jpg"} {"video_id": "F5aaXrIMWyU", "text": "here equality times productivity you can see that the ai economist will outperform over time over the training progress it will outperform all of the other systems so it will outperform the free market the u.s federal tax system and the Saez formula if trained for long enough which is to be expected right if you put rl onto a cost function it will then optimize that cost function but it's pretty cool to see that there's a lot of headroom
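The objective described here, equality times productivity, can be sketched as follows. Computing equality as simply one minus the Gini coefficient is an assumption about the exact normalization; the paper may rescale by a factor depending on the number of agents:

```python
def gini(coins):
    """Gini coefficient from the mean absolute difference (0 = fully equal)."""
    n = len(coins)
    total = sum(coins)
    if total == 0:
        return 0.0
    mean = total / n
    diff_sum = sum(abs(a - b) for a in coins for b in coins)
    return diff_sum / (2 * n * n * mean)

def social_welfare(coins):
    """Equality times productivity, the objective the planner optimizes."""
    equality = 1.0 - gini(coins)   # assumed normalization, see lead-in
    productivity = sum(coins)      # total coins produced, regardless of owner
    return equality * productivity
```

For four agents a maximally unequal endowment like [0, 0, 0, 100] gives a Gini of 0.75, so this welfare measure trades total output against how evenly it is spread.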
here over what we currently have now let's look at some of these", "start_timestamp": "00:14:53", "end_timestamp": "00:15:31", "start_second": 893, "end_second": 931, "url": "https://www.youtube.com/watch?v=F5aaXrIMWyU&t=893s", "title": "The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/F5aaXrIMWyU/hqdefault.jpg"} {"video_id": "F5aaXrIMWyU", "text": "strategies it comes up with so what do these games look like where the ai has imposed different tax strategies so this is with the Saez strategy you see that here again this inequality emerging with the yellow player here building most of the houses with the ai economist again there is inequality but you can see at the distribution that agent one only ends up with about half of the wealth where if you compare this to the free market here then agent one ends up with like two-thirds of the wealth right this", "start_timestamp": "00:15:31", "end_timestamp": "00:16:12", "start_second": 931, "end_second": 972, "url": "https://www.youtube.com/watch?v=F5aaXrIMWyU&t=931s", "title": "The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/F5aaXrIMWyU/hqdefault.jpg"} {"video_id": "F5aaXrIMWyU", "text": "is the game we saw before but there is not qualitatively that much of a difference but there is in the end result all right let's look at what these policies actually come up with so what is the tax policy that the ai comes up with so this tax policy outperforms on this social welfare metric and this is very interesting right so first of all you see that it zigzags it's like down up down up which is already weird so the first very weird thing is the spike at the very bottom so that thing here what's that thing", "start_timestamp": "00:16:12", "end_timestamp": "00:17:00", "start_second": 972, "end_second": 1020, "url": 
"https://www.youtube.com/watch?v=F5aaXrIMWyU&t=972s", "title": "The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/F5aaXrIMWyU/hqdefault.jpg"} {"video_id": "F5aaXrIMWyU", "text": "here those are the poorest people in your society and you're taxing them the highest right so just imagine this you're here uh downtrodden by life abandoned by society you have no money no house no nothing and you're just trying to get a job you're just getting like a little bit of money and you can buy a cheeseburger and then the government comes give us that us that money come on so basically this these are the poor and the poor in this system is just fu fu the poor now the reason why this happens is pretty clear", "start_timestamp": "00:17:00", "end_timestamp": "00:17:47", "start_second": 1020, "end_second": 1067, "url": "https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1020s", "title": "The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/F5aaXrIMWyU/hqdefault.jpg"} {"video_id": "F5aaXrIMWyU", "text": "right the reason why this happens is because you want to encourage people to go here to earn more money right so so it's not like the government makes any money from the poor people independently of how it how high it taxes them but it is a basically an incentive structure to make them move over to the somewhat more productive population because here it's assumed kinda that even the lowest skilled ones can move over a bit if you just tax them enough at the low brackets right so um this this is what i find to be", "start_timestamp": "00:17:47", "end_timestamp": "00:18:26", "start_second": 1067, "end_second": 1106, "url": "https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1067s", "title": "The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)", "thumbnail": 
"https://i.ytimg.com/vi/F5aaXrIMWyU/hqdefault.jpg"} {"video_id": "F5aaXrIMWyU", "text": "you just have to realize that it is so hard i believe it is almost impossible to encapsulate what we really want in a system into a formula into a cost function to be optimized it is so incredibly hard and you see that here of course it is going to result in a better social outcome but it just doesn't feel right to tax the poor at what 60 percent okay so f the poor right and then you get to this level right here and interestingly if you earn even more you'll be taxed high again right so this", "start_timestamp": "00:18:26", "end_timestamp": "00:19:09", "start_second": 1106, "end_second": 1149, "url": "https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1106s", "title": "The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/F5aaXrIMWyU/hqdefault.jpg"} {"video_id": "F5aaXrIMWyU", "text": "this we're kind of used to that you earn little you pay little you earn more you pay more but then comes this entire valley here what's up with that right like wtf and this is now of course the same reasoning as you have with the Saez formula where for the rich people you want to tax them less so that they are more productive such that they generate more coins and even though you tax them less percentage-wise they will end up paying more money in absolute terms because", "start_timestamp": "00:19:09", "end_timestamp": "00:19:51", "start_second": 1149, "end_second": 1191, "url": "https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1149s", "title": "The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/F5aaXrIMWyU/hqdefault.jpg"} {"video_id": "F5aaXrIMWyU", "text": "because you basically encourage them to produce more so that is i guess the reasoning
behind this but what you have to recognize what's happening here right what are we optimizing we're optimizing this productivity times equality right and what do we get you see you get two big valleys of attraction one here and one here and that means that this algorithm favors a two-class society right and i believe this is partially the limitations of this simulation here the fact that there are only four agents the", "start_timestamp": "00:19:51", "end_timestamp": "00:20:33", "start_second": 1191, "end_second": 1233, "url": "https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1191s", "title": "The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/F5aaXrIMWyU/hqdefault.jpg"} {"video_id": "F5aaXrIMWyU", "text": "fact that you can only do two things either collect or build right it encourages a two-class society this specialization that you saw right so you say these here are the money makers right and these here are the collectors and it is very hard to move from one group to the other because if you earn more coins as a collector you're here and you're really discouraged here if you move there you want to move all the way over here right now the people that are already over here if they earn an extra coin that doesn't bother them too much so", "start_timestamp": "00:20:33", "end_timestamp": "00:21:09", "start_second": 1233, "end_second": 1269, "url": "https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1233s", "title": "The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/F5aaXrIMWyU/hqdefault.jpg"} {"video_id": "F5aaXrIMWyU", "text": "they're very encouraged to earn more money but the poorer people on this side they're basically discouraged from earning more money because the system needs them to stay at that collector level right so the system encourages the two-class
society because we have not built social mobility into the into the into the equation we have not built a measure for social social mobility into the cost function and therefore the ai doesn't care that the poor people will stay poor and the rich people will stay rich", "start_timestamp": "00:21:09", "end_timestamp": "00:21:49", "start_second": 1269, "end_second": 1309, "url": "https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1269s", "title": "The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/F5aaXrIMWyU/hqdefault.jpg"} {"video_id": "F5aaXrIMWyU", "text": "uh it just knows that this is the best outcome for society overall given the cost function that we had again this just doesn't seem like fair to us like what we want we want someone to be able to make it over here right even if they start out from the bottom and so we'd have to we have to build that in so we have a system that is effing f the poor right no social mobility mobility no and then look at what happening at the end what's happening at the end this is beautiful very rich people these are the money maker right this is", "start_timestamp": "00:21:49", "end_timestamp": "00:22:32", "start_second": 1309, "end_second": 1352, "url": "https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1309s", "title": "The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/F5aaXrIMWyU/hqdefault.jpg"} {"video_id": "F5aaXrIMWyU", "text": "the this is the monopoly guy top hat monocle wearing scrooge mcduck bathing in coins this is where the the government makes their money and um the discrepancy is really stunning because you could also argue hey why don't we apply the same reasoning as we applied here and here right it's not is it not like the case that if the rich people if if you tax them lower they'll pay more money and so on i believe again this might be just a 
result of this how the simulation is set up so we'll move away quickly and we'll come", "start_timestamp": "00:22:32", "end_timestamp": "00:23:11", "start_second": 1352, "end_second": 1391, "url": "https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1352s", "title": "The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/F5aaXrIMWyU/hqdefault.jpg"} {"video_id": "F5aaXrIMWyU", "text": "back to this here is what i find particularly interesting about this paper which just confuses the heck out of me it is a double periodic game so it's an inner outer loop game what do i mean by that they have these episodes right here is the start and here is the end and they subdivide this into as we said 1 000 steps so an agent is here and they can do step step step step step and it can perform these actions this is the agent there are 1 000 steps here and the agent just tries to collect as much coin so this is your classic rl", "start_timestamp": "00:23:11", "end_timestamp": "00:23:53", "start_second": 1391, "end_second": 1433, "url": "https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1391s", "title": "The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/F5aaXrIMWyU/hqdefault.jpg"} {"video_id": "F5aaXrIMWyU", "text": "problem but also they divide this into 10 what they call periods and i'm just going to draw maybe four periods right so this thing here they call one period where the whole thing is an episode now the purpose of the period is that at the beginning of each period the government the government can impose a new tax schedule so the government doesn't only fix the taxes once but it can change the taxes over the course of the episode right now this is what i find i i just don't see why so now you're formulating the", "start_timestamp": "00:23:53", "end_timestamp": "00:24:39", "start_second": 1433, "end_second": 1479, 
"url": "https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1433s", "title": "The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/F5aaXrIMWyU/hqdefault.jpg"} {"video_id": "F5aaXrIMWyU", "text": "tax giving objective as a sequential decision making it's like the government saying well today we have high taxes but tomorrow we have low taxes and the day after that we have high taxes again and it just doesn't make sense to to for any government to do this um what you should do is you should set taxes once at the beginning of the episode and then see how that turns out and then try to maximize uh your tax schedule because all we're looking at um we're only ever looking at how the taxes are at the end right the things", "start_timestamp": "00:24:39", "end_timestamp": "00:25:15", "start_second": 1479, "end_second": 1515, "url": "https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1479s", "title": "The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/F5aaXrIMWyU/hqdefault.jpg"} {"video_id": "F5aaXrIMWyU", "text": "that we've examined are just the last taxes that the ai has issued we don't know the dynamic of what happens in between this might be super wild actually what the ai does in between and i just don't see the framing as a as a as a sequential decision problem and i believe this is just an over engineered thing because someone wanted a reason and here is the architecture right you see someone wanted a reason to put an lstm in there someone is thinking like well rl that means like sequential decisions and so on and rl", "start_timestamp": "00:25:15", "end_timestamp": "00:25:53", "start_second": 1515, "end_second": 1553, "url": "https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1515s", "title": "The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)", "thumbnail": 
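The double loop described here, a 1,000-step episode split into ten periods with the government allowed to issue a new tax schedule at each period boundary, could be skeletonized roughly as follows. All names and the toy policies are hypothetical stand-ins, not the paper's actual models:

```python
EPISODE_STEPS = 1000   # inner-loop agent steps per episode
NUM_PERIODS = 10       # outer-loop government decisions per episode
STEPS_PER_PERIOD = EPISODE_STEPS // NUM_PERIODS

def run_episode(planner_policy, agent_policy, num_agents=4):
    """Skeleton of the nested game: the planner acts once per period,
    the agents act at every step (a sketch, not the paper's code)."""
    coins = [0.0] * num_agents
    schedule = None
    for step in range(EPISODE_STEPS):
        if step % STEPS_PER_PERIOD == 0:
            # outer loop: a new tax schedule at each period boundary
            schedule = planner_policy(coins)
        # inner loop: every agent takes one environment step
        for i in range(num_agents):
            coins[i] += agent_policy(i, schedule)
    return coins

# toy stand-ins: a flat 10% rate, and agents whose per-step income is their index
final_coins = run_episode(lambda coins: 0.10, lambda i, rate: i * (1 - rate))
```

The one-decision-per-episode alternative the video argues for would collapse the outer loop to a single `planner_policy` call before the step loop, which is exactly the bandit framing.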
"https://i.ytimg.com/vi/F5aaXrIMWyU/hqdefault.jpg"} {"video_id": "F5aaXrIMWyU", "text": "in this outer loop the way i propose it would just be a one step per episode decision which is a bandit problem and as we all know bandits are boring so they didn't want this to be a bandit problem they wanted it to be a sequential problem and that's why they made this period thing which i find dumb so another factor here and i'm going to tell you how this relates to the weird rich people are taxed high another factor here is look at this it's a cnn an mlp an lstm and an mlp and the agent as well and i can tell you right now the cnn has", "start_timestamp": "00:25:53", "end_timestamp": "00:26:32", "start_second": 1553, "end_second": 1592, "url": "https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1553s", "title": "The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/F5aaXrIMWyU/hqdefault.jpg"} {"video_id": "F5aaXrIMWyU", "text": "two layers and the lstm has like 128 units in its hidden state so these are tiny tiny models and it is not model-based rl it's model-free rl with proximal policy optimization and the ability of these agents or planner to learn anything substantial here i believe is just not super strong right so i believe that these are rather dumb agents and you can see the tax rates given by the planner are fed into the agent model but i don't think that the agent given such a small model can actually adjust to these inputs", "start_timestamp": "00:26:32", "end_timestamp": "00:27:25", "start_second": 1592, "end_second": 1645, "url": "https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1592s", "title": "The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/F5aaXrIMWyU/hqdefault.jpg"} {"video_id": "F5aaXrIMWyU", "text": "because you have to do some pretty good logic in
order to from these tax brackets to determine uh how you should act right now what i think is happening is the agent just kind of is aware of its skill level and through its rewards it's trying to maximize its in future rewards and then when the government changes the tax rate it will not i am almost positive it will not directly change its response to that but it will kind of observe that something's happening in the world and then adjust maybe a little bit its overall", "start_timestamp": "00:27:25", "end_timestamp": "00:28:03", "start_second": 1645, "end_second": 1683, "url": "https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1645s", "title": "The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/F5aaXrIMWyU/hqdefault.jpg"} {"video_id": "F5aaXrIMWyU", "text": "strategy uh but not in that particular instance and it will be delayed or it will be like an overall strategy and this might be one of the reasons why the tax brackets here might be screwed up because who says who says if i were this ai what i could do is in period one through nine i make the taxes really low for the rich people so i just encourage everyone to make more money right like come on become more productive and i get the benefits of that and then in the last episode and last period right i just freaking jack up that final tax bracket", "start_timestamp": "00:28:03", "end_timestamp": "00:28:46", "start_second": 1683, "end_second": 1726, "url": "https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1683s", "title": "The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/F5aaXrIMWyU/hqdefault.jpg"} {"video_id": "F5aaXrIMWyU", "text": "it's like you you have lots of money give it to me right and then you just redistribute what you got there to the poor people in the very last period and thereby you achieve your goal of this social welfare 
function but of course this is not sustainable because all the rich people would just be kind of screwed through that and move down again but it's the end of the episode so what are they going to do so i think the fact how this is framed that there are just two different ways to get coins uh the fact that this is this", "start_timestamp": "00:28:46", "end_timestamp": "00:29:20", "start_second": 1726, "end_second": 1760, "url": "https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1726s", "title": "The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/F5aaXrIMWyU/hqdefault.jpg"} {"video_id": "F5aaXrIMWyU", "text": "periodical nature of the outer loop all might lead to something that becomes slowly more and more and more uninterpretable uh still cool though all right so the final thing they do this with humans yes real humans so they let humans try it and they have this interface here and the humans they behave quite differently from the ai so there are a few different things where the humans act but look at that here ai economist this is what the agents do right so this ai economist is the tax strategy so just take these", "start_timestamp": "00:29:20", "end_timestamp": "00:30:06", "start_second": 1760, "end_second": 1806, "url": "https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1760s", "title": "The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/F5aaXrIMWyU/hqdefault.jpg"} {"video_id": "F5aaXrIMWyU", "text": "developed tax strategies and let the humans be the agents so that the you you just want to observe how the agents act and whether or not the tax strategies also work when it's real humans acting in this environment and not rl agents so compare this to how the humans act the humans they just build their houses in like neat little packets or straight lines or stuff like this i just i just find it to be 
very funny now there are some things lacking in the human environment which i find really important so first of all they have no cost for", "start_timestamp": "00:30:06", "end_timestamp": "00:30:43", "start_second": 1806, "end_second": 1843, "url": "https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1806s", "title": "The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/F5aaXrIMWyU/hqdefault.jpg"} {"video_id": "F5aaXrIMWyU", "text": "moving which i guess is minor but um second of all they have no trade and i think that is that just kills the whole experiment because now of course what you're gonna get is the wealth is just going to be proportional to how much you get coins per house which is different for each agent right so to me that that is now a pointless experiment if you can't uh trade because the outcome is just predictable and i don't think that the human behavior changes in response to the different tax brackets i think they'll just", "start_timestamp": "00:30:43", "end_timestamp": "00:31:20", "start_second": 1843, "end_second": 1880, "url": "https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1843s", "title": "The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/F5aaXrIMWyU/hqdefault.jpg"} {"video_id": "F5aaXrIMWyU", "text": "do and however they can make money they'll make money they'll build more houses until it becomes unprofitable and that's it so i don't see the i don't see the value of these experiments even though they show that again the ai economist outperforms the other tax strategies in this equality times productivity metric and also in another metric that they measure um the second problem i have is for the human experiments they take this distribution here they say well the a this is one of the distributions that the ai came up with", "start_timestamp": "00:31:20", "end_timestamp": 
"00:31:55", "start_second": 1880, "end_second": 1915, "url": "https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1880s", "title": "The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/F5aaXrIMWyU/hqdefault.jpg"} {"video_id": "F5aaXrIMWyU", "text": "but you notice the lack of the fu poor people and the lack of this big spike here for the rich people which i find are the two notable features of the other distribution so i think there's quite a bit of variance in what this ai comes up with or maybe it's just because this is periodical but this is really confusing because they show and discuss that other distribution and now all of a sudden they say well we use this distribution that was also created by our ai and it seems to be qualitatively quite different", "start_timestamp": "00:31:55", "end_timestamp": "00:32:29", "start_second": 1915, "end_second": 1949, "url": "https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1915s", "title": "The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/F5aaXrIMWyU/hqdefault.jpg"} {"video_id": "F5aaXrIMWyU", "text": "in any case let's look at how the humans behave under the different strategies so under the Saez formula you'll see that the light blue person here is kind of spreading out a bit probably playing correctly everyone else is just neatly building their houses look at humans are so territorial and most of them they kind of stay in their little corner and they're like this is my corridor i'm gonna build my houses here in a nice thing and under the ai economist again you don't really see a different thing", "start_timestamp": "00:32:29", "end_timestamp": "00:33:06", "start_second": 1949, "end_second": 1986, "url": "https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1949s", "title": "The AI Economist: Improving Equality and Productivity with
AI-Driven Tax Policies (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/F5aaXrIMWyU/hqdefault.jpg"} {"video_id": "F5aaXrIMWyU", "text": "just because the taxes are different uh the qualitative behavior is quite the same it's just building straight lines and here i think the difference is more between the humans so i think it's not always the same humans and um the difference might be more between the humans and you kind of see that the humans clearly don't haven't really trained or discovered the optimal strategy they're just doing something and you what you're seeing is just a result of the taxation uh it's not different behavior and this here this this", "start_timestamp": "00:33:06", "end_timestamp": "00:33:38", "start_second": 1986, "end_second": 2018, "url": "https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1986s", "title": "The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/F5aaXrIMWyU/hqdefault.jpg"} {"video_id": "F5aaXrIMWyU", "text": "is the best okay watch the on the bottom right the human they're just first they do something and they're just walling off walling up the other players and this is this is the best i'm going to build a big beautiful wall and i'm going to have the orange guy pay for it it's donald trump in the game amazing and look at the end they actually managed to lock in the other players so they can't move anymore donald trump wins amazing though actually the yellow player appears to win economy-wise but what do you want with lots of money if", "start_timestamp": "00:33:38", "end_timestamp": "00:34:24", "start_second": 2018, "end_second": 2064, "url": "https://www.youtube.com/watch?v=F5aaXrIMWyU&t=2018s", "title": "The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/F5aaXrIMWyU/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "thank you for the introduction 
Riya we're excited to be giving this tutorial on meta learning throughout the tutorial we encourage you to be thinking about questions that you might have about the content that we're presenting and as you go through if you have a question you may come up to one of the four microphones but if you want to ask a question from your own seat without having to get up we have a link for that will allow you to post questions and also upvote other questions and we'll be monitoring that", "start_timestamp": "00:00:00", "end_timestamp": "00:00:34", "start_second": 0, "end_second": 34, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=0s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"}
{"video_id": "ByeRnmHJ-uk", "text": "and also asking questions throughout from that link so that's slido.com/meta but we'll also have the link on the future slides also we posted a pdf version of these slides at tinyurl.com/ICML-meta-slides and so if you don't want to be putting your phone up to take pictures of the slides you can look at the slides there as well great okay additionally we'll also be taking questions at the break and at the end of the tutorial so let's get started so a lot of the motivation for meta learning comes from", "start_timestamp": "00:00:34", "end_timestamp": "00:01:07", "start_second": 34, "end_second": 67, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=34s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"}
{"video_id": "ByeRnmHJ-uk", "text": "being able to learn from small amounts of data and in particular what we've seen is that meta learning thrives with large data sets if there's one thing to take away from the last few years of machine learning research I think it's that large diverse data sets plus large models leads to broad generalization we've seen this time and time again from systems trained on imagenet to transformer models trained on large machine translation systems to GPT-2 trained for large-scale language modeling and all this falls under the paradigm of", "start_timestamp": "00:01:07", "end_timestamp": "00:01:39", "start_second": 67, "end_second": 99, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=67s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"}
{"video_id": "ByeRnmHJ-uk", "text": "deep supervised learning but what if you don't have a large data set what if you're in domains such as medical imaging or robotics or translation of rare languages or recommendation systems in each of these situations we don't have a large data set for every possible task every possible situation or every possible person we want to personalize our machine learning system to or what if you want a more general-purpose AI system that can do many different things that you want to be able to continuously adapt and learn", "start_timestamp": "00:01:39", "end_timestamp": "00:02:06", "start_second": 99, "end_second": 126, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=99s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"}
{"video_id": "ByeRnmHJ-uk", "text": "on the job if so it's impractical to learn each and everything from scratch for doing this and so instead we want to be able to very quickly learn new things based on our previous experience and finally what if your data has a long tail for example what if the number of data points starts going down significantly as you encounter more objects or as you interact with new people hear new words and encounter new driving situations in these situations your standard machine learning systems will do well in the big data regime but", "start_timestamp": "00:02:06",
"end_timestamp": "00:02:37", "start_second": 126, "end_second": 157, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=126s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "as as you move towards having fewer examples these systems will start to break down they're prettier it not only in the long tail something but in all three of these situations these settings start to break the standard supervised learning paradigm ok so what I'd like to try out next is is it actually give you guys a test so supervised learning breaks down here but actually humans are pretty good at these situations and I want to give you a future learning test and your goal is I'll give you six training data points which are shown on", "start_timestamp": "00:02:37", "end_timestamp": "00:03:08", "start_second": 157, "end_second": 188, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=157s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "the left the three on the the first column are from the painter Brock and the the middle three are from Cezanne and your goal is to be able to classify the painter for the for the paintings shown on the right who painted that painting and this is a future learning problem because you only get six data points in order to do this how so you get six label data points for this binary classification problem okay so raise your hand if you think that the painting on the right was painted by Cezanne okay and raise your hand if you", "start_timestamp": "00:03:08", "end_timestamp": "00:03:41", "start_second": 188, "end_second": 221, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=188s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": 
"ByeRnmHJ-uk", "text": "think that the painter the painting was drawn by Brock okay great so most of you got the right answer so this is indeed by Brock and so and in the way that you could recognize this is that there's kind of some some more straight lines and more kind of high contrast lines in the painting and so how did you accomplish this so this sort of thing trading from learning from only six examples but have to be extremely hard for a lot of modern machine learning systems and yet all of you eight guys were able to do it or most of", "start_timestamp": "00:03:41", "end_timestamp": "00:04:11", "start_second": 221, "end_second": 251, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=221s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "you guys were able to do it quite well so the way that you were able to accomplish this was because you have previous experience you weren't trying to learn from these six examples from scratch and many of you probably haven't seen these particular paintings before or maybe you haven't even scene paintings from these particular artists before but you have experienced different shapes different textures you've probably seen other paintings before and do that previous experience you're able to figure out how to solve", "start_timestamp": "00:04:11", "end_timestamp": "00:04:36", "start_second": 251, "end_second": 276, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=251s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "this task from only six examples okay so now how about we get a machine learning system to solve this task depending on what era you're in you would probably answer it differently you might try to model the image formation process you might try them all the geometry of 
different objects in the image if you were using slightly more sophisticated techniques you might use something like HOG features or Haar features with a support vector machine or more recently maybe you try to fine tune from imagenet features or try to do", "start_timestamp": "00:04:36", "end_timestamp": "00:05:04", "start_second": 276, "end_second": 304, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=276s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"}
{"video_id": "ByeRnmHJ-uk", "text": "domain adaptation from other painters for example and maybe in the future we'll be doing something even more sophisticated so these different approaches may seem very distinct in the kind of approach that they're taking but they all share one thing in common which is all of them are different ways to inject previous knowledge or previous experience into the system and as you move down this list of priors you get fewer human-engineered priors and more data-driven priors and also as you", "start_timestamp": "00:05:04", "end_timestamp": "00:05:33", "start_second": 304, "end_second": 333, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=304s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"}
{"video_id": "ByeRnmHJ-uk", "text": "move down you get systems that work undoubtedly better and so in this tutorial we want to try to take this one step further and in particular we want to be able to learn priors explicitly from previous experience that lead to efficient downstream learning an entirely data-driven approach to acquiring these priors that is can we have these systems learn how to learn to solve tasks and this is what is known as meta learning in the rest of this tutorial Sergey will first talk about the problem statement and overview the general meta", "start_timestamp": "00:05:33", "end_timestamp": "00:06:05", "start_second": 333, "end_second": 365, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=333s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"}
{"video_id": "ByeRnmHJ-uk", "text": "learning problem then we'll be talking about different meta learning algorithms ranging from black-box adaptation approaches to optimization-based approaches to nonparametric methods then we'll discuss how we can develop bayesian variants of each of these methods then we'll talk about how meta learning has been applied to different application areas we'll take a short five-minute break which also allows for additional questions Sergey will then talk about meta reinforcement learning and we'll conclude by discussing", "start_timestamp": "00:06:05", "end_timestamp": "00:06:31", "start_second": 365, "end_second": 391, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=365s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"}
{"video_id": "ByeRnmHJ-uk", "text": "challenges and frontiers ok next Sergey will be talking about the problem statement and overview thanks chelsea and those of you that were trying to find the slides we did actually there was somebody who actually posted the link again if you go to the link on the slide here slido.com/meta the first question is actually a link to the slide deck so if you want the slide deck please check that out there alright so let's start with a discussion of how we can actually formulate the meta learning problem and there are really kind of two distinct", "start_timestamp": "00:06:31", "end_timestamp": "00:07:02", "start_second": 391, "end_second": 422, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=391s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail":
"https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "viewpoints on meta learning there's kind of a mechanistic view and a probabilistic view let me explain what I mean by these the mechanistic view looks at meta learning as a setting where there's a deep neural network model that can read in an entire data set and then make predictions for new data points training this network use a metadata set which itself consists of many data sets each for a different task and this view of meta learning makes it easier to implement meta learning algorithms so if you're actually coding something up in", "start_timestamp": "00:07:02", "end_timestamp": "00:07:32", "start_second": 422, "end_second": 452, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=422s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "tensorflow or PI torch the mechanistic view is probably the one that makes it clearest the probabilistic view treats meta learning as the problem of extracting prior information from a set of meta training tasks that allows for efficient learning of new tasks this view says that learning a new task basically used this prior plus a small amount of training data to infer the most likely posterior parameters that will allow you to solve this task and this view of meta learning makes it easier to understand meta learning", "start_timestamp": "00:07:32", "end_timestamp": "00:08:01", "start_second": 452, "end_second": 481, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=452s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "algorithms these are not two views that result in different methods they're actually two viewpoints that can be taken to understand the same methods so in this part of the tutorial I'll 
actually focus on the second view on the probabilistic view because our aim is really to help everybody to understand meta learning algorithms but we'll see the more mechanistic view emerge when we talk about particular practical instantiation of these methods okay so just to work towards a problem definition for meta learning let's first", "start_timestamp": "00:08:01", "end_timestamp": "00:08:28", "start_second": 481, "end_second": 508, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=481s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "start with a problem definition for regular supervised learning and cast it in a probabilistic framework so a lot of what I'm gonna say some of you might have already seen may be in a course on machine learning or a textbook but I just want to walk through it step by step because the meta learning problem definition will build on this so if we're doing supervised learning what we're really doing is we're finding the most likely parameters Phi given our data D so Phi denotes the parameters of your model so if your training for example a", "start_timestamp": "00:08:28", "end_timestamp": "00:08:56", "start_second": 508, "end_second": 536, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=508s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "deep neural network model Phi literally refers to the weights the D refers to your training data so it's a set of tuples of input-output pairs where the input might be something like an image and the output is maybe the label corresponding to the class of the object in that image now when we actually want to do this kind of maximum like that estimation problem we typically apply Bayes rule and rewrite it as the sum of log P of D given Phi plus log P of 
Phi and the first term is typically referred to as the likelihood of your data and", "start_timestamp": "00:08:56", "end_timestamp": "00:09:27", "start_second": 536, "end_second": 567, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=536s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "the second term is the prior or the regularizer so if you're using weight decay that corresponds to a Gaussian prior for example and if we factorize the likelihood if we assume independent and identically distributed data points then we get the familiar form shown here it's a sum over all of your data points of the log probability of the label Y I given the input X I and your parameters Phi so this is essentially supervised learning now of course there are some things that are a little bit problematic about this as Chelsea alluded to in the", "start_timestamp": "00:09:27", "end_timestamp": "00:09:57", "start_second": 567, "end_second": 597, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=567s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "motivation the models that will work best the most powerful models will typically require a large amounts of data so if your data is very limited it might be very difficult to get a very accurate posterior or very accurate estimate of Phi so the problem at its core that we're going to be dealing with a meta learning is how do you do a good job of estimating Phi when your data is limited and the way you're going to do that is by incorporating more data that is not exactly for the tasks that you want but somehow structurally related so", "start_timestamp": "00:09:57", "end_timestamp": "00:10:26", "start_second": 597, "end_second": 626, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=597s", "title": 
"Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "the question is how can we incorporate additional data and I should say as an aside this is very much the same kind of challenge that things like semi supervised learning and unsupervised learning deal with so in semi-supervised learning you incorporate additional data that doesn't have wise and and so forth in metal learning you incorporate additional data that we're going to call D meta terrain which is labeled data it's just labeled data for different tasks so D meta trained is actually a data set of data sets so it's a set of", "start_timestamp": "00:10:26", "end_timestamp": "00:10:55", "start_second": 626, "end_second": 655, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=626s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "data sets d1 through DN where each of those data sets di itself consists of a set of tuples X I and why I wear those why's our labels for a different task so we assume the tasks are somehow structurally similar but not actually the same so you can't just directly incorporate Dee Mehta trained as trainee dated or supervised learning let me give a little example this is based on a popular benchmark for meta learning called the mini image in a data set let's say that your few shot classification task requires you to do", "start_timestamp": "00:10:55", "end_timestamp": "00:11:25", "start_second": 655, "end_second": 685, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=655s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "five-way classification between cats dogs lions worms and stacks of bowls I don't know why you would want to do this task but 
let's say this is a few shot task and you have only a few examples of each image now those few examples are not enough to solve the task by themselves so we're gonna use D meta train which is a collection of data sets for other five-way classification problems so maybe one of them classifies you know birds mushrooms dogs singers and pianos for instance different tasks but some structural", "start_timestamp": "00:11:25", "end_timestamp": "00:11:52", "start_second": 685, "end_second": 712, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=685s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"}
{"video_id": "ByeRnmHJ-uk", "text": "similarity because they're all visual recognition tasks another example maybe the task you want to solve is a few shot regression problem you have a few examples of input-output pairs but then your meta training tasks consist of other curve fitting problems so other curves with a few sample input-output pairs or maybe you have some kind of speech recognition task or some kind of language translation task and so on so in all these cases you can formulate a set of meta training tasks that are not the same as the tasks you want to solve", "start_timestamp": "00:11:52", "end_timestamp": "00:12:20", "start_second": 712, "end_second": 740, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=712s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"}
{"video_id": "ByeRnmHJ-uk", "text": "but somehow structurally similar and typically these would come from your prior data now we could simply stop right there and treat meta learning as a nonparametric learning problem so you want to basically use D and D meta train together but oftentimes we want to use high capacity models we don't want to store all of our data and keep it around forever we'd like to somehow distill it into a model into a parametric model with learned model parameters so in meta learning we don't want to keep D meta train around forever what we'd like to", "start_timestamp": "00:12:20", "end_timestamp": "00:12:50", "start_second": 740, "end_second": 770, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=740s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"}
{"video_id": "ByeRnmHJ-uk", "text": "do instead is learn some meta parameters theta so we're going to use D meta train to learn theta and theta will basically contain all the information that we need for solving new tasks that we've extracted from D meta train so whatever we need to know about D meta train is going to be baked into theta via a meta learning process and that's essentially the essence of the meta learning problem now if we want to treat this probabilistically what we can do now is we can say well we can write out our p of phi given D comma", "start_timestamp": "00:12:50", "end_timestamp": "00:13:18", "start_second": 770, "end_second": 798, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=770s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"}
{"video_id": "ByeRnmHJ-uk", "text": "D meta train as an equation where we're integrating out these sufficient statistics of our meta training data called theta and this implies the assumption that Phi is conditionally independent of D meta train given theta which is very reasonable because we just said that theta should be whatever extracts all the necessary sufficient statistics from D meta train now in reality integrating out theta is computationally very very expensive so we wouldn't want to do this what we would want to do in practice typically", "start_timestamp": "00:13:18", "end_timestamp": "00:13:46", "start_second": 798, "end_second": 826, "url":
"https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=798s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "is use a maximum a posteriori estimate which is which means that we're going to approximate this integral with just a point estimate for theta star where theta star is whatever actually maximizes the log probability of theta given D meta train which again is a very standard thing to do in machine learning so this Arg max that I have written on the right side here this is the meta learning problem the meta learning problem is to pull out the right theta from your meta training data so that that theta contains everything you need", "start_timestamp": "00:13:46", "end_timestamp": "00:14:16", "start_second": 826, "end_second": 856, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=826s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "to know to efficiently solve new tasks and efficiently solve new tasks means figure out Phi so once you have theta the problem of getting Phi can be written as the Arg max of log P Phi given D comma theta star because you don't need the D mail train anymore that's all been baked into theta star okay so that's the basic problem formulation if anybody has any questions feel free to come up to the microphones and ask me otherwise I'm going to move on to a simple example yeah that's an excellent question so meta learning is", "start_timestamp": "00:14:16", "end_timestamp": "00:14:53", "start_second": 856, "end_second": 893, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=856s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "conceptually quite related to a number of other problem 
settings including transfer learning multitask learning you know even things like semi-supervised learning in that all of these problem settings deal with incorporating additional data that is not quite from your task but is going to help you solve your task more efficiently the main difference is that meta learning deals with a setting where you still have to do some amount of adaptation on your new task transfer learning formula in a certain ways can I", "start_timestamp": "00:14:53", "end_timestamp": "00:15:19", "start_second": 893, "end_second": 919, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=893s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "should be viewed as a type of metal learning as well I'll describe related problem settings a little bit more at the end of this section and maybe then things will be a little clearer okay let's continue so let's work through a little example of how we can design a cartoony version of a meta learning algorithm Chelsea will talk about much more practical algorithms this is just meant to be an illustration so first let's talk about the the adaptation let's say that we already have this theta star we don't care how we learn it and now we just", "start_timestamp": "00:15:19", "end_timestamp": "00:15:49", "start_second": 919, "end_second": 949, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=919s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "want to classify new data points using a small data set D so classifying new data points means your test data point X test goes in your label Y test comes out and this function that does this is somehow parameterize by theta star so theta star determines the mapping between X test and y test Waiters theta star come from well it comes 
from using your data set D which might be a small data set for your new task together with your theta star so D is going to be read in by some function and that function that reads in", "start_timestamp": "00:15:49", "end_timestamp": "00:16:21", "start_second": 949, "end_second": 981, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=949s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"}
{"video_id": "ByeRnmHJ-uk", "text": "D is parametrized by theta star sorry about that so that function is parametrized by theta star now you would also like to of course be able to learn this theta star using large amounts of meta training data which I'll come to in a second but if you can somehow use that meta training data to get theta star then that will process your data set D into Phi star and allow you to turn your test inputs into test labels so theta star is what parametrizes this function okay so now how do we actually train this thing well as I alluded to before it's", "start_timestamp": "00:16:21", "end_timestamp": "00:16:57", "start_second": 981, "end_second": 1017, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=981s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"}
{"video_id": "ByeRnmHJ-uk", "text": "going to involve this meta training data and the key idea behind setting up meta learning algorithms I think is best summarized by this sentence from a paper by Vinyals et al called matching networks which says that our training procedure is based on a simple machine learning principle which is that test and train conditions must match now let's unpack this a little bit what are the test conditions well test here refers to meta test right so meta test time is adaptation the test condition is that a model parametrized by theta star", "start_timestamp": "00:16:57", "end_timestamp": "00:17:26", "start_second": 1017, "end_second": 1046, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1017s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"}
{"video_id": "ByeRnmHJ-uk", "text": "reads in D outputs Phi star and Phi star is then used to classify new data points for your task so the training time conditions need to match so at meta training time you also need to have a model that reads in a data set which data set well a data set di from your meta training set it is going to be parametrized by theta it's going to output Phi star and that Phi star needs to be good for classifying points which points well that's maybe the puzzle so what is it that we're actually going to classify here what we need to do in", "start_timestamp": "00:17:26", "end_timestamp": "00:18:02", "start_second": 1046, "end_second": 1082, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1046s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"}
{"video_id": "ByeRnmHJ-uk", "text": "order to complete the meta learning problem definition is we need to reserve a little test set for each task so it's not enough to just have a training set the training set is what the model needs to read in but it then needs to be trained on something and what it's actually going to be trained on is a little test set for each task so for every one of our few shot tasks we're gonna assume that we have K training points but then also some number L of little test points and those test points are what's going to supervise the meta learning they are", "start_timestamp": "00:18:02", "end_timestamp": "00:18:30", "start_second": 1082, "end_second": 1110, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1082s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"}
{"video_id":
"ByeRnmHJ-uk", "text": "not used for adaptation they're just used for metal learning so D test is where X tests and Y tests will be sampled from so the game that you're playing then is read in D train output Phi and make sure that Phi is good for classifying points from D test for that same task so now we can actually complete the meta learning problem definition so the adaptation step we can write more compactly as some function f theta star of D train so f theta star reads in D train and outputs Phi star so now all we have to do is learn theta", "start_timestamp": "00:18:30", "end_timestamp": "00:19:07", "start_second": 1110, "end_second": 1147, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1110s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "such that Phi equals F theta D train is good for D test so for every task I you want to read in D train I and be good for D test I which means that we can write down the meta learning problem formulation like this theta star is the Arg max for the sum over all of your tasks of log P Phi I given D test I where Phi I is equal to F theta apply to D train so notice that we get 5 from D train but we met a train on D test we can also represent this with a graphical model so if you're into graphical models here's a", "start_timestamp": "00:19:07", "end_timestamp": "00:19:46", "start_second": 1147, "end_second": 1186, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1147s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "graphical model that represents this relationship so you have theta which are your global metal earned parameters for every task you have a Phi I and X train together with Phii determines y train and X tests together with Phii determines Y test and Y test is observed 
during meta-training but not observed during meta-testing that's why it's kind of half shaded there okay so this basically defines the meta-learning problem but let's kind of round out this explanation with a little bit of an overview of terminology because", "start_timestamp": "00:19:46", "end_timestamp": "00:20:17", "start_second": 1186, "end_second": 1217, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1186s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "we'll see a bunch of this terminology pop up again and again during the tutorial so we make a distinction between meta-training meta-testing training and testing so you're learning the parameters theta during a meta-training phase that meta-training phase trains on a collection of data sets each of which is separated into a training set and a test set so when we say training set we mean that small few-shot set for a particular task when we say test set we mean the corresponding test images when we say meta-training we mean the whole", "start_timestamp": "00:20:17", "end_timestamp": "00:20:50", "start_second": 1217, "end_second": 1250, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1217s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "set of tasks so meta-training consists of multiple tasks each one with a training set and a test set and meta-testing is what happens once you're done meta-training and you want to adapt to a training set for a new task so the set of data sets is called D meta-train that's what we refer to it as these are meta-training tasks we're gonna use T i to refer to meta-training tasks so these are all of our T i's this is our meta-test tasks I'm sorry and then sometimes you hear people say support set and query set so support",
"start_timestamp": "00:20:50", "end_timestamp": "00:21:32", "start_second": 1250, "end_second": 1292, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1250s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "refer is basically synonymous with training set and sometimes people use supports that just to avoid the confusion between meta training and training so if someone says support they mean the inner training set and when someone says quarry they're referring to the test the the inner test not the meta test just the test so the quarry is the thing that you actually want to classify correctly after reading in the support and if someone says like oh I have a case shot classification problem what they're referring to is the number of", "start_timestamp": "00:21:32", "end_timestamp": "00:22:01", "start_second": 1292, "end_second": 1321, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1292s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "examples if someone says I have a five-way classification problem they're referring to the number of classes so if you say have a five-shot five-way classification problem that means I have five classes each of which has five examples there's a little bit of confusion about the word shot sometimes it means the number of images per class and sometimes it means the total number of images usually we'll use it to mean the number of images per class so five shot five way means twenty-five data points okay now just to", "start_timestamp": "00:22:01", "end_timestamp": "00:22:29", "start_second": 1321, "end_second": 1349, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1321s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": 
"https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "wrap up a few closely related problem settings that are good to be aware of and this is coming back to that question about transfer learning from before so middle learning is closely related to a few other things that we can actually cast as you know within the same terminology so multitask learning deals with the problem of learning a model with parameters theta star that immediately solve multiple tasks so you can think of multitask learning as sort of zero shot metal learning so that corresponds to defining parameters that", "start_timestamp": "00:22:29", "end_timestamp": "00:22:55", "start_second": 1349, "end_second": 1375, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1349s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "immediately solve all the tasks at the same time this is usually not possible in metal learning problems you can't have one model that classifies you know that does the five Way classification with dogs and lions and also with you know the pianos and the cats but you can view multitask learning is a special case where Phi is just equal to theta another very closely related problem setting is type of parameter optimization and auto ml these can be cast as metal learning they're actually they are essentially metal learning", "start_timestamp": "00:22:55", "end_timestamp": "00:23:23", "start_second": 1375, "end_second": 1403, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1375s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "problems they're outside of the scope of this tutorial but I'll just mention briefly how they can be related so in hyper parameter optimization you can say that theta refers to your hyper 
parameters that's what you're going to get out of your meta-training set and Phi is the network weights so you'll learn your hyperparameters from D meta-train and then you'll use them to get Phi architecture search is the same deal theta refers to the parameters of your architecture and Phi is the actual weights in the model this is a very", "start_timestamp": "00:23:23", "end_timestamp": "00:23:48", "start_second": 1403, "end_second": 1428, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1403s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "active area of research unfortunately outside the scope of this tutorial but hopefully this will tell you a little bit about how they relate okay and next Chelsea will discuss a number of actual meta-learning algorithms that we can use based on this problem setting oh yes and we'd be happy to take any questions right now - yeah so one question from the audience can you elaborate more on the structural similarity that's required between the meta-training tasks yeah so in regard to the structural similarity between the meta-training", "start_timestamp": "00:23:48", "end_timestamp": "00:24:15", "start_second": 1428, "end_second": 1455, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1428s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "tasks we can actually make that notion formal the way we make it formal is we say there's a distribution over tasks there's a distribution p task and you assume that all of your meta-training tasks are drawn from that distribution and you assume that all of your meta-test tasks are drawn from the same distribution so this is the meta-learning analog of the standard supervised learning assumption now what does a distribution over tasks really mean well that's
sometimes ends up being a much more subjective notion if you", "start_timestamp": "00:24:15", "end_timestamp": "00:24:41", "start_second": 1455, "end_second": 1481, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1455s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "have a piece of code that generates your tasks and you can say well these need to be generated by the same code but of course in reality those tasks are probably produced by nature and there it becomes a much fuzzier line so Chelsea will also discuss a little bit about extrapolation and generalization that perhaps pertains to this great so before we actually start going about evaluating meta-learning algorithms or going about designing meta-learning algorithms we need to figure out how to actually evaluate a meta-learning", "start_timestamp": "00:24:41", "end_timestamp": "00:25:06", "start_second": 1481, "end_second": 1506, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1481s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "algorithm once we have one and so it's worth mentioning that a lot of the modern meta-learning advances and techniques were motivated by some work done by Brendan Lake in 2015 and Brendan introduced the Omniglot dataset which is much simpler than the mini-ImageNet dataset that Sergey was showing on the previous slides but allows us to really study some of the basics of meta-learning so the Omniglot dataset it has sixteen hundred twenty three characters from 50 different", "start_timestamp": "00:25:06", "end_timestamp": "00:25:37", "start_second": 1506, "end_second": 1537, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1506s", "title": "Learning to learn: An
Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "alphabets that are had many classes and few examples per class a few classes per class and what I find really appealing about these kinds of datasets is that they're more reflective of the statistics of the real world in the real world we have tremendous diversity in terms of the number of objects and number of items and people that we encounter and we don't encounter them over and over again we often encounter many new things constantly throughout our lifetime okay so proposes both discriminative and generative problems", "start_timestamp": "00:25:37", "end_timestamp": "00:26:35", "start_second": 1537, "end_second": 1595, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1537s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "for example an initial approaches for this data set and for other data sets for a few shot learning we're based off of Bayesian models and on parametric's and similar to what Sergey was mentioning before in addition to this which in many ways actually methods are doing quite well on these days you've also been using things like mini image net C far cub celeb a and other data sets for for evaluating meta training algorithms and many of these were not necessarily initially purposed for for medal learning but we're able to", "start_timestamp": "00:26:35", "end_timestamp": "00:27:06", "start_second": 1595, "end_second": 1626, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1595s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "kind of put them in to the purpose that we would like okay so this is similar to what was discussing earlier where we have some n way K shot 
classification problem such as image classification where we wanted to be able to perform learning from very small data sets so we might want to be able to learn from one example of five different classes to classify new examples or new images as being among one of the five classes shown on the left and the way that we can do this is we can take data from other image classes structure it in the same way as", "start_timestamp": "00:27:06", "end_timestamp": "00:27:42", "start_second": 1626, "end_second": 1662, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1626s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "what we're gonna be seeing at test time for example taking images of mushrooms and dogs and so on structuring it into these same five-way one-shot classification problems doing this for many different other image classes training a neural network in order to perform these types of things across these training classes such that at evaluation it is able to solve the problem on the top with held-out classes and this is an example that's specific to image classification and we're gonna be coming back to this", "start_timestamp": "00:27:42", "end_timestamp": "00:28:13", "start_second": 1662, "end_second": 1693, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1662s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "example a number of times because it's useful for comparing different approaches but the same sorts of ideas are also applicable to things like regression to language generation and prediction to skill learning really any machine learning problem you can construct in this way where you're training it on a number of machine learning problems and you want it to be able to generalize to learning
a new problem with a small amount of data ok so now that we know how to evaluate a meta-learning algorithm let's actually dig", "start_timestamp": "00:28:13", "end_timestamp": "00:28:45", "start_second": 1693, "end_second": 1725, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1693s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "into how we actually design these meta-learning algorithms so the general recipe and the general principle behind these algorithms is that we need to choose some form of inferring the parameters of a model Phi given our training data set and our meta parameters theta and then once we choose the form of this we can then optimize the meta parameters theta with respect to a maximum likelihood objective using our meta-training data okay and many of the different algorithms that we're gonna be looking at today really only differ in", "start_timestamp": "00:28:45", "end_timestamp": "00:29:13", "start_second": 1725, "end_second": 1753, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1725s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "step one choosing how we want to represent this inference problem essentially and so you can ask well can we just treat this as an inference problem and pretty clearly neural networks are actually quite good at inference so maybe we can just use a neural network to represent this function itself and that's exactly what the first approach will be so this is what we'll refer to as blackbox adaptation approaches and the key idea is for a neural network to represent this function that outputs a set of", "start_timestamp": "00:29:13", "end_timestamp": "00:29:43", "start_second": 1753, "end_second": 1783, "url":
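The black-box-adaptation idea just introduced — a function f with meta-parameters theta reads the whole training set and emits task parameters Phi, which a second model then uses for predictions — can be sketched with a deliberately tiny linear stand-in; nothing here is the tutorial's architecture, it only mirrors the data flow:

```python
import numpy as np

# Illustrative stand-in for black-box adaptation: f(theta, D_train) -> phi,
# then g(phi, x_query) -> prediction. Both maps are linear so the example
# stays small; a real implementation would use an RNN or set network for f.

def f(theta, D_train):
    """Map a data set to task parameters phi via pooled statistics."""
    feats = np.array([np.concatenate([x, [y]]) for x, y in D_train])
    return theta @ feats.mean(axis=0)          # phi

def g(phi, x_query):
    """Predict a label for the query using the inferred phi."""
    return float(phi[:2] @ x_query + phi[2])

theta = np.eye(3)        # meta-parameters; meta-training would tune these
D_train = [(np.array([1.0, 0.0]), 1.0), (np.array([0.0, 1.0]), 0.0)]
phi = f(theta, D_train)
print(g(phi, np.array([2.0, 2.0])))  # prints 2.5
```

The design point is that Phi is produced by reading the data set as a whole, not by running gradient descent on it.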
"https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1753s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "parameters given a data set and a set of meta parameters and for now we're going to be using deterministic or point estimate of this function key we told to note as f theta and of course we'll get back to Bayesian methods later and so we'll see Bayes a bit later okay so how do you actually try to design a neural network to do this well one thing you could do is you could use a recurrent neural network that takes in data points sequentially and produces a set of parameters Phi and so this recurrent neural network in this case will be", "start_timestamp": "00:29:43", "end_timestamp": "00:30:19", "start_second": 1783, "end_second": 1819, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1783s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "representing F theta and then we'll take the outputted parameters use those parameters for another neural network that's gonna make predictions about test data points and so these are gonna be the data points from D test okay and then once we have this model we can train it with standard supervised learning this is just a standard recurrent neural network so we can train it to maximize a log probability of the labels of the test data points given the test inputs and we can do this optimization across all of the tasks in", "start_timestamp": "00:30:19", "end_timestamp": "00:30:48", "start_second": 1819, "end_second": 1848, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1819s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "our meta training data set we can rewrite this 
loss function of performing a bag of evaluating predictions of a model as simply a loss function operating over the parameters Phi and the test data points so we're gonna write this right here and this this will be used mostly for convenience later on and then with this form we can write the full optimization problem as an optimization of the over the the parameters outputted by the neural network F theta and the test data set okay so now that we have this this optimization objective how do actually", "start_timestamp": "00:30:48", "end_timestamp": "00:31:25", "start_second": 1848, "end_second": 1885, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1848s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "what is the algorithm that's used to optimize this so what the algorithm looks like is we first sample a task from our meditating data set or a mini batches of tasks then for that task we have a data set di and will sample disjoint data sets D tre and I and D test I from that data set and then once we have so I guess what this looks like I say these are the images corresponding to tasks i we want to be able to partition this or basically sample this sample D train and sample t-tests from from this data set and so we'll assign", "start_timestamp": "00:31:25", "end_timestamp": "00:32:00", "start_second": 1885, "end_second": 1920, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1885s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "like so and then once we have D train and D test will compute the parameters using D train and then evaluate those predicted parameters using the test data points and it's quite important that d'\u00eatre and D tests are disjoint so that we're not training for memorization of the labels but 
instead training for generalization and then once we update our meta parameters of them were of course going to repeat this process for new tasks and if we use a mini batch of tasks the gradient in step four is gonna be averaged across that mini batch great", "start_timestamp": "00:32:00", "end_timestamp": "00:32:31", "start_second": 1920, "end_second": 1951, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1920s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "okay so there's the algorithm now how to actually represent the form of F data so the the form that I have written here is a recurrent neural network you could use something like an Ellis TM you could also use something that another memory augmented neural network like a neural Turing machine which has done it has been done in past work you could also use something like self attention or 1d convolutions or even really just a feed-forward network that then averages in some embedding space the key thing is that you want this you want these", "start_timestamp": "00:32:31", "end_timestamp": "00:32:58", "start_second": 1951, "end_second": 1978, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1951s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "networks to be able to take in sets of data points and often times you want it to be able to take in variable numbers of data points and so that's where I will be using these types of architectures that have the capability to take in take in sets and variable numbers of data points okay great so I know that we've gone over kind of this type of approach and how it works and and the different architectures what are some of the challenges that come up so one thing that you might ask is well if our neural network is outputting all 
of the", "start_timestamp": "00:32:58", "end_timestamp": "00:33:27", "start_second": 1978, "end_second": 2007, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1978s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "parameters of of another neural network this doesn't really seem scalable but if the the the neural network that's making inferences about test data points has millions of parameters then we need a neural network that outputs millions of a million million dimensional output and one idea that we can use to remedy this is we don't actually need to output all of the parameters of another neural network we really just need to output the sufficient statistics of that data set of the training tasks that allow us to make predictions about new data", "start_timestamp": "00:33:27", "end_timestamp": "00:33:57", "start_second": 2007, "end_second": 2037, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2007s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "points and so what we can do is we can take this take this architecture instead of outputting Phi we can output something like H where H is representing a low dimensional vector and then and this will be essentially representing information about the task everything that's needed about the task in order to make predictions and then you can combine this sufficient statistic H with another set of parameters theta G that are also metal learned along with the parameters of F such that with these with both H and theta we can make", "start_timestamp": "00:33:57", "end_timestamp": "00:34:26", "start_second": 2037, "end_second": 2066, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2037s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": 
"https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "predictions about new data points and then theta G can be something very high dimensional and with the combination of the two we'll be able to make predictions for a new task okay so the general form of what this looks like is you can kind of abstract oh wait is this notion of H and just write out the ability to make predictions given a training data set and a new test input outputting the corresponding label okay before I move on to the next type of approach are there any questions so the one question from the audience was does", "start_timestamp": "00:34:26", "end_timestamp": "00:34:59", "start_second": 2066, "end_second": 2099, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2066s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "the network need to be recurrent or are there other architectures that could work well absolutely so as I mentioned on the previous slide this could be something that's recurrent like Ellis gems and you're all Turing machines you could use something like self attention or work recursive models but you also don't need to have something that actually is like a sequence the sequence model could also simply being be something that has a feed-forward model and then average if you assume that you're trained is that has fixed-length than you could of", "start_timestamp": "00:34:59", "end_timestamp": "00:35:26", "start_second": 2099, "end_second": 2126, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2099s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "course also concatenate and use a fully connected network although these are the approaches on this slide tend to be a bit more scalable I think it's on hello hi 
um so this might be related to the few previous slide so I I'm trying to understand the difference between meta learning and they're learning to learn framework by Jonatan vector of the T / posts around 1990s so I think if I understand correctly the the basic idea of learning to learn is that you're trying to learn the inductive bias of the learning", "start_timestamp": "00:35:26", "end_timestamp": "00:36:06", "start_second": 2126, "end_second": 2166, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2126s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "problem which in this case is data start in your notation could you elaborate a little bit more in this similarity or the difference Thanks yeah so I'm not sure if I'm familiar with the particular work that you mentioned but in general many of the ideas that we're presenting are inspired by work that was done initially in the late 80s and early 90s with with older types of neural network approaches many of those approaches didn't specifically look at the few shot learning setting which were focusing on this tutorial but", "start_timestamp": "00:36:06", "end_timestamp": "00:36:37", "start_second": 2166, "end_second": 2197, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2166s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "looked at in general learning from from relatively large data sets and it's worth mentioning that actually this particular approach up here was actually done was was one of the approaches that was used in the 90s and and also by hawk radar at all in 2001 but some of the approaches that we'll be discussing in later parts of the tutorial are more new another question from the audience was can we use meta learning approaches to solve classical supervised 
learning problems and are there any benefits to doing so so I think that we'll get to this", "start_timestamp": "00:36:37", "end_timestamp": "00:37:11", "start_second": 2197, "end_second": 2231, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2197s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "question a bit at the end there are situations where I guess the main thing the main type of problem that you want to be able to address with meta learning techniques is settings where you want to be able to take in information about a task that has that takes on the form of some data whether it be fully supervised or weekly supervised and so if your supervisor problems setting doesn't have that sort of structure where you want to learn from data to solve new problems then I think it would be quite challenging to apply some of these", "start_timestamp": "00:37:11", "end_timestamp": "00:37:42", "start_second": 2231, "end_second": 2262, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2231s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "techniques but in other situations maybe your meta learning maybe your supervised learning problem does have that structure and these approaches would definitely but definitely do well ok so let's move on to the optimization based approaches so now that we just kind of talked about one way to kind of make this approach more scalable is there a way to infer all of the parameters of the neural network in a way that's scalable without having to train a neural network to output all of the parameters in a particular as Sergei", "start_timestamp": "00:37:42", "end_timestamp": "00:38:11", "start_second": 2262, "end_second": 2291, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2262s", "title": 
"Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "mentioned before you can view the problem of supervised learning as an inference problem where I goes in for a set of parameters using data and the way that we solve supervised learning problems is through optimization and these optimization perches are quite scalable so what if we treat the problem of inferring a set of parameters from data and using meta parameters as exactly an optimization problem and and this is what optimization based approaches do and so the key idea here is that we're going to acquire our task", "start_timestamp": "00:38:11", "end_timestamp": "00:38:38", "start_second": 2291, "end_second": 2318, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2291s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "specific parameters Phii through an optimization procedure that depends both on the training data and our meta parameters theta and the optimization will look something like this where we are optimizing objective that looks like the likelihood of the data given our toss parameters as well as the likely of our task parameters given our meta parameters we're essentially the meta parameters are serving as a prior now you might ask well what should the form of the prior be for our meta parameters well there's a lot of different", "start_timestamp": "00:38:38", "end_timestamp": "00:39:06", "start_second": 2318, "end_second": 2346, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2318s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "approaches a lot of different things that we could do here but one very successful form of prior knowledge that we've seen 
in deep learning, for example, is training from an initialization provided by another data set. In particular, what we've seen is that if we train on something like ImageNet and then fine-tune that model on other data sets, we're able to capture a lot of the rich information and supervision that exists in the ImageNet data set and use it for new tasks. So this is a very successful form of prior knowledge", "start_timestamp": "00:39:06", "end_timestamp": "00:39:36", "start_second": 2346, "end_second": 2376, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2346s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "and of course the way that it works is you have a set of pre-trained parameters theta, and you run gradient descent using the training data for your new task. OK, so this works really well in a number of different situations, but what if the training data for your new task has only a few data points, like the six data points that I showed in the example at the beginning? In this case things like fine-tuning are going to break down a bit, because they weren't actually trained for the ability to adapt very", "start_timestamp": "00:39:36", "end_timestamp": "00:40:01", "start_second": 2376, "end_second": 2401, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2376s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "quickly, and as a result you'll either overfit to those six examples or you won't be able to adapt quickly enough and move far enough from your initialization. Still, it would be quite nice if we could just run fine-tuning on our six examples and get some answer, some function; this is what we want to be able to do at test time. The key idea behind this approach 
is to explicitly optimize for a set of pre-trained parameters such that fine-tuning with a very small data set works very", "start_timestamp": "00:40:01", "end_timestamp": "00:40:30", "start_second": 2401, "end_second": 2430, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2401s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "well. What this looks like is: we take the fine-tuning process written here (this is just one step of gradient descent, but you could also use a few steps, up to like ten steps of gradient descent, for example), then we take where we ended up after fine-tuning (this can be phi_i, for example) and evaluate how well that generalizes to new data points for that task; this is measuring how successful fine-tuning was. Then we can optimize this objective with respect to the initial set of parameters, so we're going to", "start_timestamp": "00:40:30", "end_timestamp": "00:41:00", "start_second": 2430, "end_second": 2460, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2430s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "optimize for a set of pre-trained parameters such that fine-tuning gives us a generalizable function for that task. And of course we don't want to do this over just one task; we'll do this over all of the tasks in our meta-training data set, so that we can learn an initialization that's amenable to fine-tuning for many different types of tasks. OK, so the key idea is to learn a parameter vector that transfers effectively via fine-tuning. What does this look like at a somewhat more intuitive level? Say theta is the", "start_timestamp": "00:41:00", "end_timestamp": "00:41:31", "start_second": 2460, "end_second": 2491, "url": 
"https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2460s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "parameter vector that we're meta-learning, and phi_i star is the optimal parameter vector for task i. Then you can view the meta-learning process as the thick black line, where when we're at this point during meta-training, if we take a gradient step with respect to task 3, we're quite far from the optimum for task 3, whereas at the end of meta-learning, if we take a gradient step with respect to task 3, we're quite close to the optimum, and likewise for a number of other tasks. We refer to this procedure as model-agnostic meta-learning, in the", "start_timestamp": "00:41:31", "end_timestamp": "00:41:57", "start_second": 2491, "end_second": 2517, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2491s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "sense that it is agnostic to the model that you use and the loss function that you use, as long as both are amenable to gradient-based adaptation. OK, now that we've gone through the objective, let's go through what the algorithm actually looks like. We can take the algorithm that we showed before for the black-box adaptation approach, and derive the corresponding algorithm for an optimization-based approach: we can simply replace step 3, which was inferring the parameters with a", "start_timestamp": "00:41:57", "end_timestamp": "00:42:26", "start_second": 2517, "end_second": 2546, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2517s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "neural network, with a step 
that's actually optimizing for the parameters, in this case through gradient descent. So we'll sample a task, sample disjoint data sets for that task, infer parameters with gradient descent on the training data, and then update our meta-parameters using the test data points. Note that this does bring up a second-order optimization problem, because to compute the gradient in step four you have to differentiate through the gradient step taken in step three. In practice, a number of standard auto", "start_timestamp": "00:42:26", "end_timestamp": "00:42:54", "start_second": 2546, "end_second": 2574, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2546s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "differentiation libraries like TensorFlow and PyTorch can handle this quite gracefully, and really you don't have to worry about it too much at all; it also isn't particularly computationally expensive, but we will talk a bit more about ways to mitigate this in a few slides. OK, so how does this approach compare to the black-box adaptation approaches that we mentioned before? Let's bring up the general form that we talked about before, where you have some neural network that's taking in a training data set and a", "start_timestamp": "00:42:54", "end_timestamp": "00:43:21", "start_second": 2574, "end_second": 2601, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2574s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "test data point and producing a prediction for the test data point. It turns out that you can view the optimization-based approach in the same general form. Before, we were using a recurrent neural network to represent this function, but now we're using what I'll denote as f 
MAML to represent this function, and that is the function with parameters phi that takes in x_test and produces a prediction, where phi is defined as the initial meta-parameters updated by gradient descent on the training data. So essentially you can", "start_timestamp": "00:43:21", "end_timestamp": "00:43:52", "start_second": 2601, "end_second": 2632, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2601s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "view the MAML algorithm as a computation graph, just with this funny embedded gradient operator within it. Really, you can view it as a very similar approach, but one that has a lot more structure within it, namely the structure of optimization. With this view, we can also very naturally mix and match components of these different approaches. For example, one of the things we could do is learn the initialization, as MAML does, but also learn how to make gradient updates to the", "start_timestamp": "00:43:52", "end_timestamp": "00:44:23", "start_second": 2632, "end_second": 2663, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2632s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "initialization, and that's exactly what Ravi and Larochelle did in 2017, which actually preceded the MAML work. This computation-graph view will come back again as we discuss the third type of approach. Great, so questions: how is theta_g learned? This is coming back to the black-box adaptation. Great question. So, going back to the black-box case, we had sufficient statistics h that are produced by the neural network, and we also had theta_g that was used", 
"start_timestamp": "00:44:23", "end_timestamp": "00:45:03", "start_second": 2663, "end_second": 2703, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2663s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "to make predictions about the test data points, and in that case theta_g is optimized with all the other meta-parameters of f, so it's optimized just like all the other meta-parameters. Another question about the black-box adaptation: wouldn't it be trivial, meaning, why wouldn't h_i just learn to recognize which task is fed in and output something like a one-hot indicator for one of the n tasks? That's a good question; I think that in practice it doesn't. Do you have an answer for that? Yeah, maybe one way to", "start_timestamp": "00:45:03", "end_timestamp": "00:45:38", "start_second": 2703, "end_second": 2738, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2703s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "think about this is that it's kind of the same problem as memorizing labels in regular supervised learning. In the same way that supervised learning can overfit, meta-learning can also overfit. So if you start seeing that your model just outputs a task indicator, which can happen if you have a very small number of meta-training tasks, that's just an instance of overfitting; we'll talk about meta-overfitting towards the end of the tutorial. Great, one more. I don't know if you know this, but what's", "start_timestamp": "00:45:38", "end_timestamp": "00:46:03", "start_second": 2738, "end_second": 2763, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2738s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} 
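The MAML loop described in this part of the talk (sample a task, adapt with an inner gradient step on the training data, then update the initialization through that step using the test data) can be sketched in a few lines. This is a minimal illustration, not code from the tutorial: the task family (lines y = a*x with a random slope a), the learning rates, and all function names are invented for the example. A one-parameter linear model is used so the second-order meta-gradient, d L_test(phi)/d theta = L_test'(phi) * (1 - alpha * L_train''(theta)), can be written in closed form without an autodiff library.

```python
import random

random.seed(0)

def mean(v):
    return sum(v) / len(v)

def loss(theta, xs, ys):
    return mean([(theta * x - y) ** 2 for x, y in zip(xs, ys)])

def grad(theta, xs, ys):
    return 2.0 * mean([x * (theta * x - y) for x, y in zip(xs, ys)])

def hess(xs):
    # second derivative of the quadratic loss; constant in theta for this model
    return 2.0 * mean([x * x for x in xs])

def sample_task():
    # hypothetical task family: y = a * x, slope a drawn from [4, 6];
    # train and test sets are disjoint draws, as in the algorithm above
    a = random.uniform(4.0, 6.0)
    x_tr = [random.gauss(0.0, 1.0) for _ in range(10)]
    x_te = [random.gauss(0.0, 1.0) for _ in range(10)]
    return (x_tr, [a * x for x in x_tr]), (x_te, [a * x for x in x_te])

alpha, beta, theta = 0.05, 0.01, 0.0  # inner lr, outer lr, meta-initialization
for _ in range(2000):
    (x_tr, y_tr), (x_te, y_te) = sample_task()
    # inner adaptation step on the training data (step 3)
    phi = theta - alpha * grad(theta, x_tr, y_tr)
    # outer step: differentiate L_test(phi) through the inner step (step 4);
    # d phi / d theta = 1 - alpha * L_train''(theta), exact for this model
    meta_grad = grad(phi, x_te, y_te) * (1.0 - alpha * hess(x_tr))
    theta -= beta * meta_grad

# meta-test time: adapt to a held-out task with a single gradient step
(x_tr, y_tr), (x_te, y_te) = sample_task()
phi = theta - alpha * grad(theta, x_tr, y_tr)
```

Here the learned theta ends up near the center of the task distribution, so a single inner step moves most of the way to a new task's optimum; dropping the `(1.0 - alpha * hess(x_tr))` factor gives the first-order approximation discussed later in this section.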
{"video_id": "ByeRnmHJ-uk", "text": "theta_g in memory-augmented neural networks? Yeah, that's a great question. The memory-augmented neural networks paper by Santoro et al. basically just used a standard RNN to take in data points, including the test data point, and in that case theta_g was exactly the same parameters as theta in f. So h, the sufficient statistic, is simply the hidden state of an RNN, and there is weight sharing across theta_g and theta in f, represented by the weights that are shared across time in a recurrent", "start_timestamp": "00:46:03", "end_timestamp": "00:46:34", "start_second": 2763, "end_second": 2794, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2763s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "neural network. So this one is a crowd favorite, apparently: how is all this related to hypernetworks, where we're interested in giving the parameters of a model as output? Great, yeah, so the first black-box adaptation approach is also what is done in hypernetworks. I'm not completely sure about this, but I think the hypernetworks paper wasn't particularly focused on meta-learning problems and was looking at other problems where you're going to be outputting parameters of neural networks, but the approach, the algorithm", "start_timestamp": "00:46:34", "end_timestamp": "00:47:01", "start_second": 2794, "end_second": 2821, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2794s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "used is exactly the same as the black-box adaptation approaches that I mentioned before. This question I can answer myself: can we adopt MAML in a framework where we don't have a batch of tasks ahead of time and 
instead we get them sequentially? To hear the answer to this question you'll have to wait until the very end of the tutorial; on the last slide you'll see we'll answer it. OK, one quick audience question. So this is a question about MAML: if I understand it correctly, MAML essentially learns the base model, which", "start_timestamp": "00:47:01", "end_timestamp": "00:47:31", "start_second": 2821, "end_second": 2851, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2821s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "is kind of in the middle of the optimal parameters for the different tasks, but it is relying on an assumption that those optimal parameters for those different tasks are not too far away. If you are choosing a model space where the optimal parameters are far away, then maybe the middle point, although it is not too far away from each of the optimal parameters, might not be optimal for any of them. I noticed that in the original paper you were basically using a pretty simple", "start_timestamp": "00:47:31", "end_timestamp": "00:48:10", "start_second": 2851, "end_second": 2890, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2851s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "convolutional neural net which has fewer neurons. I'm just wondering, will MAML keep its performance if you are using a more complex model whose parameter space is more complex? Yeah, so first it's worth mentioning that we like to use this diagram for illustration purposes, in terms of understanding the algorithm, but it can also be a bit misleading, in that in many cases, particularly with heavily over-parameterized neural networks, there 
isn't just a single optimum for the correct solution; there's actually an", "start_timestamp": "00:48:10", "end_timestamp": "00:48:40", "start_second": 2890, "end_second": 2920, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2890s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "entire space of optima, and with those types of problems we see that it is actually a little bit easier to find something where you're simply one or a few gradient steps away. In fact, in a minute I'll talk about the expressive power of the MAML algorithm and its ability to adapt even when your tasks are extremely different. With regard to architectures, I'll talk a bit about that, but in practice we do find that it scales well to larger architectures, though it may require a bit more tuning than other meta", "start_timestamp": "00:48:40", "end_timestamp": "00:49:09", "start_second": 2920, "end_second": 2949, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2920s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "learning methods. OK, thank you. OK, great, leaving off where we left off: MAML exhibits this kind of structure, unlike the black-box adaptation approaches, in that it has this gradient operator inside of it, so it's actually performing an optimization both within the meta-training process as well as at meta-test time. One thing that might be quite natural to ask is: using that structure, does that mean that we can generalize better to tasks that are slightly out of distribution? This is of", "start_timestamp": "00:49:09", "end_timestamp": "00:49:41", "start_second": 2949, "end_second": 2981, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2949s", "title": "Learning to learn: An 
Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "course an empirical question that we're going to study, more so than a theoretical question. What we're going to do is compare MAML with black-box adaptation approaches such as SNAIL and Meta Networks. We looked at an image classification problem and we tried to plot task variability versus performance, and what we found consistently across the board is that as we move away from the meta-training tasks (with either zero shear or a scale of 1), performance of course drops for all", "start_timestamp": "00:49:41", "end_timestamp": "00:50:12", "start_second": 2981, "end_second": 3012, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2981s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "approaches, but MAML is able to perform better because it has the structure of being able to run gradient descent at test time, and at the very least you are still running gradient descent, so you won't do significantly worse than what you might do with a neural network that you can't really say anything about, one that's just outputting parameters, for example. OK, so you might say, well, we get this nice structure, but does it come at a cost? As the question alluded to before,", "start_timestamp": "00:50:12", "end_timestamp": "00:50:41", "start_second": 3012, "end_second": 3041, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3012s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "did you need to assume that the task parameters are very close to each other for different tasks? We studied this question by 
studying the expressive power of a single gradient step, basically the update that's used in the MAML function. What we can say is that for a sufficiently deep neural network function f, the MAML function on the right can represent anything that the recurrent neural network on the left can represent, which is that it can represent any function of the training data and the test input, and", "start_timestamp": "00:50:41", "end_timestamp": "00:51:10", "start_second": 3041, "end_second": 3070, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3041s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "we can show this under a few relatively mild assumptions, such as a non-zero learning rate as well as unique data points in the training data set. The reason why this is interesting is that it means MAML has the inductive bias of optimization procedures embedded within it, but without losing expressive power, without losing the expressive power of deep recurrent neural networks. OK, great, so let's go back to some of the motivation that we talked about a little bit with", "start_timestamp": "00:51:10", "end_timestamp": "00:51:38", "start_second": 3070, "end_second": 3098, "url": 
that use graphical models so this is a kind of a graphical model similar to the one that Sergei showed before where Phii is the top specific", "start_timestamp": "00:51:38", "end_timestamp": "00:52:08", "start_second": 3098, "end_second": 3128, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3098s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "parameters and theta is the meta parameters so if you want to do do meta learning or learn a prior theta in this graphical model it's gonna look like the following optimization where we're optimizing the logs likelihood of the data given given the parameters you can write this out similar to the equations that Surya showed earlier as an integration over the top specific parameters Phi which are not observed and the and this is simply empirical Bayes and the we can approximate is is it's quite intractable and so one thing", "start_timestamp": "00:52:08", "end_timestamp": "00:52:39", "start_second": 3128, "end_second": 3159, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3128s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "we could do is we could approximate this integral with you maximum a posteriori estimate of the Taos Pacific parameters Phi I and this is a fairly crude approximation but it but one thing interestingly that we can show is if you compute the map estimate basically gradient descent with early stopping corresponds to map inference under a Gaussian prior with mean theta and a variance that's a function of the number of gradient descent steps and the learning rate and this is exact in the in the linear case and approximate", "start_timestamp": "00:52:39", "end_timestamp": "00:53:11", "start_second": 3159, "end_second": 3191, "url": 
"https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3159s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "approximate in the non-linear case. So what we can see, through this approximate equivalence as well as the approximation of the integral with the MAP estimate, is that MAML is approximating inference in this hierarchical Bayesian model, which I think is useful for providing some intuition for the types of priors that we're learning in the meta-learning process. OK, so MAML is a form of implicit prior; are there other forms of priors that we can impose on the optimization procedure? One thing we", "start_timestamp": "00:53:11", "end_timestamp": "00:53:42", "start_second": 3191, "end_second": 3222, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3191s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "could do is gradient descent with an explicit Gaussian prior, the log likelihood of a Gaussian shown here, and this is what was done by Rajeswaran et al. in the implicit MAML paper. We could also have a prior used in Bayesian linear regression; in this case we can't impose a prior on all the weights of the neural network, that would be intractable, but we can impose it on the last layer of the neural network, on top of meta-learned features; this was done in ALPaCA. And we can also", "start_timestamp": "00:53:42", "end_timestamp": "00:54:11", "start_second": 3222, "end_second": 3251, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3222s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "and this is moving more away from the Bayesian view, but 
we can also do other forms of optimization on the last layer of the neural network, such as ridge regression, logistic regression, or support vector machines, and this form of prior essentially says that we want features that are useful for linear classification that can be performed with these methods. To my knowledge this last approach, MetaOptNet, is the current state of the art on few-shot image recognition benchmarks. OK, so now that we've talked about", "start_timestamp": "00:54:11", "end_timestamp": "00:54:40", "start_second": 3251, "end_second": 3280, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3251s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "optimization-based approaches, let's go through a couple of challenges with them. One challenge is: how do you choose an architecture that's effective for embedding this gradient descent procedure? One way to do this is architecture search, and the interesting thing that was found in this paper is that highly non-standard architectures, ones that were very deep and very narrow, were quite effective for use with MAML, and this is a bit different from standard architectures that work well for 
tuning the architecture that works well for it lastly one other challenge that you come up with with the medal the mammal algorithm is that you run into the second order optimization procedure and", "start_timestamp": "00:55:05", "end_timestamp": "00:55:34", "start_second": 3305, "end_second": 3334, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3305s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "this can exhibit different and different instabilities one idea for trying to mitigate this is really the dumbest idea you can come up with is to assume that the Jacobian of Phi with respect to theta is identity and simply copy the gradient with this rectify to be the gradient with respect to theta and this actually works somewhat surprisingly well oddly enough on relatively simple problems although anecdotally we found it not to work well as you try to move towards more complex problems like imitation learning and reinforcement", "start_timestamp": "00:55:34", "end_timestamp": "00:56:01", "start_second": 3334, "end_second": 3361, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3334s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "learning I another thing you can do is automatically learn the inner and outer learning rates you can also optimize only a subset of parameters in the inner loop such as the last layer or affine transformations at each layer you could also try to decouple the the learning rate in the back term statistics that each gradient step to have fewer decoupled parameters that might cause instabilities and finally you could also introduced introduced additional context variables into the architecture to allow for multiplicative interactions between", "start_timestamp": "00:56:01", "end_timestamp": 
"00:56:29", "start_second": 3361, "end_second": 3389, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3361s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "parameters and allow for a more expressive gradient. So my takeaway here is that there is a range of simple tricks that can help the optimization significantly. Great, so before we move on to nonparametric methods, let's take one question from the audience and potentially some submitted questions. Do you know how MAML compares to other first-order meta-learning algorithms, particularly Reptile? So your question is: how does it compare to first-order algorithms, I mean, can they get over some of these problems", "start_timestamp": "00:56:29", "end_timestamp": "00:57:00", "start_second": 3389, "end_second": 3420, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3389s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "you just presented on this slide? And you're asking: if you use Reptile, which is a first-order method for meta-learning, instead of MAML, can it get over some of the second-order gradient problems you just presented here? Yeah, so as I mentioned on the first idea I listed here, both first-order MAML and Reptile use this crude approximation; you can have a faster optimization procedure, and it potentially can be less stable, but the main benefit that you get from it is", "start_timestamp": "00:57:00", "end_timestamp": "00:57:34", "start_second": 3420, "end_second": 3454, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3420s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", 
"text": "this is faster and lower in memory. But we have found that there are a number of problems where these types of first-order methods don't work at all, and you need to use the second-order methods in order to optimize them well. Thank you. Why is it theta minus phi in the Gaussian prior? This is a few slides ago. The answer to this question is that the interpretation as a Gaussian prior basically says that the prior is on phi, and phi is normally distributed with a mean of theta; the variance of that prior depends on the", "start_timestamp": "00:57:34", "end_timestamp": "00:58:25", "start_second": 3454, "end_second": 3505, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3454s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "number of gradient steps you take, which is actually a very natural thing: the more gradient steps you take, the further away you can get from theta, and that corresponds to a prior with a wider variance. Next question: is there any guarantee or test that phi is not multimodal, as MAP will assume unimodality? Yeah, so certainly this distribution could be multimodal, and this approximation is ignoring that. In practice we have found that MAML can work quite well on multimodal problems, where you have", "start_timestamp": "00:58:25", "end_timestamp": "00:58:57", "start_second": 3505, "end_second": 3537, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3505s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "quite different functions that are represented in the task variables, although you do need a deeper neural network for that, and there are also approaches such as multimodal MAML, I believe, that try to tackle this problem head-on and 
enable you to get more efficient use of your neural network parameters and by by allowing it to represent multimodal distributions over five data a couple more quick ones in the case of orthogonal tasks with mammal just memorize so if the tasks are if a single function can represent both tasks", "start_timestamp": "00:58:57", "end_timestamp": "00:59:35", "start_second": 3537, "end_second": 3575, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3537s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "without relying on the data then it will just memorize the function and ignore the data in many cases can we do gradient descent for multiple steps to get Phi in mammal yeah absolutely so you can use a variable number of gradient steps and practice we found that up to five gradients that works well but in practice you can use you can use more than that if if you find it helpful for your algorithm I it does not introduce higher order terms than a second order optimization it still remains the second order optimization if you go through if", "start_timestamp": "00:59:35", "end_timestamp": "01:00:09", "start_second": 3575, "end_second": 3609, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3575s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "you go through the mouth okay great so let's move on to nonparametric approaches and so far we've talked about parametric methods and that we're gonna be learning a model that's parameterize by five and and what about using methods that don't have parameters five and the motivation here is that in low danger regimes nonparametric methods are quite simple and work quite well and during meta test time we're in a future learning setting and so we are in elite low data regime however during but 
a training we still want to be parametric", "start_timestamp": "01:00:09", "end_timestamp": "01:00:47", "start_second": 3609, "end_second": 3647, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3609s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "because we have large amounts of data across all of the meta training tasks so the key idea behind these approaches is can we use a parametric meta learner in order to produce a nonparametric learner okay and note that some of these methods that I'll be presenting do precede parametric approaches but we're presenting them in this in this group setting to aid in understanding okay so the key idea here is is here's a few shell learning problem and one of the things you might ask is well what one thing you could once very simple thing", "start_timestamp": "01:00:47", "end_timestamp": "01:01:20", "start_second": 3647, "end_second": 3680, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3647s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "you could do in this approach is just take your test data point and compare it to each of the the data points in your training data basically do nearest neighbors by by comparing to each of the images the this is a quite a simple opportunity quite valid for these types of problems the key question is in what space do you compare these images and with what distance metric I for example you could do pixel space or l2 distance but that probably wouldn't give you an effective metric over the similarity between these", "start_timestamp": "01:01:20", "end_timestamp": "01:01:48", "start_second": 3680, "end_second": 3708, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3680s", "title": "Learning to learn: An Introduction to Meta Learning", 
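The nearest-neighbors comparison just described can be sketched as follows; the toy flattened "images" and the random linear projection standing in for a learned encoder are hypothetical, not from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "images": flattened pixel vectors (hypothetical data).
train_images = rng.normal(size=(5, 64))        # 5 labeled training examples
train_labels = np.array([0, 1, 2, 0, 1])
test_image = train_images[3] + 0.01 * rng.normal(size=64)  # near example 3 (label 0)

def nearest_neighbor(query, examples, labels, encoder=lambda x: x):
    """Label the query with the label of the closest example, where 'closest'
    is L2 distance measured in the encoder's output space."""
    dists = np.linalg.norm(encoder(examples) - encoder(query), axis=-1)
    return labels[np.argmin(dists)]

# Pixel-space L2 distance (the naive baseline mentioned above).
print(nearest_neighbor(test_image, train_images, train_labels))  # -> 0

# A learned metric space would replace this random projection with a trained encoder.
W = rng.normal(size=(64, 16))
print(nearest_neighbor(test_image, train_images, train_labels, encoder=lambda x: x @ W))
```

The interesting design question is entirely inside `encoder`: pixel-space L2 works here only because the toy query is a perturbed training image, which is exactly why the talk argues for learning a semantic metric space instead.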
"thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "images and so the key idea behind these on parametric methods is to learn a metric space that leads to effective comparisons learn a more semantic metric space that leads to effective predictions on the test data points and then we learn how to compare these images to make effective predictions and so the first very simple approach for doing this is to train a Siamese Network to predict whether or not two images are the same so we can train a neural network that takes in two images and is trained to output whether or not they're", "start_timestamp": "01:01:48", "end_timestamp": "01:02:20", "start_second": 3708, "end_second": 3740, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3708s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "the same we're not so zero correspond so them not being the same one corresponds to them being the same class and repeat this through all of the data in your meta training data set and so once you've trained this neural network to be able to compare pairs of images about a test time you can then take your test data point compare it to each of the images in your training dataset see and then output the corresponding label to the image that is the closest okay so in this case meta training is give me a two-way classification problem and then", "start_timestamp": "01:02:20", "end_timestamp": "01:02:48", "start_second": 3740, "end_second": 3768, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3740s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "meta testing is going to be an anyway classification problem by doing all of these pairwise comparisons so now to improve upon this approach you might 
ask well can we make Mediterranean meta testing match can we train it such as it can actually perform effective anyway classification and this was kind of a key idea that Sergey alluded to in the problem definition which is to try to match Mediterranean meta testing in this case putting a an NBA classification problem with nearest neighbors into a neural network and so we're gonna be", "start_timestamp": "01:02:48", "end_timestamp": "01:03:20", "start_second": 3768, "end_second": 3800, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3768s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "feeding the train data set as shown on the left into an into the neural network comparing this to our test input shown on the bottom two and outputting these similarities and then training it such that the predictions that it gets out which corresponds to weighted nearest neighbors are correct with respect to each of the tasks in our meta training set cool so this is basically embedding nearest neighbors into a neural network the the metric that we're using can be correspond to a convolutional encoder or some other architecture to get the", "start_timestamp": "01:03:20", "end_timestamp": "01:03:54", "start_second": 3800, "end_second": 3834, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3800s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "embeddings of the training data points this method uses a bi-directional ostium and then we get a model that can do anyway classification is actually better trained for anyway classification now if you have more than one example per class what this will do is it will independently compare the test image with each of those examples per class and one of the things that might be nice to do is actually 
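A minimal sketch of the weighted-nearest-neighbors prediction described here: softmax attention over similarities to each support example, with the attention mass summed per class. The random embeddings are stand-ins for a trained encoder, and plain negative L2 distance replaces the bidirectional-LSTM similarity used by the actual method:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax.
    e = np.exp(z - z.max())
    return e / e.sum()

def matching_predict(query_emb, support_embs, support_labels, n_classes):
    """Weighted nearest neighbors: turn (negative L2) similarities into attention
    weights with a softmax, then sum each class's attention mass."""
    sims = -np.linalg.norm(support_embs - query_emb, axis=-1)
    attention = softmax(sims)
    probs = np.zeros(n_classes)
    for weight, label in zip(attention, support_labels):
        probs[label] += weight
    return probs

rng = np.random.default_rng(1)
support_embs = rng.normal(size=(6, 8))           # 3-way, 2-shot support set, already embedded
support_labels = np.array([0, 0, 1, 1, 2, 2])
query_emb = support_embs[4] + 0.05 * rng.normal(size=8)  # close to a class-2 example

probs = matching_predict(query_emb, support_embs, support_labels, n_classes=3)
print(probs.argmax())  # -> 2
```

Because the prediction is a differentiable function of the embeddings, training the encoder end-to-end on episodes makes meta-training match the N-way meta-test procedure, which is the point being made above.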
to better integrate the information across different examples for a class to aggregate class information into a sort of prototypical", "start_timestamp": "01:03:54", "end_timestamp": "01:04:24", "start_second": 3834, "end_second": 3864, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3834s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "embedding of that class such that when we do comparisons we're comparing at a class level rather than at the example level and this is exactly what prototypical networks does so they embed each of the data points for a given class such as the class corresponding to green blue or orange average these to compute a prototype shown as c k and then make predictions based off of embedding the test data point x test and comparing it to each of the prototypes and what this looks like is we simply measure the distance between x and each of the", "start_timestamp": "01:04:24", "end_timestamp": "01:04:59", "start_second": 3864, "end_second": 3899, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3864s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "prototypes this is denoted as d and then perform a softmax operation to decide which class it corresponds to and in this case d can correspond to Euclidean distance or cosine distance great so this is a simple approach that works quite well and there are a couple other extensions that we can make upon it so far we've looked at Siamese networks matching networks and prototypical networks these all correspond to some sort of embedding and then nearest neighbors either to examples or to prototypes one question that comes up is what if you need", "start_timestamp": "01:04:59", "end_timestamp": "01:05:31", "start_second": 3899, "end_second": 3931, "url":
"https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3899s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "to reason about more complex relationships between data points we don't simply want to average to create a single prototypical example per class to handle this we could learn a nonlinear relation module on the embeddings so we can essentially learn that d function by embedding each of the pairwise examples and producing predictions for each of them we could also learn more than one prototype for each class learning an infinite mixture of prototypes or perform message passing on the different classes and the", "start_timestamp": "01:05:31", "end_timestamp": "01:06:00", "start_second": 3931, "end_second": 3960, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3931s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "different examples in our data set okay so stepping up a bit let's actually try to think about how these different approaches compare so we can go back to the computation graph perspective and bring up the kind of computation graph view of the black box approaches as well as the optimization based approaches and the nonparametric approach can be viewed in the same exact way except here we're gonna have a new function which I'll denote as PN for prototypical networks where the function corresponds to the softmax of the", "start_timestamp": "01:06:00", "end_timestamp": "01:06:32", "start_second": 3960, "end_second": 3992, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3960s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "negative distance between the
embedded query test data point and the prototypes where the prototypes are defined as the average embedding of each of the classes and so with this view again we can mix and match components of this computation graph to create hybrid approaches among these different three and this includes things like CAML which can both condition on the data with a black box approach and run gradient descent on all the parameters another hybrid approach is to run", "start_timestamp": "01:06:32", "end_timestamp": "01:07:04", "start_second": 3992, "end_second": 4024, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3992s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "gradient descent on an embedding that produces a set of parameters so this sort of combines all three approaches and then finally we can do something like MAML but initialize the last layer to be a linear classifier equivalent to the prototypical networks classifier okay before we move on to the takeaways are there any questions about nonparametric approaches or any questions from Slido oh no there are roughly 200 questions and I can't sort through them all but there are a couple of questions that are kind", "start_timestamp": "01:07:04", "end_timestamp": "01:07:38", "start_second": 4024, "end_second": 4058, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=4024s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "of crowd favorites that are maybe good to take not about the parametric methods necessarily but one common theme is well can we do meta training where we have different numbers of classes for different tasks or different numbers of data points for different classes yeah absolutely so and actually this is a good
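The prototypical-networks computation restated here (class prototypes as mean embeddings, then a softmax over negative squared Euclidean distances) can be sketched as follows; the clustered random embeddings are hypothetical stand-ins for a trained encoder's outputs:

```python
import numpy as np

def prototypical_predict(query_emb, support_embs, support_labels, n_classes):
    """Prototypical networks: prototype c_k is the mean embedding of class k's
    support examples; class probabilities are a softmax over negative squared
    Euclidean distances from the query embedding to each prototype."""
    prototypes = np.stack([support_embs[support_labels == k].mean(axis=0)
                           for k in range(n_classes)])
    d = ((prototypes - query_emb) ** 2).sum(axis=-1)   # squared Euclidean distance
    logits = -d - (-d).max()                           # stabilize before exponentiating
    e = np.exp(logits)
    return e / e.sum()

rng = np.random.default_rng(2)
# 3-way, 2-shot episode: class k's embeddings cluster around the value 10 * k.
support_labels = np.array([0, 0, 1, 1, 2, 2])
support_embs = support_labels[:, None] * 10.0 + rng.normal(size=(6, 4))
query_emb = np.full(4, 10.0)   # sits near class 1's cluster

probs = prototypical_predict(query_emb, support_embs, support_labels, n_classes=3)
print(probs.argmax())  # -> 1
```

Swapping the squared-Euclidean `d` for cosine distance only changes one line, which is the choice the talk mentions; the averaging step is what makes the comparison class-level rather than example-level.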
question to talk about now too because um these different approaches can handle variable numbers of training data points variable numbers of data points per class and variable numbers", "start_timestamp": "01:07:38", "end_timestamp": "01:08:07", "start_second": 4058, "end_second": 4087, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=4058s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "of classes this is something that optimization based approaches can handle very gracefully because you can simply compute your loss function over different batches of data points very naturally blackbox approaches you can do it but to be able to do that you need to be able to train it with variable data set sizes and variable numbers of examples per class and nonparametric approaches can also handle it fairly gracefully but in practice we found them not to always work well with variable numbers of data", "start_timestamp": "01:08:07", "end_timestamp": "01:08:35", "start_second": 4087, "end_second": 4115, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=4087s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "points per class so another question which is kind of a summary of multiple questions but I think it applies to all three of the methods as well to what degree do meta learning methods actually learn to use the entirety of a test batch meaning that they classify multiple images at once versus are they classifying individual images one at a time all the questions about inductive versus transductive are also getting at this point yes I think it depends on your application so if you think that you are going to have just a", "start_timestamp": "01:08:35",
"end_timestamp": "01:09:02", "start_second": 4115, "end_second": 4142, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=4115s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "single example that you need to classify at test time then all of these approaches can handle that setting and then if you're in the transductive setting and you think that you're gonna actually have a test dataset that you need to label that corresponds to multiple classes then you could either independently classify them of course or try to actually combine information between them different approaches can handle this in different ways for example you can use the batch norm statistics to basically transmit", "start_timestamp": "01:09:02", "end_timestamp": "01:09:33", "start_second": 4142, "end_second": 4173, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=4142s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "information across different tasks or across different examples in your meta test set and with this you can actually transmit a fair amount of information through those batch statistics and there are also more recent approaches that try to learn loss functions that can effectively transmit information across unlabeled examples another common theme in regard to MAML and a few questions here is well what are the relative benefits of simply doing pre training followed by fine-tuning versus doing MAML and also can MAML itself be used as just a", "start_timestamp": "01:09:33", "end_timestamp": "01:10:04", "start_second": 4173, "end_second": 4204, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=4173s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail":
"https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "pre training stage without any fine-tuning so in practice MAML is certainly training for fine-tune-ability to a new task and so in practice you really do need to actually fine-tune it to a test task although perhaps if you only care about getting good features then maybe the features learned by the method are something that is reasonable okay so some intermediate takeaways of these different approaches one of the benefits of things like blackbox approaches is that they are easy to combine with a variety of", "start_timestamp": "01:10:04", "end_timestamp": "01:10:37", "start_second": 4204, "end_second": 4237, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=4204s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "learning problems such as supervised learning and as you'll see after the break reinforcement learning they do involve a challenging optimization problem and this is a bit of a subtle point because you're not embedding a known learning procedure into the meta learning process there's no inductive bias that points you in the right direction for how to learn from the data and as a result you have to learn how to learn from the data completely from scratch and as a result these methods can often be data", "start_timestamp": "01:10:37", "end_timestamp": "01:11:04", "start_second": 4237, "end_second": 4264, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=4237s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "inefficient because that optimization process requires time to actually learn how to learn from data and lastly these methods do have an intertwined model and architecture essentially
the model that you're using to make predictions about data points is inherently intertwined with the model that you're using to take in the training data points as input okay as I mentioned before optimization based approaches can very nicely handle varying K and large K the structure also lends well to out of distribution tasks and one of the", "start_timestamp": "01:11:04", "end_timestamp": "01:11:35", "start_second": 4264, "end_second": 4295, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=4264s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "downsides is that it does involve second-order optimization although there are a couple of approaches towards trying to mitigate this nonparametric approaches are quite simple and they're entirely feed-forward and because they're entirely feed-forward they're computationally fast and easy to optimize because you don't need to worry about backpropagating through gradient steps or through recurrent neural networks for example they're harder to generalize to varying K this is more of an empirical observation than", "start_timestamp": "01:11:35", "end_timestamp": "01:11:59", "start_second": 4295, "end_second": 4319, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=4295s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "a theoretical one and of course because they're using nonparametric approaches they're harder to scale to very large K because the computation grows with more and more data points okay and mostly these approaches so far have been limited to classification because you're inherently making decisions based on the data points that you've seen so far okay and lastly it's worth pointing out that well-tuned versions of
each of these approaches tend to perform comparably on existing few-shot", "start_timestamp": "01:11:59", "end_timestamp": "01:12:29", "start_second": 4319, "end_second": 4349, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=4319s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "learning benchmarks so at the performance level they all perform quite well okay so now let's talk about Bayesian meta learning approaches so we had this really nice motivation for Bayesian methods and then we kind of threw that all away in the last thirty minutes or so and particularly what we did is we assumed that this p of Phi given the data set and the meta parameters is going to be a point estimate or a deterministic function of the training data points and the meta parameters you might say well maybe this", "start_timestamp": "01:12:29", "end_timestamp": "01:13:02", "start_second": 4349, "end_second": 4382, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=4349s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "is all fine like we don't really need Bayesian things but there are many situations where it actually is useful to be representing a distribution over the potential functions that we might be encountering and in particular because few-shot learning problems only have a small number of data points they may be ambiguous even when you do have a prior for example if your goal is to classify between all the images on the left and all the images on the right all the folks on the left are smiling", "start_timestamp": "01:13:02", "end_timestamp": "01:13:27", "start_second": 4382, "end_second": 4407, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=4382s", "title": "Learning to learn: An Introduction to Meta
Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "wearing hats and young and all the folks on the right are not and so if you get an image that has someone that is smiling and wearing a hat and not young or not smiling and wearing a hat and young then it's inherently ambiguous what the correct label is for these classes because you don't know if you're supposed to classify on the attribute of smiling on the attribute of wearing a hat or on the attribute of being young and so what you might ask is can we generate hypotheses about the underlying function like the three hypotheses that I", "start_timestamp": "01:13:27", "end_timestamp": "01:13:53", "start_second": 4407, "end_second": 4433, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=4407s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "mentioned before essentially can you sample from this p of Phi and this is gonna be important for ambiguous problems but also for safety-critical few-shot learning such as medical imaging and for learning to actively learn because if you care about reducing your uncertainty about a given function by getting labels for new data points you want to be able to reason about your uncertainty over your function space and there have been approaches for using active learning with meta learning and it's", "start_timestamp": "01:13:53", "end_timestamp": "01:14:21", "start_second": 4433, "end_second": 4461, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=4433s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "also useful for learning to explore and meta reinforcement learning because again we can try to explicitly reduce our uncertainty by
collecting more data okay so how do we go about trying to meta learn in a way that allows us to generate hypotheses and reason about distributions so we can bring up the graphical model that I mentioned before where we're going to have a distribution over our prior parameters theta we will have a distribution over our task-specific parameters Phi i given theta and then we'll also have", "start_timestamp": "01:14:21", "end_timestamp": "01:14:49", "start_second": 4461, "end_second": 4489, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=4461s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "probability distributions corresponding to the probability of our training dataset given our task parameters and our test dataset given our task parameters and then our goal will be can we sample task specific parameters Phi i given our training data points and x test essentially given all of the observed variables and because of conditional independence at test time we don't really need to worry too much about x test we just want to be able to sample Phi i given x train and y train so blackbox adaptation approaches", "start_timestamp": "01:14:49", "end_timestamp": "01:15:18", "start_second": 4489, "end_second": 4518, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=4489s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "are quite easy to extend to this Bayesian setting so what we can do is we can use amortized variational inference we can train a neural network to represent a distribution over sufficient statistics we could also represent a distribution over full parameter vectors Phi i although that might get a bit unwieldy at some point and so we could train a neural network to represent a Gaussian distribution
over h and then feed this h into another neural network and use ideas from amortized variational inference such as the", "start_timestamp": "01:15:18", "end_timestamp": "01:15:48", "start_second": 4518, "end_second": 4548, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=4518s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "reparameterization trick to train this model in order to effectively produce this distribution indeed various approaches have used this type of approach for example Gordon et al use this approach where h corresponds to the weights of the last layer of a neural network okay so now that we've gone over how you can very naturally apply this with blackbox approaches what about optimization based meta learning approaches can we further extend these to the Bayesian setting there's a few different ways that you go", "start_timestamp": "01:15:48", "end_timestamp": "01:16:21", "start_second": 4548, "end_second": 4581, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=4548s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "about doing this this is the figure from the simple black box approach one thing we could do is model the distribution of Phi given theta as Gaussian and use the same sort of variational inference for training as I mentioned before but have the inference network perform an optimization over Phi and this is what was done in Ravi and Beatson in 2019 another thing we could do if you don't want to model p of Phi given theta as Gaussian is use Stein variational gradient descent on the last layer of the", "start_timestamp": "01:16:21", "end_timestamp": "01:16:57", "start_second": 4581, "end_second": 4617, "url":
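The reparameterization trick mentioned here — sampling h from a network-predicted Gaussian as h = mu + sigma * eps so that mu and sigma stay differentiable — can be sketched as follows; the single linear "inference network" is a hypothetical stand-in for the amortized encoder, not the architecture from any of the cited papers:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical amortized inference network: a linear map from (pre-aggregated)
# training-set features to the mean and log-variance of a Gaussian over h.
W_mu = rng.normal(size=(16, 4))
W_logvar = 0.1 * rng.normal(size=(16, 4))

def sample_h(train_features):
    """Reparameterization trick: h = mu + sigma * eps with eps ~ N(0, I),
    so the randomness is isolated in eps while mu and sigma remain
    deterministic, differentiable functions of the inputs."""
    mu = train_features @ W_mu
    logvar = train_features @ W_logvar
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

train_features = rng.normal(size=16)
samples = np.stack([sample_h(train_features) for _ in range(2000)])
print(samples.shape)  # (2000, 4)
```

Each sampled h would then be fed into the downstream network, giving a different predicted function per sample; across many samples the empirical mean of h approaches the predicted mu.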
"https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=4581s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "neural network and do gradient based inference on the last layer and only have a single set of parameters similar to theta G for all the other layers and also we could use something like an ensemble of mammals ensemble of m\u00e9diterran\u00e9e all networks that allows us to represent basically particles of the distribution and both of these were proposed by Kim at all in 2018 know one thing you might ask is well can we can we get some kind of the benefits of both of these can we model both a non Gaussian posterior and do so over all of", "start_timestamp": "01:16:57", "end_timestamp": "01:17:28", "start_second": 4617, "end_second": 4648, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=4617s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "the parameters and there is a way that we can do this in fact and so let's go back to our graphical model say that we want to sample from this this distribution you can write out as the integral where we're integrating out p theta of course this integral is completely intractable if theta are the parameters of a neural network but one thing you might ask is well what if we knew p if i given theta and the training data set if we knew this distribution our graphical model would be transformed in this way switching the arrows from", "start_timestamp": "01:17:28", "end_timestamp": "01:18:00", "start_second": 4648, "end_second": 4680, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=4648s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "the training dataset to 5 and then we 
could simply sample with ancestral sampling we could sample at theta from the prior sample 5 from this distribution and and and get get a get a sampled 5 and so the key idea is what we can do here is we can approximate this distribution Phi given theta and D train as a a point estimate using map of course this is extremely crude approximation similar to the one that we use in mammal but it's also extremely convenient and in particular if we do this approximation we can still get", "start_timestamp": "01:18:00", "end_timestamp": "01:18:34", "start_second": 4680, "end_second": 4714, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=4680s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "samples from PFI given the training data and in fact we can to do this approximation we can use the same gradient based map approximation as shown before but we don't require we don't require doing the intractable integral at the top okay so at test time what this corresponds to is we'll have initial set of parameters theta will add noise to sample will add noise so that that parameter vector to basically sample a new theta and then we will run gradient descent from there so it's essentially a way to make mammal", "start_timestamp": "01:18:34", "end_timestamp": "01:19:10", "start_second": 4714, "end_second": 4750, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=4714s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "they give you estimates and sample from PFI given your training data but I'm doing so in a way that actually will lead to the correct posterior distribution I didn't go through actually how we train this that's a bit harder and for details you can look at the probabilistic mammal paper on the bottom we refer to this algorithm as 
PLATIPUS, or a probabilistic latent model for incorporating priors and uncertainty in few-shot learning, which is a bit of a tortured acronym, but we can then get another variant of MAML, and what this gives", "start_timestamp": "01:19:10", "end_timestamp": "01:19:42", "start_second": 4750, "end_second": 4782, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=4750s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "us is the ability to sample different functions with only a few data points, such as linear functions or sinusoid functions that are inherently ambiguous because there's some noise in the data, as well as to represent different classification problems with different classifiers with different decision boundaries, as again shown in the colored dashed lines. And perhaps most importantly, it's better able to model ambiguous few-shot learning problems like I showed on the first slide. Okay, for further reading on", "start_timestamp": "01:19:42", "end_timestamp": "01:20:10", "start_second": 4782, "end_second": 4810, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=4782s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "Bayesian meta-learning approaches you can look at the papers shown here, and next we can go to a couple of questions before moving on to applications. So one question is: if we want to do a Bayesian meta-learning approach, can we just use dropout at test time? So if you just do dropout at test time you will certainly get a distribution, but there's no guarantee that distribution actually represents the posterior; if you don't train for that, then you won't actually get the correct posterior over functions. For probabilistic MAML,", "start_timestamp": "01:20:10", "end_timestamp": "01:20:48", "start_second": 4810,
"end_second": 4848, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=4810s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "do you assume that there's no relationship between the training X and the training Y? The graphical model assumes that the labels Y train are dependent on the inputs X train and the parameters phi. Okay, so let's go through a couple of applications; this will be quite quick. Meta-learning has been used in a variety of computer vision applications, such as image recognition, modeling the motion and the pose of humans, and it is used for domain adaptation, for example when", "start_timestamp": "01:20:48", "end_timestamp": "01:21:30", "start_second": 4848, "end_second": 4890, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=4848s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "you want to adapt to new domains and you have a variety of domains to train on, as well as for few-shot segmentation problems, where you want to segment images given only a few labeled pixels. Beyond those kinds of typical supervised computer vision problems, people have also looked at generative modeling with meta-learning methods. This includes few-shot image generation, and few-shot image-to-image translation, where you want to translate between different types of images, for example where you want to translate things to a new type of", "start_timestamp": "01:21:30", "end_timestamp": "01:21:58", "start_second": 4890, "end_second": 4918, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=4890s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "animal that
you haven't seen before, also generation of novel viewpoints given a single viewpoint, as well as generating videos of people from just a single image of that person. Okay, and then another application that people have looked at is imitation learning, where the goal is: given one demonstration of a task, can you learn a policy for that task? And this is a fairly simple extension of the supervised meta-learning approaches that we showed before, because you can treat imitation learning as a supervised", "start_timestamp": "01:21:58", "end_timestamp": "01:22:28", "start_second": 4918, "end_second": 4948, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=4918s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "learning problem. This has been looked at for stacking blocks on top of each other, for temporally extended tasks, also for other kinds of robotic control tasks in the real world, as well as for high-fidelity imitation where you want to very closely match the training demonstration. It has also been used with optimization-based techniques, such as shown here. So here the goal is, after seeing a single demonstration of placing an apple into a bowl through teleoperation, the robot is able to figure out", "start_timestamp": "01:22:28", "end_timestamp": "01:23:02", "start_second": 4948, "end_second": 4982, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=4948s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "how to place the apple into the bowl in new situations. In this case the robot had never seen any of these objects before, and so it's adapting to being able to manipulate new objects in this setting. Okay, and also for more advanced topics you can look at approaches for
one-shot inverse reinforcement learning and one-shot hierarchical imitation learning. Now one thing you might say is: can we take this one step further? Can we do imitation from a video of a human, where our goal is, given a video of a human", "start_timestamp": "01:23:02", "end_timestamp": "01:23:31", "start_second": 4982, "end_second": 5011, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=4982s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "performing a task, such as placing something into the red bowl, can the robot figure out a policy for performing the task shown in the video, such as placing the peach into the red bowl? In this case you need to learn how to learn from weak supervision, and what I mean by weak supervision is that the video of the human has all the information about the task, but it's not accessible in the way of a standard machine learning data set, and so we need to do something a bit different here. In particular, what", "start_timestamp": "01:23:31", "end_timestamp": "01:23:59", "start_second": 5011, "end_second": 5039, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=5011s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "we're going to do is take the MAML objective, or really any meta-learning objective, where the meta-objective, the outer objective, is going to be fully supervised and the inner objective is going to be weakly supervised. So we'll be learning how to learn from weakly supervised data using fully supervised data, and then at test time you'll run gradient descent using the weakly supervised data. Now one question you might ask is: well, what if the weakly supervised loss function isn't available, like from a video of a", "start_timestamp": "01:23:59",
"end_timestamp": "01:24:27", "start_second": 5039, "end_second": 5067, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=5039s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "human? In that case you could actually learn a loss function for performing adaptation, such as a neural network loss function that outputs a scalar value, and you train this loss function such that the gradients it provides are effective for adaptation. Okay, and then lastly, it's worth mentioning that meta-learning for language has been explored in a variety of contexts. This includes adapting models to modeling new programs, such as program induction and program synthesis. This also includes adapting to new languages,", "start_timestamp": "01:24:27", "end_timestamp": "01:24:57", "start_second": 5067, "end_second": 5097, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=5067s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id": "ByeRnmHJ-uk", "text": "such as translating between language pairs that you only have a small amount of data for by meta-training on language pairs that you have a lot of data for. This also includes learning new words; this is actually done in the original matching networks paper, which learns how to use a new word in a new context from a single example usage of that word. And lastly, adapting to new personas, so training dialogue agents to be able to generate dialogue for a particular persona from only a few examples of that persona. Okay,", "start_timestamp": "01:24:57", "end_timestamp": "01:25:26", "start_second": 5097, "end_second": 5126, "url": "https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=5097s", "title": "Learning to learn: An Introduction to Meta Learning", "thumbnail": "https://i.ytimg.com/vi/ByeRnmHJ-uk/hqdefault.jpg"} {"video_id":
"F0CBkT0UVWE", "text": "[Music] So hi everyone, I am glad to have a chance to give this talk to you today. As Dragan already mentioned, the title is API design: tips and an introduction to building APIs. Thinking about this topic, this talk will not be a matter of talking about one correct approach, as there are many good ones; the goal is to talk about the ideas and the thought process, and how you can use those when designing and thinking about APIs. The talk will not cover", "start_timestamp": "00:00:00", "end_timestamp": "00:01:00", "start_second": 0, "end_second": 60, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=0s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "the whole topic, because this is a very big area to talk about, so I will focus on some of the selected areas that I have found in my experience to be the most valuable when thinking about API design. Examples shown in this talk are inspired by real life, some of which are simplified or adjusted to fit the format of this talk. First, let me introduce myself. As Dragan already mentioned, my name is Alen, and I'm a deputy CTO and team leader of the architecture team at Trikoder. I've been building web apps and APIs for", "start_timestamp": "00:01:00", "end_timestamp": "00:01:39", "start_second": 60, "end_second": 99, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=60s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "a very long time now, and I've been building JSON:API-powered projects for the past five years. I do talks, trainings, workshops and education as part of my daily work. Let me talk a little bit about our company. Trikoder is a company based in Zagreb, Croatia,
and we employ developers, on premise and remote, also from some neighboring countries. We are a medium-sized company with 65 employees, out of which 50 are developers and engineers, so we cover mobile, front end, back end, data science, UX and UI,", "start_timestamp": "00:01:39", "end_timestamp": "00:02:16", "start_second": 99, "end_second": 136, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=99s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "Our main project is Nju\u0161kalo, a classifieds platform which is number one in... oh, sorry, do you see it now? Ah, okay, sorry, there was a glitch in the screen sharing. Okay, so our main project is Nju\u0161kalo; it's the number one platform in Croatia and also in Slovenia, and we are always open to hiring new colleagues, so feel free to check out our website for more information. On to the topic. When we talk about APIs, depending on your technical background, an API can mean several different", "start_timestamp": "00:02:16", "end_timestamp": "00:03:03", "start_second": 136, "end_second": 183, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=136s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "things, so let's try to clear up this definition first. In our definition, an API is something that enables applications to communicate with one another. This can be either server to server, a client application, or some other service that you are building. A well-designed API should reflect the goals of the business and is designed to serve that purpose. What we will talk about here is web APIs, but the ideas and thoughts are also applicable to other areas as well. So when talking about", "start_timestamp": "00:03:03",
"end_timestamp": "00:03:42", "start_second": 183, "end_second": 222, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=183s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "web APIs, we need to mention RESTful APIs. REST is an architectural style for building distributed systems based on hypermedia, and it doesn't necessarily imply just HTTP. Web APIs are what we will talk about today. A couple of things to mention here: the web is usually a stateless request model using HTTP, and the HTTP requests should be independent of each other and may occur in any order. This is something to keep in mind when thinking about how we will model our endpoints. So why do we use APIs? APIs are", "start_timestamp": "00:03:42", "end_timestamp": "00:04:24", "start_second": 222, "end_second": 264, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=222s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "used to build for other web servers or client consumers such as web browsers, mobile applications, and outside services. One thing that we can also usually imply is that communication is done using the HTTP protocol in some text format, most often JSON or XML. Before we dive into APIs, we can also talk about design. The word design itself can have several meanings, and one definition that I found very useful is: design is a set of decisions with the goal of defining the look and functionality of", "start_timestamp": "00:04:24", "end_timestamp": "00:05:07", "start_second": 264, "end_second": 307, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=264s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "the
topic that you are designing. It's a simple but valuable definition that can help us when we talk about design. So what does this mean in the sense of API design? API design is the process of developing interfaces for exposing our functionality or data to other applications. As such, a good API design takes into consideration organization-specific strengths and limitations in terms of budgets, personnel skills and technical infrastructure. So as I said here, we need to think about", "start_timestamp": "00:05:07", "end_timestamp": "00:05:48", "start_second": 307, "end_second": 348, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=307s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "the technical approach, but also take into account the consumers and the business that we are trying to support. One thing that we often come across in our experience when talking about API design is: when people disagree on design, it's often because they don't agree on the goals, and in my experience this is one of the very important notes to keep in your head, because a lot of the time the discussion is about the design, but we need to consider what our goal is. So for example, if there is a", "start_timestamp": "00:05:48", "end_timestamp": "00:06:23", "start_second": 348, "end_second": 383, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=348s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "disagreement that the current design is not future-proof for the next five to ten years, but our goal is to deliver some short-lived feature, then this discussion is not needed, because that's not our goal; we are delivering something that has a shelf life, and very often in our area of work our
solutions have a shelf life, because we are always thinking about how to upgrade them. So when talking about design, a good design can be broken down into three main aspects: purpose, usability and constraints,", "start_timestamp": "00:06:23", "end_timestamp": "00:06:56", "start_second": 383, "end_second": 416, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=383s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "The purpose part, as we mentioned in the previous slides, is about fulfilling the business goals, about getting the right information to the right side. Usability and constraints, depending on your technology stack and standards, might already be covered by some standard that you use. For example, at Trikoder we use the JSON:API standard, which I'll mention later in the talk and give some examples of, and this helped us a lot to really think about the important stuff", "start_timestamp": "00:06:56", "end_timestamp": "00:07:32", "start_second": 416, "end_second": 452, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=416s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "and not to worry about conventions and similar. To summarize, the API's responsibility is to provide the data and functionality to successfully complete a business goal. That business goal can be a sale, a conversion, a technology-related data exchange, anything that you are trying to cover. Another interesting thought, and I think everyone here has heard some variation of this, is: good solutions have strong foundations. So when thinking about API design, there is a way that I can actually", "start_timestamp": "00:07:32", "end_timestamp": "00:08:09", "start_second": 452,
"end_second": 489, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=452s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "try to visualize this. Looking at our solution: we build a solution to fulfill some requirements, and to meet those requirements there are usually three main pillars for building the solution. One is the client pillar, one is the API pillar and one is the backend pillar; again, we are talking about the web context here. When each of these pillars is strong, we are happy and prosperous, and success is when all three pillars are working towards a common goal instead of working towards the goal of", "start_timestamp": "00:08:09", "end_timestamp": "00:08:46", "start_second": 489, "end_second": 526, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=489s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "one pillar. Looking at an example of the relationship between the client and the API: there might be a case where our client needs the whole home page for its app in one request, and this is something that on the API side we don't usually think about like this. Or, for example, we have an article and the article has an image; depending on the client, we can do it by either sending a URL or having some complex resource with additional logic behind it. Looking at the relationship between the API and the back end, this is", "start_timestamp": "00:08:46", "end_timestamp": "00:09:26", "start_second": 526, "end_second": 566, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=526s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "where usually things get a bit more complex and it involves a
lot more work. Sometimes an API can copy the backend implementation one to one, so we have the same data models and the same attributes, and this can be a good start. But often, if we already know the backend implementation, we can be biased towards it in the design of our API, and this can sometimes limit our solution space. If we approach the API design isolated from the back end, so not thinking about how the backend is", "start_timestamp": "00:09:26", "end_timestamp": "00:10:01", "start_second": 566, "end_second": 601, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=566s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "implemented, we can sometimes come up with really great designs that work well. For example, if in your backend system you have a document revision system, but on the API you only need to serve the latest version of the document, there might actually be no need to expose this revision system in your APIs. So when looking at these three pillars: if the pillars are too affected by each other, our construction can become fragile and unbalanced; we become coupled to the dynamics of those solutions. For example, if our client is doing", "start_timestamp": "00:10:01", "end_timestamp": "00:10:40", "start_second": 601, "end_second": 640, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=601s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "a UI redesign and we need to modify the APIs because of it, despite the data set being the same, this can be an indicator that we are too coupled to the client. Or if the back end is doing some refactoring or optimizations and it requires us to change our API, again we are too coupled with the back end. Again, as this is
a talk about APIs, I will now focus more on the API itself. When thinking about APIs, one thing that we can also try to test is: okay, how will it serve multiple clients? Does it need to", "start_timestamp": "00:10:40", "end_timestamp": "00:11:20", "start_second": 640, "end_second": 680, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=640s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "serve multiple clients? Will it be served by multiple back ends? How will the filtering be done? How will the formatting be done? There can be a lot of questions raised. What we try to do is simplify and focus on the content instead of the form, so we should focus on what kind of data the API provides and not on how. When we talk about the how, we usually think about the formats, the structures and the communication, so this should not be our primary focus. Imagine a world where everyone knows how your APIs", "start_timestamp": "00:11:20", "end_timestamp": "00:12:01", "start_second": 680, "end_second": 721, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=680s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "work and you just need to tell them what kind of data you are serving from them. As I mentioned before, at Trikoder we use JSON:API to help us with this. What JSON:API provides is that it enables us to stop worrying about the structure, and it covers a lot of frequent questions that we might find; JSON:API removes for us the need to invent things that we don't need to reinvent. JSON:API itself is a resource-oriented API, and they have a really well-written specification, so I would suggest, if you're interested,", "start_timestamp": "00:12:01", "end_timestamp":
"00:12:44", "start_second": 721, "end_second": 764, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=721s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "to take a look at it. I'll go over a quick overview of the basic features of JSON:API. Looking at it, this is the basic structure of a resource: it is comprised of the type, usually some kind of type describing the resource, the id of the resource, and some attributes. As a real-world example, let's see an article: we would have an article with an id of 1, a title and the body of the article. So when we think about it: okay, I have an article, I might need to see the author of the", "start_timestamp": "00:12:44", "end_timestamp": "00:13:21", "start_second": 764, "end_second": 801, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=764s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "article. This is done using relationships, so we can define a relationship and say: okay, a relationship of this article is an author with a type of people. It gets pretty easy. Again, thinking about it: okay, I know the id of the user; it would be great if I could have this user included in my resource. So again, I can simply have it included in the main response; this is still the response from the main endpoint. If we need a relationship of a relationship: yes, the related resources are also resources, so we can", "start_timestamp": "00:13:21", "end_timestamp": "00:13:56", "start_second": 801, "end_second": 836, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=801s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "have a
relationship, and again using included we can include the resource of that relationship as well, so we can get the article, the author and the thumbnail used to show this. It looks beautiful. So to summarize what JSON:API provides us that we don't need to think about: it provides us with client responsibilities, it provides us with a definition of server responsibilities, it defines a document structure, and it provides fetching of resources and", "start_timestamp": "00:13:56", "end_timestamp": "00:14:33", "start_second": 836, "end_second": 873, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=836s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "relationships, and it provides sparse fieldsets, so we can get just a selected set of fields instead of the complete resources. In addition, it also provides a definition of how sorting can be done, pagination, and filtering; it provides full CRUD operations, so it describes how we can create resources, update resources, update relationships and delete resources; and it also provides standardized errors. So again, it's a nicely written specification; if you're not familiar with it, feel free to check it out", "start_timestamp": "00:14:33", "end_timestamp": "00:15:14", "start_second": 873, "end_second": 914, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=873s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "So now that we've covered some of the definitions and some of the thought process, we can dive into the examples, because there are a lot of good things to go over. The first example, the first use case we have here, is user registration. I think that everyone has, once in their life of development, met with some
kind of user registration flow. So let's look at our requirements: we have mobile and web apps, so we have two clients, that have a two-step registration. In step", "start_timestamp": "00:15:14", "end_timestamp": "00:15:48", "start_second": 914, "end_second": 948, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=914s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "one, the user enters his username, email and password, and in step two he enters additional personal information like name, surname or a phone number. So when we think about the API design, how would we solve this? Well, there's actually a trick about it: we don't. We tell the clients: hey, this is not our problem. From the API design perspective, we enable it. What this means is that we separate the work: the client is responsible for covering the steps, and the API needs to provide a way to validate", "start_timestamp": "00:15:48", "end_timestamp": "00:16:26", "start_second": 948, "end_second": 986, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=948s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "and complete the registration process. So we would serve an endpoint to validate or verify the user-entered data, and we would serve an endpoint for the registration process itself. If we look at the resource, how would our resource look? We would have a resource of the type user; again, this is just a name, we can call it user, a guest, a person, a buyer, depending on your business language. And we would have some attributes, for example username, password, first name and last name", "start_timestamp": "00:16:26", "end_timestamp": "00:17:06", "start_second": 986, "end_second":
1026, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=986s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "If you've ever done a registration, there is another example we can mention here: usually when you ask for the password, it's good practice to also have a repeated password, just in case the user mistypes it. Again, this is something that we shouldn't do on the API level; it should be done by the client itself, so the client can verify that the two passwords are the same and just send us the password. If we want to think about it differently, we can define the registration", "start_timestamp": "00:17:06", "end_timestamp": "00:17:40", "start_second": 1026, "end_second": 1060, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=1026s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "as a process of sending a registration request to create the user, and then we can model our resource as such: we could have a resource of type registration request with the same attributes, so we would have username, password, first name and last name. And then, when we think about it: okay, I send a request for registration, and when I get a success response, we can have the created user as a relationship, so we can have an additional relationship which is a user of type user. For any fans of RPCs, this is one", "start_timestamp": "00:17:40", "end_timestamp": "00:18:21", "start_second": 1060, "end_second": 1101, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=1060s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "option for how you can do it when you use JSON:API. What endpoints
should we provide? So we need an endpoint for creating the user, so for a resource, and we provide an additional endpoint for validation, which can respond with OK if the validation passes, or with a conflict or some kind of custom error. And if we look at the other example for registration, it could be similar: we would have a registration endpoint and a registration-validate endpoint. Okay, the next use case is classifieds. To help anyone who is not", "start_timestamp": "00:18:21", "end_timestamp": "00:19:06", "start_second": 1101, "end_second": 1146, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=1101s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "familiar with the term: classifieds is the business of selling online, like for example eBay. In our case we work on a Croatian platform called new scholar. The items that you're selling are usually called a posting or a listing, sometimes an ad. This use case will be an example of the evolution of a design as we get more knowledge and additional requirements, and as we get more familiar with the topic we are covering. So the first requirement is: we need to serve the list of posted listings. Sounds simple enough, so we would", "start_timestamp": "00:19:06", "end_timestamp": "00:19:46", "start_second": 1146, "end_second": 1186, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=1146s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "create a resource called listing, and our listing would have basic attributes of title, description, price, and when it expires. It's a simple resource: we can have a list of them and just serve all the listings in the database. But then we get an additional requirement that says hey, can you please include the seller
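The split of responsibilities just described (the client drives the two steps, the API only exposes a validate endpoint and a registration endpoint) can be sketched as below. The paths, field names, and validation rules are illustrative assumptions, not taken from the talk; only the division of work is.

```python
# In-memory sketch of the two registration endpoints; paths, field names,
# and validation rules are hypothetical. Only the split of responsibilities
# (client owns the steps, API validates and creates) comes from the talk.
users = {}  # registered users keyed by username

def validate_registration(payload):
    """POST /registrations/validate - checks step-one data, creates nothing."""
    errors = {}
    if not payload.get("username"):
        errors["username"] = "required"
    elif payload["username"] in users:
        errors["username"] = "conflict"  # would surface as HTTP 409
    if len(payload.get("password", "")) < 8:
        errors["password"] = "too short"
    return {"ok": not errors, "errors": errors}

def register(payload):
    """POST /registrations - final submit with data from both steps."""
    check = validate_registration(payload)
    if not check["ok"]:
        return {"status": 422, "errors": check["errors"]}
    users[payload["username"]] = payload
    # Model the created user as a resource related to the registration request.
    return {"status": 201,
            "data": {"type": "registration-request",
                     "relationships": {"user": {"type": "user",
                                                "id": payload["username"]}}}}

step_one = {"username": "ana", "password": "s3cretpass"}
assert validate_registration(step_one)["ok"]  # step 1 passes; nothing stored yet
created = register(dict(step_one, first_name="Ana", phone="000"))
```

Note that the repeated-password check from the talk has no place here at all: it is purely client-side, so the API never sees the second password field.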
information like his name and his contact information so uh again a simple so simple approach would be we just modified the existing listing uh with the title description pricing expires on and we add a seller name", "start_timestamp": "00:19:46", "end_timestamp": "00:20:25", "start_second": 1186, "end_second": 1225, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=1186s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "and a seller contact phone it's a simple addition uh but this is where the design thought process starts to think so uh the seller data can also be separated into additional uh additional resources and it can be reused on other endpoints so we could do it as additional resource so we would have again our listing with existing attributes and we would have a relationship to a call seller to a special type of resource and let's look at some examples so first example would be we can have a research type seller and then we can", "start_timestamp": "00:20:25", "end_timestamp": "00:21:06", "start_second": 1225, "end_second": 1266, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=1225s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "have an attribute called display name and content form it's it's a pretty simple one uh we can be a bit more uh descriptive so again we could have a seller but we could also signify that the seller is actually a user so we would have a relationship to some existing user and in this case our consumers can get even more data about it but looking at this example we can also uh think about the third example is that relation to the seller is a user so we can just have a resource user and we can have his username and phone uh directly attached", "start_timestamp": "00:21:06", "end_timestamp": 
"00:21:44", "start_second": 1266, "end_second": 1304, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=1266s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "uh with this example then we get a new uh requirement which says a mobile application needs to mark ads that are owned by currently logged in user uh and expand an additional data so thinking about this request how how can we uh solve it so if you remember the previous slide it sounds like a ui thing right well correct we could enable this so uh we provide the mobile application with the example to get the current logged in user and then the mobile app can have a simple logic as hey if the classified seller id is the", "start_timestamp": "00:21:44", "end_timestamp": "00:22:23", "start_second": 1304, "end_second": 1343, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=1304s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "same as the login user this is the logged in user so no additional data is needed for uh for the api and this is one example where we could become coupled to the ui but again if we think about design if we think about the requirements define them correctly define the goal we can already see that this solution exists in our system let's look at another use case so uh there is a requirement that says okay i as a user need to see my items so we need to enable a seller to see all of the listings that uh he has he's the author of uh so there are a", "start_timestamp": "00:22:23", "end_timestamp": "00:23:03", "start_second": 1343, "end_second": 1383, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=1343s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", 
"text": "couple of ways we can do this so one way uh look in the json api uh perspective we could filter the classifieds list and say okay filter classified list where seller is x and the x can be an id of the logged in user again this is all up to the clients to figure out the logged in user another example can be that we can add a special identifier called me and in that case depending on the context of the request uh we can see okay who is the user authenticated with this request oh we can then filter it uh for the client uh this is a bit", "start_timestamp": "00:23:03", "end_timestamp": "00:23:43", "start_second": 1383, "end_second": 1423, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=1383s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "more friendly to the clients but again it might be a bit more coupled to the to the client and to the back end uh pillars another way we can also create a special uh endpoint like classified me or uh we can also uh create uh in the users on the user relation to the classified so that the user can filter all of the classifieds that are related to his user uh so again to notify here that uh me keyword is a special in this case and this is again very very specific to the implementation and a bit more coupled to the to the back", "start_timestamp": "00:23:43", "end_timestamp": "00:24:27", "start_second": 1423, "end_second": 1467, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=1423s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "end so this is something to take into with caution uh another use case let's look at the messaging so we have a simple uh functionality where a couple of uh users for this example a seller and buyer can have a conversation on the item or listing so looking at the 
resource we could have a resource type conversation uh and we have participants so we can have the attributes of when the uh conversation is created and then we can have a relationship to a uh classified so this is the item that we are discussing about uh the buyer", "start_timestamp": "00:24:27", "end_timestamp": "00:25:07", "start_second": 1467, "end_second": 1507, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=1467s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "and the seller so the the two users that are uh currently participants of this conversation so looking at this conversation can have also additional attributes but we could already maybe uh simplify this so we can have a conversation that has a seller and a buyer and both of them are specified the specific research types so we can sell it with the type seller buyer with the type buyer but also we can reuse them because looking at both of these user types they can be type of user so we reduce the number of special", "start_timestamp": "00:25:07", "end_timestamp": "00:25:48", "start_second": 1507, "end_second": 1548, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=1507s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "resources that we have another example another design choice that we can do here looking at the conversation example is that we can have a classified so we have a item that they're discussing and we also have a seller but we already know the seller from the classified so we can uh say that uh seller is uh redundant because we can just say hey give me the seller of the class right so again design choice that we can simplify the endpoint by complicating the job for the client it's a design choice in this case it's", "start_timestamp": "00:25:48", 
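The conversation design just discussed, including the option of dropping the redundant seller, can be sketched like this. The shapes and ids are hypothetical; the point is that the seller is derivable through the classified.

```python
# Hypothetical conversation resource: both parties are plain `user`
# resources, and the seller can be derived through the classified.
conversation = {
    "type": "conversation", "id": "c1",
    "attributes": {"created_on": "2021-01-10"},
    "relationships": {
        "classified": {"type": "classified", "id": "42"},
        "buyer": {"type": "user", "id": "9"},
        "seller": {"type": "user", "id": "7"},  # candidate for removal
    },
}

# Minimal stand-in for the classifieds the API already serves.
classifieds = {
    "42": {"type": "classified", "id": "42",
           "relationships": {"seller": {"type": "user", "id": "7"}}},
}

def seller_of(conv):
    """The alternative design: drop `seller` and derive it via the classified."""
    listing = classifieds[conv["relationships"]["classified"]["id"]]
    return listing["relationships"]["seller"]

derived_seller = seller_of(conversation)
```

Dropping the `seller` relationship simplifies the resource but shifts one extra lookup onto the client, which is precisely the trade-off the talk calls a design choice with no clear winner.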
"end_timestamp": "00:26:24", "start_second": 1548, "end_second": 1584, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=1548s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "really hard to say which one of the two is better now that we have a conversation we also need to have some conversation messages uh again here is some we can have a resource type message and then we can have a design choice do we group the messages into conversations or we connect them directly to the participants so for example we would have a message that has a content it has a sent on and it has uh is read it's always read by the recipient uh and then we can have a relationship that says okay it's part of the conversation x it", "start_timestamp": "00:26:24", "end_timestamp": "00:27:03", "start_second": 1584, "end_second": 1623, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=1584s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "has a sender and it has a recipient looking at this maybe we can find uh some redundancies because in the conversation we only need the sender and the recipient is the other party in the conversation so we can probably uh move this and also when we think about sender and the recipient types we can look at some options so we have a sender who can be a resource type sender and we can have a recipient who is a resource type recipient thinking about the previous example they are both message participants so maybe we can have", "start_timestamp": "00:27:03", "end_timestamp": "00:27:42", "start_second": 1623, "end_second": 1662, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=1623s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": 
"F0CBkT0UVWE", "text": "a reused resource type that has okay this is a conversation participants and we can reuse them another thing so we can also remove the recipient because we already know that recipients are all the participants in the conversation so again we can simplify our message resource and look at the conversation resource for uh more information uh talking about the resource types so too many resource types can become a problem uh it can be a maintenance problem it can be a design problem because uh you start to maybe get the track", "start_timestamp": "00:27:42", "end_timestamp": "00:28:16", "start_second": 1662, "end_second": 1696, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=1662s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "lose the track of which resources uh are uh intended for what so for example we can have a user we can have a buyer we can have a seller then we can send a recipient uh customer uh etc so if you look at all of these resources they can all uh be basically come down to a simple user or a customer or some other solution that works for you so again thank you to account it's it's nice to have a really specific uh resources but when you build your apis it can become overwhelming and it can become a maintenance nightmare", "start_timestamp": "00:28:16", "end_timestamp": "00:28:53", "start_second": 1696, "end_second": 1733, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=1696s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "uh another use case that we can look about is a search so we have a simple request we have one search endpoint for several resource types so we can uh so let's look at the one solution how we can build our search uh functionality so we have a resource type that search 
result and then it can have an attribute of title and some excerpt so we can show the matching keywords and relationships is where it gets interesting so let's say we have three main resource types that can be in our search results so we can have a", "start_timestamp": "00:28:53", "end_timestamp": "00:29:33", "start_second": 1733, "end_second": 1773, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=1733s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "classified you can have article and we have category uh as uh the definition of json api says that you should only serve one resource type with endpoint so we need to have a search result and then using a relationship we can say okay this is a classified this is an article and this is a category and then within the included we can have the complete document uh another alternative for this uh can also be we can reuse a relationship so we can have an attribute with uh title and excerpt and then we can have just one relationship", "start_timestamp": "00:29:33", "end_timestamp": "00:30:06", "start_second": 1773, "end_second": 1806, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=1773s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "which is called the result and depending on the type of the related uh resource we could know that it's a classified it's an article uh or it's a category so this is another example of how we can do this there are many more examples that i could be showing but given that we cannot be here all day so yeah i've selected a few of more interesting ones and i hope you had a great time some takeaways that i would maybe hope to give you today so content and data we are building apis to provide some data some content", "start_timestamp": "00:30:06", 
"end_timestamp": "00:30:52", "start_second": 1806, "end_second": 1852, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=1806s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "naming is a big part of our design as well as thinking about how we will name things and design choice design decisions are something that we do every day so it's not a matter of uh the best design choice it's a matter of is this design choice good for us sometimes there are several design choices that can be made that have a similar list of pros and cons it can be difficult but then it's just a matter of okay pick one and uh continue on uh when i talk about the api design and apis in general so this is this is not the all that it's covering", "start_timestamp": "00:30:52", "end_timestamp": "00:31:31", "start_second": 1852, "end_second": 1891, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=1852s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "this uh this area there are some additional topics uh that could be covered like how can we do documentation on the api design uh how can we do a testing and validation of our apis versioning security so what kind of security are there not just authorization and authentication but also rate limits uh both protection topics like that there is also caching and backend speed optimizations there is some recipes that we can use to improve our apis like that and also a lot of implementation details on how can we implement our apis", "start_timestamp": "00:31:31", "end_timestamp": "00:32:12", "start_second": 1891, "end_second": 1932, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=1891s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} 
{"video_id": "F0CBkT0UVWE", "text": "so that is it for me uh and we can go over some of the questions uh and the q a uh you're muted dragon hi so let's see if uh we have any questions so far yeah yeah uh between okay okay so this is a question where do you draw the line between okay uh requirement engineering and api is the api design so the api design is uh i would say a result of the requirements uh so uh just like you do uh ux design sometimes the wireframes are part of the requirements uh but it's usually the requirements define what we need to achieve and the api", "start_timestamp": "00:32:12", "end_timestamp": "00:33:10", "start_second": 1932, "end_second": 1990, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=1932s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "design is part of the technical solution how will we achieve our goal so uh if there is some uh let's put like this one thing that can affect the requirements if there is some design choice that can help us save money time and uh bring better results then we can go back to the requirements and ask for change but usually we have requirements and then we get then we go to the design stage uh does this answer your question great okay another question are we using swagger to build our api docs uh no right now we are using a postman uh", "start_timestamp": "00:33:10", "end_timestamp": "00:33:47", "start_second": 1990, "end_second": 2027, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=1990s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "to document uh our apis and we also use it to test so right now this is uh the tool we are using uh we have used swagger to build some apis more specifically the open api specification okay another one is there anything that can 
help better visualizing the api design yes uh so um i'm thinking about how to answer because there are a couple of ways i can answer this uh so i don't know if there is any uh done tool like out of the box tool but one thing that we have been doing is building a map and this is one thing", "start_timestamp": "00:33:47", "end_timestamp": "00:34:30", "start_second": 2027, "end_second": 2070, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=2027s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "that json api is really good because uh all the resources can be so basically adjacent api enables us to build a relational relationship uh of a relationship diagram of the whole uh api so yeah we have some tools that we developed to help us like build a uml diagram of all of our resources relationships attributes etc yeah okay a lot of questions uh this is a little bit of topic but uh yeah i know from ansible to uh thinking about it right now this uh out of the box yeah we've used ansible for some stuff we used custom solution for some type we", "start_timestamp": "00:34:30", "end_timestamp": "00:35:19", "start_second": 2070, "end_second": 2119, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=2070s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "used uh symphony provided tools so uh yeah anything anything that works for you this is this is uh really specific to the technology stack to the solution to the infrastructure so there's a lot of factors i would say to answer this question a bit more fairly yeah uh recommendations on how to return errors to the client and what's the importance of using http state to status codes in the in this regard uh uh short answer json api uh so uh this is again a great thing about the standards that we use 
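The short answer given here (return errors the JSON:API way, with status codes used semantically) might look like the sketch below. The member names follow the JSON:API spec's error-object shape; the concrete code, detail text, and pointer are illustrative.

```python
# Sketch of a JSON:API error document for a failed registration check.
# The member names (status, title, detail, source.pointer) follow the
# JSON:API spec; the concrete values here are illustrative.
error_response = {
    "errors": [
        {"status": "409",
         "title": "Conflict",
         "detail": "username is already taken",
         "source": {"pointer": "/data/attributes/username"}},
    ]
}

# "Use status codes semantically": the HTTP status on the wire should agree
# with the error objects (409 for conflicts, 422 for validation, and so on).
http_status = int(error_response["errors"][0]["status"])
```

The `source.pointer` member is what lets a client attach each error to the exact field the user typed, rather than showing one generic failure message.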
uh json api standard which already helps us", "start_timestamp": "00:35:19", "end_timestamp": "00:36:01", "start_second": 2119, "end_second": 2161, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=2119s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "to describe how we should return errors what are the some of the status codes that are already part of the standards but there are some that we use customly so yeah i would say use it semantically if nobody hates me for saying it like that uh yeah so json api is really good for that uh it helps us a lot with this kind of questions yeah uh include testing in the process um i would say testing is part of the development process uh but again depending on which tools you use uh for the design so for example we use uh", "start_timestamp": "00:36:01", "end_timestamp": "00:36:37", "start_second": 2161, "end_second": 2197, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=2161s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "postman for the api design itself so when we are done with the design we already have a complete postman collections and we can use it for testing we can also automate it so yeah so we can use it uh we can use it in that way oh i i love this one yeah uh i actually have on one slide uh prepared for this yeah this is this was uh a common common question um so uh the graphql is one of those technologies that we have used uh and um how to say it uh we've used with it uh it's they're pretty simple so despite they're being described as uh", "start_timestamp": "00:36:37", "end_timestamp": "00:37:19", "start_second": 2197, "end_second": 2239, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=2197s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": 
"https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "data and this is a resource uh the json api for us has a clear conventions which helps us speed up the development process uh it has the right amount of implicit versus explicit so graphql is a great tool if you need a bit more let's say freedom but yeah if you like to design your own stuff but uh if you really want to use some standard json api works just fine and if you compare the two they can provide they provide pretty much the same uh functionality so yeah i i wouldn't say that either of them is uh better it's a personal taste maybe", "start_timestamp": "00:37:19", "end_timestamp": "00:37:55", "start_second": 2239, "end_second": 2275, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=2239s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "okay regarding content of data how would you rate having values in field that reflect earnings instead of easily readable values so a philosopher philosophical answer here would be uh if you're using a strings or integers they're both atoms so we have a key that means something so if you use two or if you use call they are both kind of m's uh but yeah so uh this is this is an example where it's uh really good to separate the two uh pillars so from the back in perspective you're using number constants from the api perspective you want", "start_timestamp": "00:37:55", "end_timestamp": "00:38:33", "start_second": 2275, "end_second": 2313, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=2275s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "something that is easily uh used by the client so i would probably most of the time i would prefer uh string so call something that when you look at the api as a 
per as a human or as a uh some kind of uh program you can you can then translate them into uh into your own values so if you use value two value two is very back in specific usually and if you use a uh call it describes the action so it doesn't matter what constant is behind it so yeah this is this kind of the answer this is a really good example where you", "start_timestamp": "00:38:33", "end_timestamp": "00:39:06", "start_second": 2313, "end_second": 2346, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=2313s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "think about separating the back end from the api api is built for the consumer yeah hello girl uh how much an api script could be complex how do you snap involves in the performance issue um yeah this is a good question uh in short it so the performance of the json api really uh depends on your implementation because you can implement it to be really fast and we have had some good uh cases where our implementation was really fast but if we sometimes we weren't maybe so oriented on performances and yeah you can really quickly", "start_timestamp": "00:39:06", "end_timestamp": "00:39:45", "start_second": 2346, "end_second": 2385, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=2346s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "degrade the performance but it's again it depends on your technological stack you can improve it but from our example uh a json api can be really performant so there isn't a lot of problems with getting it to be performed yeah uh have you heard of netjs framework uh i don't i cannot say i've heard it uh it sounds like it's a js framework so maybe i should ask if somebody from my front and colleagues is here to answer uh php typescript uh i mean in 
the company, our backend solutions are PHP based", "start_timestamp": "00:39:45", "end_timestamp": "00:40:28", "start_second": 2385, "end_second": 2428, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=2385s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "we use Symfony, but we are also building a node.js based backend solution, so we are using Symfony and some node.js solutions in the company. What is the preferred way to implement pagination? The JSON:API way. Again, this is a good question, but we followed the JSON:API specification, which defines two main strategies for how you can paginate: one is offset/limit and the other one is page/size. So JSON:API offers two ways you can paginate, depending on the
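The two pagination families mentioned here can be sketched as plain slicing. JSON:API reserves the `page` query-parameter family but leaves the exact strategy to the server; `page[offset]`/`page[limit]` and `page[number]`/`page[size]` are common choices, not mandated names.

```python
# The two pagination families sketched as plain slicing over one list.
items = list(range(95))  # pretend these are listing ids

def paginate_offset(items, offset, limit):
    """?page[offset]=..&page[limit]=.. style."""
    return items[offset:offset + limit]

def paginate_paged(items, number, size):
    """?page[number]=..&page[size]=.. style, with 1-based page numbers."""
    return items[(number - 1) * size:number * size]

first_slice = paginate_offset(items, 0, 10)
third_page = paginate_paged(items, 3, 10)
```

The last page of a page-based scheme is naturally short (here, page 10 of size 10 holds only five ids), which is why responses usually carry `links` or `meta` so the client knows when to stop.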
Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "so i i would say not in the time that i have right now because it's a it's a really really uh a wide topic and i know there's a lot of options you can go with it okay thank you alan a lot of questions today we come to the end uh yeah just maybe to mention if you have any other questions feel free to contact me contact our company or send me an message on you know and the recording of this talk will be in couple you can come back to via developers website and and find the recording uh back to stephanie if he's still with us", "start_timestamp": "00:41:43", "end_timestamp": "00:42:28", "start_second": 2503, "end_second": 2548, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=2503s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "F0CBkT0UVWE", "text": "if not thanks there okay i'm still alive uh there's another question right on implementing so this is kind of what i mentioned with uh constraints in the in the three uh main principles uh so if you standards you already have some rules in the place another big thing is uh naming so you if you have a ambiguous language some kind of uh dictionary that you agree with your business which is again something that can help uh and yeah that would be probably the two most important uh ones because if you standards if you use", "start_timestamp": "00:42:28", "end_timestamp": "00:43:16", "start_second": 2548, "end_second": 2596, "url": "https://www.youtube.com/watch?v=F0CBkT0UVWE&t=2548s", "title": "API Design - Getting Started\u2014Alen Pokos", "thumbnail": "https://i.ytimg.com/vi/F0CBkT0UVWE/maxresdefault.jpg"} {"video_id": "2Z1B0xESzMw", "text": "hello everyone and welcome back to my channel today we're trying something a bit different which is I'm going to be talking about my research area today which is machine 
learning but I really hope that any of you guys that are coming from a non mathematical or computer background if you are watching along that you can understand this because I tried to make it understandable to a general audience and if you are watching anyways let me know in the comments below if you are able to follow along with this and if it made", "start_timestamp": "00:00:00", "end_timestamp": "00:00:27", "start_second": 0, "end_second": 27, "url": "https://www.youtube.com/watch?v=2Z1B0xESzMw&t=0s", "title": "Supervised vs Unsupervised vs Semi / Self Supervised vs Reinforcement Learning | Machine Learning", "thumbnail": "https://i.ytimg.com/vi/2Z1B0xESzMw/maxresdefault.jpg"} {"video_id": "2Z1B0xESzMw", "text": "sense to you guys because I'm always trying to make sure that research areas can be accessible to everyone even if it's not your research area so I'm doing a PhD in computer science but my background is in maths and statistics and that's how I came across machine learning but some of you guys have been asking for me to do more machine learning on computer science videos and that's not something I really saw about doing on this channel but I did one to have it be more general PhD stuff but to kind of try this out for this month I", "start_timestamp": "00:00:27", "end_timestamp": "00:00:58", "start_second": 27, "end_second": 58, "url": "https://www.youtube.com/watch?v=2Z1B0xESzMw&t=27s", "title": "Supervised vs Unsupervised vs Semi / Self Supervised vs Reinforcement Learning | Machine Learning", "thumbnail": "https://i.ytimg.com/vi/2Z1B0xESzMw/maxresdefault.jpg"} {"video_id": "2Z1B0xESzMw", "text": "want to do one sort of day a week on machine learning or computer science programming that kind of thing one day a week on more general PhD stuff so that might be writing or skill so I've talked about transferable skills before so it could be something like that and then one day week that's more on the personal side of doing a PhD so things like routines 
weekly or daily vlogs which people seem to respond really well to and things like you know financial aspects of being a PhD student, side hustles for students, all of that kind of", "start_timestamp": "00:00:58", "end_timestamp": "00:01:32", "start_second": 58, "end_second": 92, "url": "https://www.youtube.com/watch?v=2Z1B0xESzMw&t=58s", "title": "Supervised vs Unsupervised vs Semi / Self Supervised vs Reinforcement Learning | Machine Learning", "thumbnail": "https://i.ytimg.com/vi/2Z1B0xESzMw/maxresdefault.jpg"} {"video_id": "2Z1B0xESzMw", "text": "thing and then one day a week that's all the planning and productivity stuff that tends to be what most people are interested in so that's kind of the general plan I'm gonna have one of those each day and probably have one day of these study with me kind of videos as well but it is a lot so I'm gonna be trying that out this month seeing if it's something I can keep up with but what will be really important to me is understanding what is working for you guys so if there's a video style you like be sure you are liking be sure you", "start_timestamp": "00:01:32", "end_timestamp": "00:02:03", "start_second": 92, "end_second": 123, "url": "https://www.youtube.com/watch?v=2Z1B0xESzMw&t=92s", "title": "Supervised vs Unsupervised vs Semi / Self Supervised vs Reinforcement Learning | Machine Learning", "thumbnail": "https://i.ytimg.com/vi/2Z1B0xESzMw/maxresdefault.jpg"} {"video_id": "2Z1B0xESzMw", "text": "are commenting below and be sure you are subscribing because that's what helps me understand which videos are working and which ones I should continue doing so if you want me to continue doing one type of video be sure you're engaging with that video type or multiple video types if you love all the videos and if you want to see more videos about machine learning and my kind of research or PhD stuff in general then be sure you do subscribe and that you hit the notification bell so that you know when new videos are out",
"start_timestamp": "00:02:03", "end_timestamp": "00:02:29", "start_second": 123, "end_second": 149, "url": "https://www.youtube.com/watch?v=2Z1B0xESzMw&t=123s", "title": "Supervised vs Unsupervised vs Semi / Self Supervised vs Reinforcement Learning | Machine Learning", "thumbnail": "https://i.ytimg.com/vi/2Z1B0xESzMw/maxresdefault.jpg"} {"video_id": "2Z1B0xESzMw", "text": "so this first video is going to be an introduction to machine learning in general just so that anyone who is learning about machine learning for the first time here will have a general understanding and then in future weeks we can get into some of the general algorithms and then as well I can show you the programming that goes alongside those things but let me know if you'd rather see more general stuff or you'd rather see things that I specifically used in my research so what is machine", "start_timestamp": "00:02:29", "end_timestamp": "00:02:58", "start_second": 149, "end_second": 178, "url": "https://www.youtube.com/watch?v=2Z1B0xESzMw&t=149s", "title": "Supervised vs Unsupervised vs Semi / Self Supervised vs Reinforcement Learning | Machine Learning", "thumbnail": "https://i.ytimg.com/vi/2Z1B0xESzMw/maxresdefault.jpg"} {"video_id": "2Z1B0xESzMw", "text": "learning so machine learning is a branch of artificial intelligence which is a technique that enables machines to act similarly to humans so trying to learn how to do things the same way that people do and then machine learning is a branch of that which uses statistical methods to enable machines to improve with experience so they learn based on their own experience and/or from the experience that's in the data obviously so machine learning is essentially applied statistics so my background is in mathematics and statistics and that's", "start_timestamp": "00:02:58", "end_timestamp": "00:03:34", "start_second": 178, "end_second": 214, "url": "https://www.youtube.com/watch?v=2Z1B0xESzMw&t=178s",
"title": "Supervised vs Unsupervised vs Semi / Self Supervised vs Reinforcement Learning | Machine Learning", "thumbnail": "https://i.ytimg.com/vi/2Z1B0xESzMw/maxresdefault.jpg"} {"video_id": "2Z1B0xESzMw", "text": "how I first learned about machine learning was in my degree in statistics and in my master's in data analytics so it's not 100% computer science it is more statistics you need kind of the mixture of statistics and computer science skills to be able to use machine learning and then deep learning I'm just going to mention briefly as well is another subcategory of machine learning and it basically has the ability to work with a multi-layer network of algorithms essentially different layers of things going on and that's how", "start_timestamp": "00:03:34", "end_timestamp": "00:04:10", "start_second": 214, "end_second": 250, "url": "https://www.youtube.com/watch?v=2Z1B0xESzMw&t=214s", "title": "Supervised vs Unsupervised vs Semi / Self Supervised vs Reinforcement Learning | Machine Learning", "thumbnail": "https://i.ytimg.com/vi/2Z1B0xESzMw/maxresdefault.jpg"} {"video_id": "2Z1B0xESzMw", "text": "it makes decisions but before I get started with talking about data I just want to mention briefly as well that I'm gonna be talking about features to do with data everyone in school probably would have done some form of stats where you have different variables that are the input and then you have one that's the output so the features are the input variables so one example I'm going to talk about later is housing prices so the features might be the number of rooms in a house, the number of bathrooms, the square footage, all of that stuff and", "start_timestamp": "00:04:10", "end_timestamp": "00:04:43", "start_second": 250, "end_second": 283, "url": "https://www.youtube.com/watch?v=2Z1B0xESzMw&t=250s", "title": "Supervised vs Unsupervised vs Semi / Self Supervised vs Reinforcement Learning | Machine Learning", "thumbnail": "https://i.ytimg.com/vi/2Z1B0xESzMw/maxresdefault.jpg"} {"video_id": "2Z1B0xESzMw", "text": "then the output variable will be the housing price that's associated so just to mention that briefly as well so I'm going to do some examples and I thought like how could I make these examples a bit different if anybody is familiar with machine learning so I decided to do things that are all based on YouTube so here we see one of my videos I'm actually wearing the same outfit which is kind of not ideal anyways so one of the things that YouTube employs is speech recognition because you've got these captions that", "start_timestamp": "00:04:43", "end_timestamp": "00:05:12", "start_second": 283, "end_second": 312, "url": "https://www.youtube.com/watch?v=2Z1B0xESzMw&t=283s", "title": "Supervised vs Unsupervised vs Semi / Self Supervised vs Reinforcement Learning | Machine Learning", "thumbnail": "https://i.ytimg.com/vi/2Z1B0xESzMw/maxresdefault.jpg"} {"video_id": "2Z1B0xESzMw", "text": "are auto-generated it says there on the screen that they are auto-generated meaning that I didn't provide the captions for this video instead they use speech recognition software that turns my speech into text and that's automatically done and it's not always a hundred percent accurate obviously especially when you have things like accents like I do I set it so that I recorded in English from Ireland and then they obviously use that so I mean that part already looks like it's correct so that's good but that's one thing that", "start_timestamp": "00:05:12", "end_timestamp": "00:05:46", "start_second": 312, "end_second": 346, "url": "https://www.youtube.com/watch?v=2Z1B0xESzMw&t=312s", "title": "Supervised vs Unsupervised vs Semi / Self Supervised vs Reinforcement Learning | Machine Learning", "thumbnail": "https://i.ytimg.com/vi/2Z1B0xESzMw/maxresdefault.jpg"} {"video_id": "2Z1B0xESzMw", "text": "they do so that's an application of machine learning and sometimes some of
these examples are also going to be using deep learning so it's hard to separate the two because a lot of things that can be done using pure machine learning algorithms can also be done, or are done, with deep learning algorithms so the next thing that they do is spam classification so you see on my comments, this is in YouTube Studio, that I have likely spam so they know when a couple of comments are going to be spammy so these", "start_timestamp": "00:05:46", "end_timestamp": "00:06:18", "start_second": 346, "end_second": 378, "url": "https://www.youtube.com/watch?v=2Z1B0xESzMw&t=346s", "title": "Supervised vs Unsupervised vs Semi / Self Supervised vs Reinforcement Learning | Machine Learning", "thumbnail": "https://i.ytimg.com/vi/2Z1B0xESzMw/maxresdefault.jpg"} {"video_id": "2Z1B0xESzMw", "text": "
another thing that they do is text generation so you can see I have this comment here from eco wander shout-out to you for commenting on my video thank you so much you contribute to my growth here on youtube so thank you and and so you can see that YouTube has started", "start_timestamp": "00:06:49", "end_timestamp": "00:07:22", "start_second": 409, "end_second": 442, "url": "https://www.youtube.com/watch?v=2Z1B0xESzMw&t=409s", "title": "Supervised vs Unsupervised vs Semi / Self Supervised vs Reinforcement Learning | Machine Learning", "thumbnail": "https://i.ytimg.com/vi/2Z1B0xESzMw/maxresdefault.jpg"} {"video_id": "2Z1B0xESzMw", "text": "doing this in YouTube studio where you have like generated comments that you can just reply now I don't use those because obviously it's better to have longer comments if you want to get the engagement up so it looks better in the algorithm and it looks better for you guys if I'm doing longer comments and I'm engaging with you more when I can and so these are things that they do now though you can do automatically generated and that will be they'll read the machine learns how to understand the conflict the content of that comment and", "start_timestamp": "00:07:22", "end_timestamp": "00:07:54", "start_second": 442, "end_second": 474, "url": "https://www.youtube.com/watch?v=2Z1B0xESzMw&t=442s", "title": "Supervised vs Unsupervised vs Semi / Self Supervised vs Reinforcement Learning | Machine Learning", "thumbnail": "https://i.ytimg.com/vi/2Z1B0xESzMw/maxresdefault.jpg"} {"video_id": "2Z1B0xESzMw", "text": "what would be an appropriate response based on previous interactions they've seen on YouTube so they would have seen something similar to this on YouTube and had somebody some people are applying these kinds of comments and that's how they learn and that's how they come up with the responses and the last example that I'm going to show to do with ytube is recommender systems and that's really what I work in is recommender systems 
and you'll know this from things like Netflix or Amazon but as well YouTube does recommend you videos and you", "start_timestamp": "00:07:54", "end_timestamp": "00:08:26", "start_second": 474, "end_second": 506, "url": "https://www.youtube.com/watch?v=2Z1B0xESzMw&t=474s", "title": "Supervised vs Unsupervised vs Semi / Self Supervised vs Reinforcement Learning | Machine Learning", "thumbnail": "https://i.ytimg.com/vi/2Z1B0xESzMw/maxresdefault.jpg"} {"video_id": "2Z1B0xESzMw", "text": "homepage will be a recommended paint like a recommended set of videos that are specifically for you based on what you watched before and there's different ways that you can go by that so if you want to hear more about recommender systems that's my main research area that's the thing I know more about of like different ways that they can work so one way would be like people who like this video also liked this video and then another is you know finding similarities between the videos themselves and not actually taking into", "start_timestamp": "00:08:26", "end_timestamp": "00:08:58", "start_second": 506, "end_second": 538, "url": "https://www.youtube.com/watch?v=2Z1B0xESzMw&t=506s", "title": "Supervised vs Unsupervised vs Semi / Self Supervised vs Reinforcement Learning | Machine Learning", "thumbnail": "https://i.ytimg.com/vi/2Z1B0xESzMw/maxresdefault.jpg"} {"video_id": "2Z1B0xESzMw", "text": "account the human side of things and then another thing would be taking into account people that are similar to you and finding what kind of videos they liked so um there's just a couple different ways if you want to hear more about recommender systems let me know in the comments below so now we're going to go into the types of learning algorithms very basic introduction to these different learning algorithms and just so that you have an understanding of the basic categories of machine learning so supervised learning and the first one", "start_timestamp": "00:08:58", "end_timestamp": "00:09:30", 
"start_second": 538, "end_second": 570, "url": "https://www.youtube.com/watch?v=2Z1B0xESzMw&t=538s", "title": "Supervised vs Unsupervised vs Semi / Self Supervised vs Reinforcement Learning | Machine Learning", "thumbnail": "https://i.ytimg.com/vi/2Z1B0xESzMw/maxresdefault.jpg"} {"video_id": "2Z1B0xESzMw", "text": "we're going to talk about so the main two categories are supervised and unsupervised learning but then there's also subcategories like semi-supervised and self-supervised learning, I'm gonna talk about those, and then there's also reinforcement learning so I'm gonna talk about all of these in this video but supervised learning basically what that means is you have a training data set with associated labels so you essentially know the right answer in this situation and these labels have been provided by a human supervisor so", "start_timestamp": "00:09:30", "end_timestamp": "00:10:04", "start_second": 570, "end_second": 604, "url": "https://www.youtube.com/watch?v=2Z1B0xESzMw&t=570s", "title": "Supervised vs Unsupervised vs Semi / Self Supervised vs Reinforcement Learning | Machine Learning", "thumbnail": "https://i.ytimg.com/vi/2Z1B0xESzMw/maxresdefault.jpg"} {"video_id": "2Z1B0xESzMw", "text": "somebody has labeled this data set so I've got the example that I was talking about here so we know the size of a house we know the amenities whether it's south facing or north facing all of that kind of thing and then the accompanying housing price was obtained from whenever the house was sold so we have this training data which is actual houses and the actual housing prices they sold at and then you want to learn how to predict the housing price based on the description so we have a description and a solution and we want to learn a general", "start_timestamp": "00:10:04", "end_timestamp": "00:10:40", "start_second": 604, "end_second": 640, "url": "https://www.youtube.com/watch?v=2Z1B0xESzMw&t=604s", "title": "Supervised vs Unsupervised vs Semi / Self
Supervised vs Reinforcement Learning | Machine Learning", "thumbnail": "https://i.ytimg.com/vi/2Z1B0xESzMw/maxresdefault.jpg"} {"video_id": "2Z1B0xESzMw", "text": "solution for when we have new descriptions but no solutions so for example we have a new house coming onto the market, we know all of the information obviously, we know the number of rooms, number of bathrooms, all of that stuff, but obviously it hasn't gone to market so we don't know the price but we can use the previous examples, we can train a model on this data, to come up with an estimate of the housing price that we don't know based on all the information we do know so that's the general", "start_timestamp": "00:10:40", "end_timestamp": "00:11:13", "start_second": 640, "end_second": 673, "url": "https://www.youtube.com/watch?v=2Z1B0xESzMw&t=640s", "title": "Supervised vs Unsupervised vs Semi / Self Supervised vs Reinforcement Learning | Machine Learning", "thumbnail": "https://i.ytimg.com/vi/2Z1B0xESzMw/maxresdefault.jpg"} {"video_id": "2Z1B0xESzMw", "text": "supervised problem is we have training data with labels and we want to learn based on the training data features what the label would be and one issue with supervised learning always is that it's very hard to find labeled data and for bigger problems that have a ton of different options it can be hard to do so for example in speech recognition you know there's an endless number of combinations of words that could be said so to get somebody to come up with all of the possible sentences ever like that's not possible and then", "start_timestamp": "00:11:13", "end_timestamp": "00:11:50", "start_second": 673, "end_second": 710, "url": "https://www.youtube.com/watch?v=2Z1B0xESzMw&t=673s", "title": "Supervised vs Unsupervised vs Semi / Self Supervised vs Reinforcement Learning | Machine Learning", "thumbnail": "https://i.ytimg.com/vi/2Z1B0xESzMw/maxresdefault.jpg"} {"video_id": "2Z1B0xESzMw", "text": "having labels so having
the audio file and the accompanying labels it's just so difficult to come up with all of that so the problem with supervised learning is that it's not always possible to get this kind of labeled data but then there are cases like the housing prices where it is definitely possible because you would have things like that on record and the good thing about modern times is that there's a ton of data being collected all the time by different apps and things like that and in those circumstances you often come up with a", "start_timestamp": "00:11:50", "end_timestamp": "00:12:20", "start_second": 710, "end_second": 740, "url": "https://www.youtube.com/watch?v=2Z1B0xESzMw&t=710s", "title": "Supervised vs Unsupervised vs Semi / Self Supervised vs Reinforcement Learning | Machine Learning", "thumbnail": "https://i.ytimg.com/vi/2Z1B0xESzMw/maxresdefault.jpg"} {"video_id": "2Z1B0xESzMw", "text": "lot of labeled data naturally so unsupervised learning then is we have training data and it doesn't have labels so the goal here is to identify interesting structure in the data because we don't necessarily know the answers, and you can see an example in the graph associated here, unsupervised learning is something people don't grasp as easily because you're kind of thinking well what would be the point of this but one example is anomaly detection so that's when you're trying to find something that's a complete outlier and", "start_timestamp": "00:12:20", "end_timestamp": "00:12:53", "start_second": 740, "end_second": 773, "url": "https://www.youtube.com/watch?v=2Z1B0xESzMw&t=740s", "title": "Supervised vs Unsupervised vs Semi / Self Supervised vs Reinforcement Learning | Machine Learning", "thumbnail": "https://i.ytimg.com/vi/2Z1B0xESzMw/maxresdefault.jpg"} {"video_id": "2Z1B0xESzMw", "text": "either it's something new or it's just you know a total outlier so you can see in the data we have these clusters that we can see and obviously in a 2d data set like this it's very easy to
see these clusters with your eyes not necessarily using any kind of algorithm so we can see we have these clusters of blue spots that are normal and then we have these orange ones that are being classified as noise because they don't fit into these clusters so those are the kinds of things you might want to look out for in a data set and", "start_timestamp": "00:12:53", "end_timestamp": "00:13:27", "start_second": 773, "end_second": 807, "url": "https://www.youtube.com/watch?v=2Z1B0xESzMw&t=773s", "title": "Supervised vs Unsupervised vs Semi / Self Supervised vs Reinforcement Learning | Machine Learning", "thumbnail": "https://i.ytimg.com/vi/2Z1B0xESzMw/maxresdefault.jpg"} {"video_id": "2Z1B0xESzMw", "text": "obviously you're saying you know why do we need unsupervised algorithms if we can just see this but that's in a two-dimensional problem, so here we've just got two variables but if we have like 50 you can't see this with your own eyes when there's outliers like that but you can use an algorithm to cluster your data and then if we have data points that are so far away from the center of a cluster, and a cluster is a group here so you can see them in the graph as well, if we have something that's so far away from a cluster then we know it's probably an", "start_timestamp": "00:13:27", "end_timestamp": "00:14:01", "start_second": 807, "end_second": 841, "url": "https://www.youtube.com/watch?v=2Z1B0xESzMw&t=807s", "title": "Supervised vs Unsupervised vs Semi / Self Supervised vs Reinforcement Learning | Machine Learning", "thumbnail": "https://i.ytimg.com/vi/2Z1B0xESzMw/maxresdefault.jpg"} {"video_id": "2Z1B0xESzMw", "text": "outlier or something like that so that's an example now we're gonna go into the sort of subcategories here of supervised and unsupervised so hopefully you have a general understanding of the difference here, supervised we have labels, unsupervised we don't, and so the first example we're talking about is self-supervised learning so we have an unlabeled data set but actually what people do in self-supervised learning is they actually turn an unlabeled
data set but actually what people do in these self supervised learning is they actually turn an unsupervised they turn an unlabeled data set into a label data set by", "start_timestamp": "00:14:01", "end_timestamp": "00:14:37", "start_second": 841, "end_second": 877, "url": "https://www.youtube.com/watch?v=2Z1B0xESzMw&t=841s", "title": "Supervised vs Unsupervised vs Semi / Self Supervised vs Reinforcement Learning | Machine Learning", "thumbnail": "https://i.ytimg.com/vi/2Z1B0xESzMw/maxresdefault.jpg"} {"video_id": "2Z1B0xESzMw", "text": "manipulating the data in some way so the basic task here is to find the missing part given and one part so to explain that better want to imagine if we have like what we had before was a set of sentences that was the data we were given what the um what the researcher does is they take as one part of and they generate this training data set with labels so we have the information is the trick the sentence with a piece missing and the answer is the piece that's missing so they train this way and this way we have labels because we have these", "start_timestamp": "00:14:37", "end_timestamp": "00:15:24", "start_second": 877, "end_second": 924, "url": "https://www.youtube.com/watch?v=2Z1B0xESzMw&t=877s", "title": "Supervised vs Unsupervised vs Semi / Self Supervised vs Reinforcement Learning | Machine Learning", "thumbnail": "https://i.ytimg.com/vi/2Z1B0xESzMw/maxresdefault.jpg"} {"video_id": "2Z1B0xESzMw", "text": "sentence with the piece missing and we have the missing piece so they generate this label data set from unlabeled data so it works in these kinds of situations so in speech recognition well in sentence like if you're trying to understand the semantics of sentences you can take a sentence remove one piece and see if the system can learn to fill in the blanks and similarly with images we can take out a small part of the image and see kind of a system learn to spot fill in that missing part of the image and actually it has 
been", "start_timestamp": "00:15:24", "end_timestamp": "00:15:57", "start_second": 924, "end_second": 957, "url": "https://www.youtube.com/watch?v=2Z1B0xESzMw&t=924s", "title": "Supervised vs Unsupervised vs Semi / Self Supervised vs Reinforcement Learning | Machine Learning", "thumbnail": "https://i.ytimg.com/vi/2Z1B0xESzMw/maxresdefault.jpg"} {"video_id": "2Z1B0xESzMw", "text": "relatively pretty successful and if you want to look up self-supervised learning it is pretty interesting to see how a system can learn these things and that's usually done with deep learning because it's not something a traditional machine learning algorithm can necessarily do itself, like a linear model is not going to do that, and so yeah basically in this sort of learning you want to learn about the underlying properties of the data and this would have required a lot more data if you don't do it this way so this is", "start_timestamp": "00:15:57", "end_timestamp": "00:16:27", "start_second": 957, "end_second": 987, "url": "https://www.youtube.com/watch?v=2Z1B0xESzMw&t=957s", "title": "Supervised vs Unsupervised vs Semi / Self Supervised vs Reinforcement Learning | Machine Learning", "thumbnail": "https://i.ytimg.com/vi/2Z1B0xESzMw/maxresdefault.jpg"} {"video_id": "2Z1B0xESzMw", "text": "kind of like the system's being thrown into the deep end, being expected to generate this missing piece from basic examples that it's seen before so moving on we have semi-supervised learning so I mentioned before that labels are very hard to obtain especially in cases like speech recognition and web content classification so if you've got a webpage and you want to add tags to it you know it's very time consuming for a person to sit down and read through that entire webpage and manually assign tags and then as well", "start_timestamp": "00:16:27", "end_timestamp": "00:17:07", "start_second": 987, "end_second": 1027, "url":
"https://www.youtube.com/watch?v=2Z1B0xESzMw&t=987s", "title": "Supervised vs Unsupervised vs Semi / Self Supervised vs Reinforcement Learning | Machine Learning", "thumbnail": "https://i.ytimg.com/vi/2Z1B0xESzMw/maxresdefault.jpg"} {"video_id": "2Z1B0xESzMw", "text": "you probably have to have a couple of different people doing it so that it's not just one person's bias, it's a few people who are collectively doing it, so it's very time-consuming, it takes a lot of human resources to do all of that labeling so semi-supervised learning algorithms have been developed for the cases where we only have a small amount of labeled data, and I mean a very small amount when there is a huge amount of data that could potentially be there, so like what I said about sentences, there's unlimited possibilities for sentences", "start_timestamp": "00:17:07", "end_timestamp": "00:17:39", "start_second": 1027, "end_second": 1059, "url": "https://www.youtube.com/watch?v=2Z1B0xESzMw&t=1027s", "title": "Supervised vs Unsupervised vs Semi / Self Supervised vs Reinforcement Learning | Machine Learning", "thumbnail": "https://i.ytimg.com/vi/2Z1B0xESzMw/maxresdefault.jpg"} {"video_id": "2Z1B0xESzMw", "text": "and we might have a small number of labeled audio-to-sentence files and a ton of audio data that hasn't been labeled so basically this is used in any task that requires a big amount of human resources to do the labeling and when there's a big amount of possibilities so the characteristics here are that we have a huge amount of unlabeled data, otherwise we would just use a supervised learning algorithm, and there's an input/output proximity symmetry so what does that mean, it basically means that the underlying requirement here is that", "start_timestamp": "00:17:39", "end_timestamp": "00:18:12", "start_second": 1059, "end_second": 1092, "url": "https://www.youtube.com/watch?v=2Z1B0xESzMw&t=1059s", "title": "Supervised vs Unsupervised vs Semi / Self Supervised vs Reinforcement Learning |
Machine Learning", "thumbnail": "https://i.ytimg.com/vi/2Z1B0xESzMw/maxresdefault.jpg"} {"video_id": "2Z1B0xESzMw", "text": "if two inputs are similar their outputs should be similar and that's a very simple thing to understand but if it's not present this won't work because you can't get the system to learn that some inputs that are similar should have very different outputs, this kind of algorithm wouldn't make sense in that way, so that's one other requirement and another requirement is that the labeling should be relatively simple so we need the labeling to not be more difficult if we have this middle", "start_timestamp": "00:18:12", "end_timestamp": "00:18:47", "start_second": 1092, "end_second": 1127, "url": "https://www.youtube.com/watch?v=2Z1B0xESzMw&t=1092s", "title": "Supervised vs Unsupervised vs Semi / Self Supervised vs Reinforcement Learning | Machine Learning", "thumbnail": "https://i.ytimg.com/vi/2Z1B0xESzMw/maxresdefault.jpg"} {"video_id": "2Z1B0xESzMw", "text": "step so trying to get the labels from the small set of labeled data should not be more difficult than it would be if we had humans doing it, as well, generally this is a low-dimensional problem meaning that there's not a ton of features and a ton of input variables and things like that so the last one we're going to talk about is reinforcement learning and this could definitely do with a whole other video all about reinforcement learning but I just wanted to give a basic introduction to it", "start_timestamp": "00:18:47", "end_timestamp": "00:19:16", "start_second": 1127, "end_second": 1156, "url": "https://www.youtube.com/watch?v=2Z1B0xESzMw&t=1127s", "title": "Supervised vs Unsupervised vs Semi / Self Supervised vs Reinforcement Learning | Machine Learning", "thumbnail": "https://i.ytimg.com/vi/2Z1B0xESzMw/maxresdefault.jpg"} {"video_id": "2Z1B0xESzMw", "text": "because I think that's something people are usually
trying to figure out, what's the difference between supervised, unsupervised and reinforcement learning, but basically this is a form of supervised learning where the main factor is that the system focuses on maximizing a reward signal so the reward signal is the supervision and that is convenient so we have an agent and it's set into the feature space so all of the input variable options are there and it has to learn what actions it can take in this", "start_timestamp": "00:19:16", "end_timestamp": "00:19:53", "start_second": 1156, "end_second": 1193, "url": "https://www.youtube.com/watch?v=2Z1B0xESzMw&t=1156s", "title": "Supervised vs Unsupervised vs Semi / Self Supervised vs Reinforcement Learning | Machine Learning", "thumbnail": "https://i.ytimg.com/vi/2Z1B0xESzMw/maxresdefault.jpg"} {"video_id": "2Z1B0xESzMw", "text": "feature space in order to maximize the reward signal so one example would be AI chess or any form of AI game playing, it's generally done with reinforcement learning, so if for example you think about the chess board, the bot that's playing chess has all of these options of the different moves it can take and the reward it's trying to maximize is winning the game and that's all that really matters, that is, whether it wins the game or not, so it'll play this game hundreds and hundreds of times", "start_timestamp": "00:19:53", "end_timestamp": "00:20:28", "start_second": 1193, "end_second": 1228, "url": "https://www.youtube.com/watch?v=2Z1B0xESzMw&t=1193s", "title": "Supervised vs Unsupervised vs Semi / Self Supervised vs Reinforcement Learning | Machine Learning", "thumbnail": "https://i.ytimg.com/vi/2Z1B0xESzMw/maxresdefault.jpg"} {"video_id": "2Z1B0xESzMw", "text": "until it learns what exact moves to make in which scenarios so the environment also plays a role here so you can see in the graph we've got the agent, actions, environment and so here the agent would be the bot playing chess, the action will be a
move that the bot takes, the environment is, in this case we have somebody else playing chess, so the move that they take is the sort of environment that it needs to react to as well as the chess board that results from these actions so when the bot makes a move their pieces move so they need to", "start_timestamp": "00:20:28", "end_timestamp": "00:21:09", "start_second": 1228, "end_second": 1269, "url": "https://www.youtube.com/watch?v=2Z1B0xESzMw&t=1228s", "title": "Supervised vs Unsupervised vs Semi / Self Supervised vs Reinforcement Learning | Machine Learning", "thumbnail": "https://i.ytimg.com/vi/2Z1B0xESzMw/maxresdefault.jpg"} {"video_id": "2Z1B0xESzMw", "text": "update their understanding of the moves that are available based on the move that they've taken and then again by the moves that the other person has taken and so the state will be the layout of the board game essentially and the reward again will just be whether they're winning or not, their probability of winning will probably be included in there some way based on the layout of the board, and they learn by trial and error what is the best move to make in what situation and there's also usually a", "start_timestamp": "00:21:09", "end_timestamp": "00:21:46", "start_second": 1269, "end_second": 1306, "url": "https://www.youtube.com/watch?v=2Z1B0xESzMw&t=1269s", "title": "Supervised vs Unsupervised vs Semi / Self Supervised vs Reinforcement Learning | Machine Learning", "thumbnail": "https://i.ytimg.com/vi/2Z1B0xESzMw/maxresdefault.jpg"} {"video_id": "2Z1B0xESzMw", "text": "
example is things like a but that has to travel around and collect things obviously every move that it takes won't necessarily collect", "start_timestamp": "00:21:46", "end_timestamp": "00:22:22", "start_second": 1306, "end_second": 1342, "url": "https://www.youtube.com/watch?v=2Z1B0xESzMw&t=1306s", "title": "Supervised vs Unsupervised vs Semi / Self Supervised vs Reinforcement Learning | Machine Learning", "thumbnail": "https://i.ytimg.com/vi/2Z1B0xESzMw/maxresdefault.jpg"} {"video_id": "2Z1B0xESzMw", "text": "something but a few moves from there if we continue along the right track they'll pick up something and that will be a reward so the delayed ward is a big characteristic there and then because that's how it learns how this set of moves are important as not just a single move but like a series of moves and then the trial by error is how it learns and yeah usually there'll be like hundreds and hundreds and thousands of instances of this and then that's how they learn so again reinforcement learning can do its whole other video I just wanted to", "start_timestamp": "00:22:22", "end_timestamp": "00:22:55", "start_second": 1342, "end_second": 1375, "url": "https://www.youtube.com/watch?v=2Z1B0xESzMw&t=1342s", "title": "Supervised vs Unsupervised vs Semi / Self Supervised vs Reinforcement Learning | Machine Learning", "thumbnail": "https://i.ytimg.com/vi/2Z1B0xESzMw/maxresdefault.jpg"} {"video_id": "2Z1B0xESzMw", "text": "give a basic introduction to all of these things so I really hope that that was helpful for you guys I hope you enjoyed this video and if you want to see more videos like this again do be sure to give it a thumbs up onto um comment down below because this is a new style of video on this channel and when we're trying get new things here I need to know they're working so buying you guys giving the video a thumbs up on commenting below it then that is way for me to understand whether it's working or not and then in future that means I can", 
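The trial-and-error, delayed-reward learning the first video describes can be sketched with tabular Q-learning. This is an illustrative sketch only: the video names no particular algorithm, and the chain world, hyperparameters, and function names below are invented for the example. The agent is rewarded only at the final cell, yet repeated episodes propagate credit back through the discount factor to the earlier moves that made the payoff possible.

```python
import random

# Tabular Q-learning on a tiny "collector" chain world (illustrative sketch).
# The agent starts at cell 0 and is rewarded only upon reaching the last
# cell, so the reward for early moves is delayed: they earn nothing directly
# but make the final payoff possible.

N_STATES = 5          # cells 0..4; cell 4 is the goal
ACTIONS = (-1, +1)    # move left or right
GAMMA = 0.9           # discount factor: credits earlier moves for later reward
ALPHA = 0.5           # learning rate
EPSILON = 0.1         # exploration rate (the "trial" in trial and error)


def step(state, action):
    """Apply a move; reward 1.0 only when the goal cell is reached."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    done = next_state == N_STATES - 1
    return next_state, (1.0 if done else 0.0), done


def train(episodes=500, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            if rng.random() < EPSILON:  # explore: try a random move
                action = rng.choice(ACTIONS)
            else:                       # exploit: best move under current estimates
                best = max(q[(state, a)] for a in ACTIONS)
                action = rng.choice([a for a in ACTIONS if q[(state, a)] == best])
            next_state, reward, done = step(state, action)
            # Q-learning update: the delayed reward propagates backwards
            # through GAMMA * max Q(next_state, .) over many episodes.
            target = reward + GAMMA * max(q[(next_state, a)] for a in ACTIONS)
            q[(state, action)] += ALPHA * (target - q[(state, action)])
            state = next_state
    return q


q = train()
# After enough trial and error, the greedy policy should move right everywhere.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

With a few hundred episodes of trial and error, the greedy policy ends up moving right from every cell even though only the final step ever pays out, which is exactly the delayed-reward effect described above.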
"start_timestamp": "00:22:55", "end_timestamp": "00:23:25", "start_second": 1375, "end_second": 1405, "url": "https://www.youtube.com/watch?v=2Z1B0xESzMw&t=1375s", "title": "Supervised vs Unsupervised vs Semi / Self Supervised vs Reinforcement Learning | Machine Learning", "thumbnail": "https://i.ytimg.com/vi/2Z1B0xESzMw/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "okay, so hi everyone. Huge thanks to Alexander hyung you and Sammy for organizing this incredible workshop; excited to be here, and thanks everyone for coming. I know I'm between you and lunch, so I'll try and keep things on time. So today I'm really excited to talk a bit about transfer learning in the context of deep learning. Transfer learning is this incredibly popular technique; it's used almost everywhere that we apply deep neural networks, but there's also this challenge that we really don't understand many aspects of", "start_timestamp": "00:00:00", "end_timestamp": "00:00:36", "start_second": 0, "end_second": 36, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=0s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "it all that well. And so by the end of this talk, I hope that you have some sense of all the many different ways it comes up, and also some of the interesting open questions there are in the field. So a lot of this talk is going to be based off of a paper, Understanding Transfer Learning for Medical Imaging, that's joint work with my collaborators Chiyuan Zhang, Jon Kleinberg, and Samy Bengio. Okay, so let's dive right in: what is transfer learning? Very basic. So in the settings that we're gonna really be", "start_timestamp": "00:00:36", "end_timestamp": "00:01:09", "start_second": 36, "end_second": 69, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=36s", "title": "Towards Understanding Transfer Learning with
Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "studying, what you do is you first learn a classifier on some task, let's call it task A, and sometimes we call this pre-training. Then, having learned task A, you continue training this classifier on a new task, task B, and the goal is really to get good performance on task B. So if the goal is to get good performance on task B, you might ask why train on task A at all, and the general belief in the community is that if, you know, task A is sort of complex and diverse and very general, then by going through this process of training on task A, hopefully", "start_timestamp": "00:01:09", "end_timestamp": "00:01:44", "start_second": 69, "end_second": 104, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=69s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "you've learned useful things that you can sort of transfer over when you start retraining on task B. So this high-level framework actually has connections to a lot of interesting work that's come out from the theoretical perspective; I think Zac mentioned some in his talk earlier today, and I wanted to give a quick pointer to Shai Ben-David, who is maybe here... no, I don't see him... oh, you're here, okay, in the front, where there's been a lot of interesting work done in this very related field of", "start_timestamp": "00:01:44", "end_timestamp": "00:02:16", "start_second": 104, "end_second": 136, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=104s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "domain adaptation to try and understand this from a more formal framework, and I
think the time's really come to revisit some of these ideas and see how we can use them to give us better insights for transfer learning in the deep learning context. So speaking of transfer learning and deep learning, what does that look like? Well, it's very simple: you sort of just replace classifier with deep network. So you have this deep network, it's gonna be your classifier; you randomly initialize it and you train on task A, and this will", "start_timestamp": "00:02:16", "end_timestamp": "00:02:44", "start_second": 136, "end_second": 164, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=136s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "be known as pre-training. Then you take this network, and it's sort of converged to some set of parameters on task A, and then you train it again, this time on task B, and then voila, you have your final model that you're going to deploy, and it's hopefully going to do great on task B. So this paradigm is pretty simple, but it's been extremely successful, and it's probably the computer vision community that really showed us how successful this could be in various applications. So specifically, in the current setting,", "start_timestamp": "00:02:44", "end_timestamp": "00:03:15", "start_second": 164, "end_second": 195, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=164s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "what people do is pre-train a large convolutional neural network on some data set of large images, and, you know, ImageNet gets a special shout-out here; it's extremely popular for pre-training, so much so that there are entire papers saying why ImageNet is good for
transfer learning. But besides that, there are a couple of other data sets: MS COCO is a very popular, big computer vision benchmark for object detection that's sometimes used for pre-training, and companies also tend to have their own internal data sets which they", "start_timestamp": "00:03:15", "end_timestamp": "00:03:44", "start_second": 195, "end_second": 224, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=195s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "like using for pre-training; a really big one that Google likes using is JFT, which has sort of three hundred million images, so absolutely enormous. What's been really interesting to see in the past couple of years, though, is that transfer learning has also become very popular in applications in natural language processing. So in the past, people were able to transfer word embeddings; you've probably seen these diagrams of taking a word in your vocabulary, getting a vector representation, and then these vector representations have all of these", "start_timestamp": "00:03:44", "end_timestamp": "00:04:14", "start_second": 224, "end_second": 254, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=224s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "nice properties. We could do that for a while, but it's only more recently that all of these neural networks that are named after Muppets for some reason have been developed, and that lets us transfer much more complex representations of language, and that's shown to be very successful in a lot of standard natural language tasks. And now I have to mention the most important part of transfer learning, which is GitHub for transfer learning. So Ben mentioned
in his talk earlier that, you know, GitHub can be this", "start_timestamp": "00:04:14", "end_timestamp": "00:04:46", "start_second": 254, "end_second": 286, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=254s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "very useful research resource, and that's definitely very true in transfer learning. So in transfer learning applications, nobody actually bothers with the pre-training stuff; instead you go to GitHub and you find the model you're interested in and find all of its pre-trained weights, and then you just download it, and after you download it you just perform the fine-tuning for whatever task you're interested in. And this is really important, because what this has enabled is it's enabled people who totally", "start_timestamp": "00:04:46", "end_timestamp": "00:05:14", "start_second": 286, "end_second": 314, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=286s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "aren't working on core machine learning to apply transfer learning to all of their problems, and nowhere is this more true than in medical imaging, where the entire community has almost universally adopted transfer learning as this paradigm. And what's the setup here? Well, the setup here is that you take this sort of standard pre-trained ImageNet model, something large and complex like Inception v3, and then you have these pre-trained weights on ImageNet that you sort of downloaded from somewhere, so", "start_timestamp": "00:05:14", "end_timestamp": "00:05:47", "start_second": 314, "end_second": 347, "url": 
"https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=314s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "ImageNet has, you know, amongst other things, a whole bunch of different dog breeds, and then, bizarrely, you sort of fine-tune this model to do all kinds of medical predictions. So you fine-tune it to predict diseases on chest x-rays, retinal diseases, PET scans for early detection of Alzheimer's, and sort of the most exotic application was even screening human embryos for IVF treatments. So people are just going out there and doing this, and when you think about this, it's kind of bizarre, because the reason the", "start_timestamp": "00:05:47", "end_timestamp": "00:06:17", "start_second": 347, "end_second": 377, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=347s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "community, um, you know, wanted to do transfer learning is this belief that you learn features on this source dataset and then you can kind of transfer all of this to your target tasks, but of course medical images and natural images are extremely different to each other, so it's kind of interesting to understand what's going on here. One final thing is these aren't just turning into papers where you see accuracies, but they're actually being deployed in clinic. So this is the example of one company", "start_timestamp": "00:06:17", "end_timestamp": "00:06:45", "start_second": 377, "end_second": 405, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=377s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail":
"https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "called IDx, which literally states that it takes Inception v3 pretrained on ImageNet and is using these to diagnose retinal diseases, and it's sort of out in clinic right now. So, adversarial examples for the Inception model, I mean for this problem? Adversarial examples for medical images, you mean? I'm just asking: you're saying this is being deployed, and it's natural to think about adversarial examples for the Inception... not that, I mean, so for example, you know, I'm not talking about transferability. I mean, that's an interesting", "start_timestamp": "00:06:45", "end_timestamp": "00:07:22", "start_second": 405, "end_second": 442, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=405s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "question, sort of like how much can you do, like, these are completely different datasets, so I think that's also a very interesting thing to look at. Um, but okay, so while we're going ahead and deploying all of these, there's kind of this real challenge, because even in the natural image setting we actually don't understand the effects of transfer learning that well. So I'm gonna review some results, which have just come out in literally the last year, that have really challenged the common assumptions people", "start_timestamp": "00:07:22", "end_timestamp": "00:07:48", "start_second": 442, "end_second": 468, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=442s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "have on transfer learning. So this is a picture of MS COCO, and I mentioned it earlier; it's a very popular
computer vision benchmark for doing object detection. So this is kind of what it looks like: you're trying to learn these bounding boxes, and in almost all of the competition entries, the standard thing to do is you pre-train on ImageNet and then you fine-tune your ImageNet model on this MS COCO task. But then last year we got this paper, Rethinking ImageNet Pre-training, which basically showed, just by kind of", "start_timestamp": "00:07:48", "end_timestamp": "00:08:20", "start_second": 468, "end_second": 500, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=468s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "being maybe a little bit more careful about how you pick your learning rate, you actually get exactly the same results from random initialization as you do with pre-training. Now, part of the reason people really like pre-training on ImageNet is again this belief that it's kind of this big, diverse task, and if you train on there you're gonna learn lots of interesting features that you can reuse in lots of places; so this underlying assumption is more data is great. But then there was", "start_timestamp": "00:08:20", "end_timestamp": "00:08:50", "start_second": 500, "end_second": 530, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=500s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "another paper, also just last year, which looked at pre-training on JFT, which is sort of this even bigger data set, sort of 300 million images, and what they find is that more pre-training data is not always better, which is this very closely held assumption in the community. So in particular, they sort of
try training from random initialization versus the entire data set, and in a bunch of places the performances are actually really pretty comparable. Another criterion: if you fix the number of iterations or training data", "start_timestamp": "00:08:50", "end_timestamp": "00:09:37", "start_second": 530, "end_second": 577, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=530s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "more source domain data is not better? I suspect that that won't be true, right? It's like fitting more to the training distribution; it's not necessarily better, so the learned features are not necessarily better. So they're not fixing the number of training iterations; I think they're literally just training to convergence and seeing what it's like. Right, yeah, right, like you're saying maybe you just sort of do early stopping, and then you just fix it, whatever the number is that you got from the first amount of examples: you do", "start_timestamp": "00:09:37", "end_timestamp": "00:10:14", "start_second": 577, "end_second": 614, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=577s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "the same number of updates for more data or data points. You know, I'm not sure it'll make a difference; I see your point, and I'd have to check to see exactly what they did, but we can discuss this offline. Yeah, but yeah, so one interesting point to make here, and I think this is actually connected to some of the theoretical work that's come up in related topics, is if you do a better job of actually picking the subsets of data that you train on, you actually do see significant performance
gains, and I", "start_timestamp": "00:10:14", "end_timestamp": "00:10:50", "start_second": 614, "end_second": 650, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=614s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "think again this is a place where we can revisit some of the theoretical ideas and see if we can do something better; this process is relatively ad hoc. And then finally, one other paper that came out just earlier this year, Do Better ImageNet Models Transfer Better?, implicitly makes this very interesting observation: when you decide to do transfer learning, you're not just taking the features, but you're also committing to an architecture, because you download them both together, and if", "start_timestamp": "00:10:50", "end_timestamp": "00:11:21", "start_second": 650, "end_second": 681, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=650s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "your task looks very different to ImageNet, you know, this is something you should be aware of and thinking about. And so, do better ImageNet models transfer better? Well, it's complicated: there's this nuanced relationship that depends on how you regularize during the training process, the specifics of the data set and its size, etc., and so you actually see a lot of variability based off of all of these conditions, and sometimes you actually get pretty similar performance. As usual, optimal weights... optimal where? Um, oh, here", "start_timestamp": "00:11:21", "end_timestamp": "00:11:57", "start_second": 681, "end_second": 717, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=681s", "title": "Towards
Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "um, yeah, so this is optimal for transfer. So these are sort of the standard ways in which people can perform regularization, and this is how you tend to download your pre-trained models, but these are actually not as good for doing transfer learning on, and here, if you train in a slightly different way, you actually end up with better features for transfer. Yeah, it's really thorough, so I highly recommend reading it. So they tried pretty much everything: they tried this sort of fixed-feature", "start_timestamp": "00:11:57", "end_timestamp": "00:12:33", "start_second": 717, "end_second": 753, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=717s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "extractor setting, where you freeze things and then just retrain some of the top; they also tried the fine-tuning setting, where you train everything; and they were also, of course, comparing to training from scratch in these settings. And so, yeah, the exact results vary a little bit; I think with fine-tuning maybe you have slightly less sensitivity to some of these settings, particularly for larger target datasets, but yeah, they try everything, so definitely worth reading. So this is all", "start_timestamp": "00:12:33", "end_timestamp": "00:13:00", "start_second": 753, "end_second": 780, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=753s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "in the natural image setting; what about in the medical
image setting? Actually, hardly anything is explored in the medical image setting, which I think is something we could really try and address, from two angles. Firstly, of course, I think it's important to study it, because we're actually deploying these in a lot of places and it's important to understand what's going on, particularly as this is sort of a very counterintuitive thing to do. Secondly, I think this medical imaging setting also captures a very interesting", "start_timestamp": "00:13:00", "end_timestamp": "00:13:29", "start_second": 780, "end_second": 809, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=780s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "part of, captures an interesting regime of performing transfer learning: here your source and your target tasks are extremely different to each other, like the data is different, the actual task you're going for is different, and as we'll see later, there are still some benefits that come up, so understanding why that happens I think is very interesting to explore from a purely principled angle. Okay, so in this talk, first we'll do a quick performance evaluation of transfer, just", "start_timestamp": "00:13:29", "end_timestamp": "00:14:00", "start_second": 809, "end_second": 840, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=809s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "to set things up, and then we're going to go into a little bit more detail and ask how pre-training is affecting the actual features we're learning in our model, and for this I'll also touch on some work we've been looking at on studying representational
similarity of networks using canonical correlation analysis. And then finally, and very interestingly, and also somewhat paradoxically, we'll look at some feature-independent properties of transfer that we see. Okay, so the first... yeah, so I'm fine, um, so I think the", "start_timestamp": "00:14:00", "end_timestamp": "00:14:47", "start_second": 840, "end_second": 887, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=840s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "question is, is there a precise definition of what fine-tuning means in this setting? As in, do you stop after some amount of time, or is there something very specified? Um, the answer is, not really; you're really just training to convergence, or pretty much you're mostly stopping once you see that the validation loss has converged. Yeah. But okay, so let's breeze through the first part. Um, so in the first part we're gonna just try and evaluate the actual performance", "start_timestamp": "00:14:47", "end_timestamp": "00:15:19", "start_second": 887, "end_second": 919, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=887s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "gains of transfer, and to do this, the way people tend to evaluate transfer learning, because you're sort of downloading these datasets... sorry, these models and these weights, from GitHub, is you just tend to evaluate it on standard ImageNet architectures, so like some big complicated thing like this. But as I mentioned, if your task looks really different to ImageNet, it's kind of important to think about the fact that you're making this implicit architectural choice, and so in our
evaluation we also evaluated this much", "start_timestamp": "00:15:19", "end_timestamp": "00:15:48", "start_second": 919, "end_second": 948, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=919s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "smaller family of architectures that we call CBRs; they're really just vanilla convolutional neural networks. They're called CBRs because the most popular and successful way of having a vanilla convnet these days is to have a convolution followed by batch norm followed by a ReLU activation, and these things are really tiny: they're maybe one eighth to one twentieth the size of your full-fledged ImageNet architecture. And then in terms of tasks, we looked at two large-scale medical imaging", "start_timestamp": "00:15:48", "end_timestamp": "00:16:19", "start_second": 948, "end_second": 979, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=948s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "tasks: one of them is diagnosing different diseases from chest x-rays, and another one is diagnosing a certain kind of retinal disease, diabetic retinopathy, from scans of the back of your eye. So we ran these experiments across all these different architectures, random initialization and transfer learning; there were a lot of experiments, but we saw some clear takeaways. So firstly, perhaps, you know, following on from some of the results we've seen in the natural image setting, transfer and random initialization actually", "start_timestamp": "00:16:19", "end_timestamp": "00:16:52", "start_second": 979, "end_second": 1012, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=979s", "title": 
"Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "performed pretty comparably. So here's a sort of complicated results table we got from our chest x-ray experiments, and if we look at where transfer and random initialization perform comparably, it's actually for most of the table, and in some cases, sorry, random initialization even outperforms transfer. Secondly, and interestingly, we observed that these simple vanilla networks actually performed about as well as these standard ImageNet architectures. So we weren't really trying to optimize for performance; we", "start_timestamp": "00:16:52", "end_timestamp": "00:17:26", "start_second": 1012, "end_second": 1046, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1012s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "just wanted to try some simple things to see what they look like compared to ImageNet architectures, because those architectures are really pretty different to what you might want for the medical data. The simple architectures, did you also pre-train them? Yes, yes, so we pre-trained them on ImageNet and then fine-tuned on the medical data sets, yeah, and then did those comparisons. So the data set sizes? So we actually varied this, so variations of this", "start_timestamp": "00:17:26", "end_timestamp": "00:18:01", "start_second": 1046, "end_second": 1081, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1046s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "are in the paper; um, the
full data set is around 200k images, I think, which is a reasonable size but sort of smallish compared to ImageNet, which is in the millions, so it's interesting to see that these perform comparably. And then finally, we also saw that ImageNet performance was not actually indicative of how these architectures would perform on a medical task. So what do I mean? Let's look at another results table: here's ResNet-50, and here are these two architectures, and here's what we saw", "start_timestamp": "00:18:01", "end_timestamp": "00:18:32", "start_second": 1081, "end_second": 1112, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1081s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "when we train them on ImageNet: these architectures actually perform horribly; they're not designed for ImageNet, in a way that I can explain offline. But when we look at how they do on the medical task, they're actually really within ballpark performance of each other. Oh, these architectures? They're just a family of simple vanilla convolutional networks; we kind of just made them up, just because we wanted something extremely simple, and I can explain why you're seeing this", "start_timestamp": "00:18:32", "end_timestamp": "00:19:06", "start_second": 1112, "end_second": 1146, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1112s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "difference, I guess, offline if you want, but there are kind of clear ways in which ImageNet is not the right way to design your architecture for some of these tasks, and so we were able to take advantage of that. You mean
the data set size so yes it's something we've varied but the full dataset sizes maybe two hundred thousand images for both of them yeah so I mean this last point is interesting because other papers so this paper 'Do Better ImageNet Models Transfer Better' and even the papers that Ben Recht's group has been working on on", "start_timestamp": "00:19:06", "end_timestamp": "00:19:40", "start_second": 1146, "end_second": 1180, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1146s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "this distribution shift and sort of seeing how performance correlates across distribution shift does show that there's this correlation but here we don't see this okay so that was kind of a quick sort of overview of what the performance evaluations look like but we like to go beyond just the performance evaluation we kind of really want to understand what is transfer learning doing to our architecture it's like what are we gaining from applying transfer learning if anything at all and you know", "start_timestamp": "00:19:40", "end_timestamp": "00:20:09", "start_second": 1180, "end_second": 1209, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1180s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "I mean what we saw is like at a performance level things are performing about the same and so a really fundamental question is well random initialization and pre train weights don't really look anything like each other so we have one thing sitting in one part of the space another set of parameters sitting in another part of the space what's happening during this fine-tuning process is it just that it doesn't really matter how you
initialize and then after you fine-tune you just sort of change dramatically and", "start_timestamp": "00:20:09", "end_timestamp": "00:20:34", "start_second": 1209, "end_second": 1234, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1209s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "everything you initialize with is erased so we could just kind of do whatever we liked or is something else going on and to answer that question what we really want to do is we want to look at some of the latent representations of these models and take a measurement to see how similar they are the problem with trying to do this kind of an analysis is that comparing representations from different neural networks is really difficult there's this alignment problem it's not like one neuron in a layer of", "start_timestamp": "00:20:34", "end_timestamp": "00:21:03", "start_second": 1234, "end_second": 1263, "url":
"https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1263s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "data set of interest and we're going to think of the neurons representation what it's learned as what we call an activation vector so we feed in this kind of data set of interest and this neuron is going to emit a scalar value across all of these these input points and we can literally just sort of stack all of these and that'll form a vector that we call the the activation vector of this neuron and so there's this sort of nice framework where we kind of think of these neurons as these activation vectors and because layers are linearly", "start_timestamp": "00:21:31", "end_timestamp": "00:22:03", "start_second": 1291, "end_second": 1323, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1291s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "combining their neurons there are sort of these subspaces that are sort of spanned by their neurons so I'm so so this is going to be at a very high level the details of like all the the the mathematical details of CCA are in their relevant papers but so at a very high level what we do is we take in two sets of these neuron activity vectors and they're typically going to be layers so layer from one network a layer from another network and then CCA will find the linear combination of these neurons that maximizes correlation and by", "start_timestamp": "00:22:03", "end_timestamp": "00:22:36", "start_second": 1323, "end_second": 1356, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1323s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": 
"https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "iteratively applying this process we can basically get something like a similarity score between these layers so the score just tells us well you know how similar are the representations learned by these layers sort of up to scaled linear transforms so that's the way in which it addresses this misalignment issue and so previously we've kind of used this to study various properties of convolutional networks lately it's become quite popular in studying various different kinds of language models in", "start_timestamp": "00:22:36", "end_timestamp": "00:23:05", "start_second": 1356, "end_second": 1385, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1356s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "NLP and more broadly this kind of entire area of studying similarity between deep representations has quite a lot of people who have been thinking about it so I'm mostly gonna be talking about using CCA but the first paper here was probably this paper called Convergent Learning by Li, Yosinski, Clune, Lipson and Hopcroft in ICLR 2016 where instead of dealing with the distributed problem they just tried to find nice one-to-one mappings between neurons in different networks then we kind of followed up with some of", "start_timestamp": "00:23:05", "end_timestamp": "00:23:39", "start_second": 1385, "end_second": 1419, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1385s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "the CCA work then there was a more recent paper that's really pushing this framework of having activation vectors for neurons and
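As a rough sketch of the CCA procedure just described, not the speakers' actual code: treat each neuron as an activation vector over a probe dataset (one column per neuron), then the canonical correlations between two layers give a similarity score. Function and variable names here are my own.

```python
import numpy as np

def cca_similarity(X, Y):
    """Mean canonical correlation between two layers' activations.

    X: (n_examples, n_neurons_x), Y: (n_examples, n_neurons_y);
    each column is one neuron's "activation vector" over the probe set.
    """
    # Center each neuron's activation vector.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # Orthonormal bases for the subspaces spanned by each layer's neurons;
    # the canonical correlations are the singular values of Qx^T Qy.
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    sigma = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return float(np.mean(np.clip(sigma, 0.0, 1.0)))
```

A layer compared against an invertible linear transform of itself scores near 1, which matches the "up to scaled linear transforms" invariance mentioned in the talk, while independent random layers score much lower.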
and sort of comparing similarities between subspaces and then most recently there's been this paper Similarity of Neural Network Representations Revisited by Kornblith, Norouzi, Lee and Hinton that's broadly proposing a kernel based similarity measure and one quick note about sort of all of these is that I think performing these sort of similarity comparisons is a very interesting way to try and get at what", "start_timestamp": "00:23:39", "end_timestamp": "00:24:11", "start_second": 1419, "end_second": 1451, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1419s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "your neural networks are doing it's kind of useful for interpretability and it has interesting consequences for things like compression and for model ensembling and sort of all these papers including our own are interesting but I think there's a lot of scope for doing things in a more formal and a more principled way so although many of these methods are built off of sort of principled techniques a lot of the ways in which we apply them are indeed heuristic and I don't think we can claim we fully understand their", "start_timestamp": "00:24:11", "end_timestamp": "00:24:37", "start_second": 1451, "end_second": 1477, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1451s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "limitations or where best to use them so I think there are a whole bunch of interesting questions in this space but okay so going back to transfer learning what are we going to do well we're gonna do a very simple experiment we're gonna train a bunch of networks from pre train weights we're gonna train a bunch of networks from
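The kernel-based measure from that last paper has a particularly compact linear form; this is a sketch of linear CKA as I understand it from that paper, not code from this talk, with names of my own choosing.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between activation matrices (rows = examples,
    columns = neurons). Invariant to orthogonal transforms and
    isotropic scaling of either representation."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return float(num / den)
```

Unlike the CCA score, this measure is not invariant to arbitrary invertible linear maps, only to rotations and isotropic scaling, which is part of the design debate between the two families of measures.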
random initialization then we're going to apply CCA to just see how similar they are to each other we want a baseline so CCA is gonna give us these similarity scores but we want some kind", "start_timestamp": "00:24:37", "end_timestamp": "00:25:04", "start_second": 1477, "end_second": 1504, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1477s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "of a baseline to compare these two and so we're also going to look at the similarity scores we get when we train a population of networks from different random initializations and apply CCA there so here's what the results look like so along the x-axis are sort of different architectures these blue points are what you get from doing this comparison from networks trained from different random initializations and these yellow points are what you get when comparing pre trained networks the networks trained from random", "start_timestamp": "00:25:04", "end_timestamp": "00:25:31", "start_second": 1504, "end_second": 1531, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1504s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "initialization yeah so you do see different architectures here but ah good point no we didn't do that and that would be kind of an interesting thing to study definitely so but yeah so we just kind of stayed within an architecture but kind of the takeaway is that these blue points are sort of higher up than these yellow points what does that mean well it means that models trained from random initializations seem to be more similar to each other representationally than models trained from pre", "start_timestamp": "00:25:31", "end_timestamp":
"00:26:06", "start_second": 1531, "end_second": 1566, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1531s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "trained weights and transfer learning so even though we're seeing the same performance there is something different happening at the representational level so yep the different circles actually correspond to different networks so we train multiple networks and then we just sort of performed these comparisons yeah so it's averaged over the layers um multiple I mean I can tell you offline but sort of slightly different layers for different networks because their architectures are", "start_timestamp": "00:26:06", "end_timestamp": "00:26:35", "start_second": 1566, "end_second": 1595, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1566s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "a bit different but yeah taking a few layers at different stages in the network performing this comparison and then averaging we have two distributions and I get a bunch of samples from one population and some examples of the other you know like basically something I'm wondering like any statistic is going to be kind of you know pertaining to a population but you know this number of like thirty four and a half versus 36 is this enough to tell me something about I think it is hard to compare across architectures at least", "start_timestamp": "00:26:35", "end_timestamp": "00:27:24", "start_second": 1595, "end_second": 1644, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1595s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail":
"https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "the way we did this experiment just because like we looked at different layers and things and we'd have one point is CCA similarity actually telling us a lot about transferability I think well not about transferability but I think it is actually telling us something about what's similar versus what's not similar like I think that was the whole point of like kind of having this sort of baseline comparison I guess in blue and I think the fact that we see and nobody tried like multiple networks for that so", "start_timestamp": "00:27:24", "end_timestamp": "00:27:48", "start_second": 1644, "end_second": 1668, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1644s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "I think the fact that we are seeing that the blue things are higher like in most cases is telling us that there is more similarity there we train on the same training data exactly yeah because we're interested in lots of questions okay we're interested in yeah sort of seeing what this looks like once we've trained on the medical data yes oh yeah so like we're not done making the conclusion yet but sort of the first thing we saw was that like performance is similar and so like hypothesis one is like it kind of", "start_timestamp": "00:27:48", "end_timestamp": "00:28:28", "start_second": 1668, "end_second": 1708, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1668s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "doesn't matter how you initialize and like they're actually all doing the same thing all the way through so on top they're clearly
similar because performance is similar but like we don't know what's happening in between so then we try this analysis and then it looks like different things are happening in between but there's kind of more coming yep CCA assumes that the inputs are like linear you take a linear combination I was wondering if you tried like deep CCA where you take a nonlinear or why did you make the linearity", "start_timestamp": "00:28:28", "end_timestamp": "00:29:00", "start_second": 1708, "end_second": 1740, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1708s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "assumption yeah I mean that's a good question you could definitely try deep CCA like I mean we have to have some kind of I guess place where we want to say something is like you know where we kind of conclude that things are not that similar to each other representationally and we thought linear is like kind of a good proxy because you know layers kind of operate linearly and so like you know things are sort of within linear transforms of each other it seems like a reasonable kind of call to say okay that's sort of somewhat", "start_timestamp": "00:29:00", "end_timestamp": "00:29:25", "start_second": 1740, "end_second": 1765, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1740s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "similar whereas yeah I think the nonlinear comparison could also be interesting but yeah you need to know exactly when to call things similar and not oh man so many questions okay I'll take one more question and then maybe move on did you have a question me yep okay Jack
maybe we'll chat more offline okay yep similarity with respect to layers yeah so that's an interesting question and what I'm about to get to so the answer is like I think for like some models you see that but not for not", "start_timestamp": "00:29:25", "end_timestamp": "00:29:57", "start_second": 1765, "end_second": 1797, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1765s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "for not for others and so that is about to come up yeah so okay so like kind of so we saw performance is similar and then a hypothesis says maybe they're just all doing the same thing and that doesn't seem to quite be the case and now like kind of let's look further into that and to look further into that let's do something really simple so these are the actual filters from the first convolutional layer of ResNet that we initialize with pre-trained weights so you take your ResNet you pre train it on image net and this", "start_timestamp": "00:29:57", "end_timestamp": "00:30:27", "start_second": 1797, "end_second": 1827, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1797s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "is what the filters look like and sort of you know true to kind of the community's expectations you see all of these really nice Gabor filters come up so yeah so not the weights CCA only operates on like activations like I think it's difficult to sometimes make direct comparisons between weights because your weights can look really different but I think functionally is what's like more interesting to us like I mean there's like the standard experiment where you have
like a ground truth", "start_timestamp": "00:30:27", "end_timestamp": "00:31:07", "start_second": 1827, "end_second": 1867, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1827s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "neural network and then you train something to mimic that but like even there the weights are not going to look that similar but kind of output-wise is yeah what we're looking at well yeah so we're interested like whether there are kind of functional similarities in these like the actual outputs and so that's what we study by making your similarity measure parameterized by the examples not by the weights it factors out all these invariances and permutations and stuff like that inside", "start_timestamp": "00:31:07", "end_timestamp": "00:31:34", "start_second": 1867, "end_second": 1894, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1867s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "of a network and like you mentioned there are lots of papers recently that find all these relations between the example mapping or the Gram matrix or something yeah like I think almost all of these like even the papers doing kernel based measures yeah like kind of thinking about it in terms of examples I think is kind of really helpful for doing these sort of similarity measures but okay so we're gonna do something even simpler let's just look at the filters okay so these", "start_timestamp": "00:31:34", "end_timestamp": "00:31:58", "start_second": 1894, "end_second": 1918, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1894s", "title": "Towards Understanding
Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "are the filters from conv one that we've initialized from having pre-trained freshly off of imagenet look at all these gorgeous Gabor filters what happens when we when we train this network so we train it on this medical data and well after training it actually looks kind of similar so so maybe that just means that you know these like these kind of Gabor filters are perfect for this medical data okay now let's see what happens when we do the same thing from random initialization so this is what our our network looks like when we", "start_timestamp": "00:31:58", "end_timestamp": "00:32:26", "start_second": 1918, "end_second": 1946, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1918s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "randomly initialize it and again we're gonna train it on this medical data so we do that and oh man this actually also looks somewhat similar so so so what is going on because so over here like maybe you could say oh there's interesting feature reuse happening but but you're also seeing the same stuff for random initialization okay so that's like our that's that's ResNet now we've trained a whole bunch of other architectures and some of them are much smaller so what happens here well here's one of our very small architectures it's maybe one-tenth", "start_timestamp": "00:32:26", "end_timestamp": "00:32:56", "start_second": 1946, "end_second": 1976, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1946s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "the size of the resonant 
so here it is initialize with it's nice imagenet weights and what happens after training well it actually changes dramatically and then again if we look at it at random initialization and then look at it off your training again it changes significantly your connections a few dozen skip connections activation I mean we should interpret them by adding you know having you across layers or something like the real representation is and it depend on all the layers not so this is just yeah so so because the", "start_timestamp": "00:32:56", "end_timestamp": "00:33:26", "start_second": 1976, "end_second": 2006, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1976s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "architectures are different we looked at Khan Wan specifically for that because it's sort of below all of that now you are getting feedback from kind of the skip connections that you aren't getting but I think this this is also true for for say like something like Inception as well I'm just sort of showing showing resonate here and I think I think what's really going on is to do with sort of to do with the size of these models so yeah so two quick points one point that's kind of interesting is that everyone loves Gabor filters but these models are", "start_timestamp": "00:33:26", "end_timestamp": "00:33:52", "start_second": 2006, "end_second": 2032, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2006s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "not actually the smaller model which is changing a lot does it actually really seem to be learning at least the classical Gabor filters and so here are like places where there's a good more filter and it's actually 
erased the Gabor filter so like here's another place where it's kind of Gabor filter erased Gabor filter erased and then the other point relates to what was already brought up which is that like what we observe through kind of further experiments is that I think the size of these architectures", "start_timestamp": "00:33:52", "end_timestamp": "00:34:15", "start_second": 2032, "end_second": 2055, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2032s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "is actually kind of really impacting what you see during this fine tuning process so like the kind of picture we have in our head is sort of maybe so random initialization and pre-trained weights take us to sort of very different parts of the space but sort of somehow when your model is kind of large and these imagenet architectures are indeed in some sense seem to be large for these medical tasks maybe you just don't sort of move as much whereas when you have sort of smaller models you sort of change a lot", "start_timestamp": "00:34:15", "end_timestamp": "00:34:44", "start_second": 2055, "end_second": 2084, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2055s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "more now we've you know throughout this workshop we've seen lots and lots of interesting work on the neural tangent kernel and sort of thinking about these infinite width limits in the kernel regime versus the sort of not the deep regime but like at least to my kind of high level understanding it's not a direct mapping to what we're seeing here so I think it's kind of interesting to sort of study this
further and try and understand try and make this connection because there's probably some kind of a connection and", "start_timestamp": "00:34:44", "end_timestamp": "00:35:12", "start_second": 2084, "end_second": 2112, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2084s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "sort of understanding why this is happening would be really interesting okay so um final point is that in the paper we have a lot more kind of work on sort of broadly thinking about similarity and reuse there are other interesting things we see like we can show for our larger models our similarity at initialization can be pretty predictive of similarity post training we can also like kind of look at how much feature reuse is happening and this interesting co-adaptation problem which I'm happy to chat about more offline but you know I", "start_timestamp": "00:35:12", "end_timestamp": "00:35:44", "start_second": 2112, "end_second": 2144, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2112s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "think time is running out and I don't want to I want to make sure we get to lunch on time so I wanted to end with I think one of the most interesting observations we saw during this entire set of experiments so one thing we observe again and again across different architectures and across our different setups is that when you train with pre-trained weights versus training from random initialization there is a huge difference in convergence speed so this yellow line here is what you get when you look at", "start_timestamp": "00:35:44", "end_timestamp": "00:36:12", "start_second": 2144, "end_second": 2172,
"url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2144s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "sort of your training curve with pre trained weights and this blue line is what you see with random initialization if you sort of extend these far enough out they basically converge to the same value but there's this sort of huge difference in how quickly they converge and you know when you first see this plot you might think oh well this means that transfer learning is doing its job like you know you've learned some useful features and you're sort of reusing them and that's why you're converging faster", "start_timestamp": "00:36:12", "end_timestamp": "00:36:37", "start_second": 2172, "end_second": 2197, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2172s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "but we've also seen a lot of counterintuitive results like larger models are sort of maybe a bit lazier and just don't move as much and it's still kind of not fully clear exactly how much feature reuse is happening in this process and so we tried an experiment to try and understand why we see this difference in convergence speeds and the experiment is very simple so we decided to initialize by drawing weights IID from sort of the same distribution as random initialization but rescaled to match the", "start_timestamp": "00:36:37", "end_timestamp": "00:37:08", "start_second": 2197, "end_second": 2228, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2197s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail":
"https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "pre trained weights so sort of here's what this concretely would look like if you initialize those pre trained weights you'd look something like this if you initialize with random initialization you of course destroyed all the features and it looks something like this and then this thing which we call the Mean Var init it looks exactly like random initialization except that its scaling is different because you sort of rescaled so how does this do if we initialize with this and train instead turns out that it actually", "start_timestamp": "00:37:08", "end_timestamp": "00:37:34", "start_second": 2228, "end_second": 2254, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2228s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "helps a lot with convergence speed and we saw this across different architectures and across our different tasks and so what's really interesting here you took the mean and variance from the imagenet pre trained weights exactly and it was per layer so not across the entire architecture that wouldn't make sense but sort of per layer take this and then initialize and so what's really interesting is that yeah this is a feature independent property because we're sampling IID we've kind of destroyed all of the", "start_timestamp": "00:37:34", "end_timestamp": "00:38:03", "start_second": 2254, "end_second": 2283, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2254s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "features and we're only kind of keeping the scaling but these are just both for the two
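A minimal sketch of the Mean Var init as described in the talk: draw weights IID like a random init, but rescale per layer so the mean and standard deviation match the corresponding pre-trained layer, destroying the feature structure while keeping only the per-layer scaling statistics. Names here are my own, and the actual experiments presumably applied this layer by layer inside a full network.

```python
import numpy as np

def mean_var_init(pretrained_layers, rng=None):
    """pretrained_layers: list of per-layer weight arrays from a
    pre-trained network. Returns new layers sampled IID from a normal
    distribution matching each layer's mean and standard deviation."""
    rng = np.random.default_rng() if rng is None else rng
    return [rng.normal(loc=w.mean(), scale=w.std(), size=w.shape)
            for w in pretrained_layers]
```

Because the samples are IID, any feature reuse is impossible by construction, so a convergence speedup from this init can only come from the scaling, which is the point of the experiment.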
large-scale medical imaging tasks like that and when we already know the relative order of magnitude of the amount of data on source versus target tasks because uh a dramatic impact here like did you look at like BERT did you do all the stuff we were doing the exact same things in 2014 but instead of training on the billion word dataset we were training on like penn treebank or something like this and", "start_timestamp": "00:38:03", "end_timestamp": "00:38:51", "start_second": 2283, "end_second": 2331, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2283s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "that difference like the discrepancy like and I've seen I know at least internally at Amazon I have some colleagues we're doing some stuff that was like how many so many unsupervised examples like how am I choosing it by the cost of like how much would I pay to get a million unsupervised examples versus a hundred extra labeled ones or something like that and these numbers you know could be it could be a very big difference so I wonder with 1 million versus two hundred thousand are sort", "start_timestamp": "00:38:51", "end_timestamp": "00:39:18", "start_second": 2331, "end_second": 2358, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2331s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "of on the same order of magnitude and that's why you see yeah so actually this specific experiment we also so like I'm not covering this here but we tried a bunch of things where we varied the data and so it's interesting but this is actually pretty robust to varying the data so you see the same sort of like convergence like you actually see 
a speed-up even when your data is much smaller one thing you do see is that with these really large imagenet architectures if you have something as small as like five thousand data points", "start_timestamp": "00:39:18", "end_timestamp": "00:39:42", "start_second": 2358, "end_second": 2382, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2358s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "that's when you see like a little bit more of a gap between transfer learning versus random initialization maybe it's like two percent but then by the time you've gotten to fifty thousand examples that gap is like almost gone and and like there's really no reason why you'd a priori you want to use like kind of imagenet sized architecture on like 5000 examples like I think that's sort of like also like kind of yeah merits sort of further study we've been talking about over parametrizations I think that's like an interesting", "start_timestamp": "00:39:42", "end_timestamp": "00:40:07", "start_second": 2382, "end_second": 2407, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2382s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "related question but yep the main point here is that this is like this is just purely a feature like a property of scaling and it's kind of a purely feature independent property and so I think there are a whole bunch of open questions here especially related to this kind of scaling part so specifically is there sort of maybe some scaling rule that explains this convergence speed up we looked at this a little bit but sort of not extensively and there are differences but I think it seems like it should be possible to", "start_timestamp": "00:40:07", 
"end_timestamp": "00:40:31", "start_second": 2407, "end_second": 2431, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2407s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "maybe pin this down and then I think we also did some very preliminary experiments on natural images and I think we're seeing sort of similar effects and there are kind of interesting questions we can all skier like you know if we train and then sort of reinitialize but just preserve the scale like you know do you see a difference in convergence speed that's like one of the basic questions we could try and try and answer and then I think there are sort of other questions that also came up through this process sort", "start_timestamp": "00:40:31", "end_timestamp": "00:40:55", "start_second": 2431, "end_second": 2455, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2431s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "of like kind of maybe really kind of getting at sort of similarity of representations at initialization versus after training and sort of how I do things vary between large and small models because at least I mean looking at the weight certainly they're learning very different filters so sort of understanding that better would be really interesting and then I think there's also kind of scope here to actually formalize things a little more so I'm sort of seeing medical imaging partially also as this way of seeing a", "start_timestamp": "00:40:55", "end_timestamp": "00:41:21", "start_second": 2455, "end_second": 2481, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2455s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": 
"https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "place where transfer learning does provide some benefits some convergence benefits but in a scheme where like the the source and target distribution are extremely different from each other and so just understanding better what might be happening there and maybe saying something formal there could also be very interesting and with that thanks for thanks for coming [Applause] oh yeah absolutely I mean like honestly if you look at how much how long you train rent from random initialization versus training on imagenet plus", "start_timestamp": "00:41:21", "end_timestamp": "00:42:09", "start_second": 2481, "end_second": 2529, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2481s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "training on the medical data like training from random initialization is going to be way faster like this but this question makes scent like sort of the reason this question is very important is because people are just downloading their models from github and then sort of just doing the fine-tuning so at that point you're like oh well you know this thing is readily available so do I want to do this or do I want to do do something else yeah yep now we didn't do 200,000 so I can chat with you offline about that but you don't need to", "start_timestamp": "00:42:09", "end_timestamp": "00:42:42", "start_second": 2529, "end_second": 2562, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2529s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "do 200,000 - to kind of get a good similarity measure you can do something smaller than that the trade-off there is 
between sort of like the number of data points you're using and sort of the actual number of like kind of vectors you're trying to find similarities over this is why people so so okay so I think there are two parts to this firstly like I think there are lots of people like using transfer learning simply because they've seen other people do it and kind of you train it and then you can do this process and it sort of", "start_timestamp": "00:42:42", "end_timestamp": "00:43:26", "start_second": 2562, "end_second": 2606, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2562s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "works and you're like oh great but I think I think in sort of settings where people are doing extensive experimentation like I know at Google like part of the reason transfer learning is popular is I mean they have resources but you're still trying to run a lot of experiments and part of the reason transfer learning is popular it's because of this speed up you see in convergence I think like you have to be a little careful about this because um part of the reason I think you also see this speed up in convergence is because", "start_timestamp": "00:43:26", "end_timestamp": "00:43:51", "start_second": 2606, "end_second": 2631, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2606s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "you're also committed to this imagenet architecture so like in the paper we sort of studied this further but you know I think like where meaningful feature reuse is happening if it is happening is really in the lower layers and so one way to get sort of similar speed ups but maybe have a better architecture is like 
you kind of just reuse some of the weights and then you sort of reinitialize stuff and sort of train that away and this ties into like kind of all kinds of other interesting questions so like there's been this", "start_timestamp": "00:43:51", "end_timestamp": "00:44:17", "start_second": 2631, "end_second": 2657, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2631s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "interesting point in in deep learning about this co-adaptation problem which is suppose I have an initialization but I only keep part of it and then I kind of reset everything else how will those work together and and it's kind of interesting because for some settings that's been a problem for this it doesn't appear to be a problem so that's also another interesting question to study I think yeah yep yeah absolutely so I mean I guess I guess Zack and I were we like I guess this is a discussion we mentioned briefly earlier so if you have a very", "start_timestamp": "00:44:17", "end_timestamp": "00:45:02", "start_second": 2657, "end_second": 2702, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2657s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "o3y1w6-Xhjg", "text": "very small amount of data you will see a bit of a difference so I think we had to get to five thousand data points on the image net architectures at least and so there we saw maybe like a two percent difference instead of like a fraction of a percent difference but then by the time we got up to like fifty thousand data points that kind of that difference was really gone and then when we tried on a much smaller architecture so bear in mind before we were training this enormous image and net architecture so we 
tried on a much smaller architecture", "start_timestamp": "00:45:02", "end_timestamp": "00:45:26", "start_second": 2702, "end_second": 2726, "url": "https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2702s", "title": "Towards Understanding Transfer Learning with Applications to Medical Imaging", "thumbnail": "https://i.ytimg.com/vi/o3y1w6-Xhjg/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "hi there if you play chess you'll probably recognize the following moves as illegal in the top row pawns move two squares at a time while they are not on their home row in the bottom row you'll see a pawn moving backwards and another one moving sidewards even so in classical chess these moves are illegal but there are variants of chess where these moves aren't illegal where they are actually explicitly part of the rules these are alternate chess rules and this paper is about exploring those rules what happens if you", "start_timestamp": "00:00:00", "end_timestamp": "00:00:36", "start_second": 0, "end_second": 36, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=0s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "implement those rules how does the game play change and what can we learn for general games so the paper here is called assessing game balance with alpha zero exploring alternative rule sets in chess by nenad tomasev ulrich paquet demis hassabis and vladimir kramnik uh the former three of deepmind and the latter was the world chess champion for these eight years depicted so the paper tries to bring together two different worlds first it is the chess world so a lot of this paper is explicitly about the game of chess if you don't play", "start_timestamp": "00:00:36", "end_timestamp": "00:01:20", "start_second": 36, "end_second": 80, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=36s", 
"title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "chess or if you occasionally play chess like myself this might not be the most interesting paper though it contains some really interesting kind of bits the other world is the reinforcement learning world which you'll see in the alpha zero name right here so the reasoning behind this is the following chess is a really really old game and rules have evolved over time and have sort of consolidated on the rules we have today but also strategy has evolved over time and lots and lots of thinking and theory has gone into the strategy of chess", "start_timestamp": "00:01:20", "end_timestamp": "00:02:01", "start_second": 80, "end_second": 121, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=80s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "and to change the rules around um you can change the rules of chess however you can't really assess how the game would be played by humans uh if the rules were changed because you don't have a thousand years of the entire humanity studying these new rule sets and therefore you're kind of stuck with assessing the games from the perspective of someone who has learned the old rules but reinforcement learning to the rescue so consider the following rule changes no castling this is a really simple rule change no castling castling is disallowed", "start_timestamp": "00:02:01", "end_timestamp": "00:02:44", "start_second": 121, "end_second": 164, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=121s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": 
"https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "throughout the game if you don't know what castling is castling is like a special move where there is this rook and the king is right here i don't know how to the king and if there's nothing in between they can sort of swap positions it's called castling uh it's a special move that you can do and it allows you to bring the king to the outside where the king is safe and to bring the rook to the inside where it can potentially cause a lot of damage so it's a very very favored move by a lot of players and no castling the rule change", "start_timestamp": "00:02:44", "end_timestamp": "00:03:20", "start_second": 164, "end_second": 200, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=164s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "probably alters the game a lot because if you think of the chess board kings start about here they can only move one square at a time so to get them to safety will require like four or five um steps for them while you have to move everything else out of the way including the rook that stands here so players might elect to just leave their kings where they are but then they can't really open up in the middle as much because that would leave their kings exposed so it is fair to assume that just introducing this one rule might", "start_timestamp": "00:03:20", "end_timestamp": "00:03:57", "start_second": 200, "end_second": 237, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=200s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "change the games around quite a bit how the game is played but as we said we don't 
know this is from someone who has learned classic chess and all the grandmasters that we have have played and learned classic chess so how do we assess this this paper says that alpha zero can be used to assess these new rules so alpha zero is a reinforcement learning algorithm that can learn these board games very very quickly in within one day or so and it can learn them so well it can beat humans at the game easily in fact modern", "start_timestamp": "00:03:57", "end_timestamp": "00:04:39", "start_second": 237, "end_second": 279, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=237s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "modern grand masters and so on use these algorithms in order to learn and to better their play in order to expand their theory their knowledge of the game to play better against other humans so alpha zero imagine alpha 0 can solve a game to perfection what we could do is we could simply give this rule to alpha 0 together with the all the other chess rules and then let alpha 0 solve the game give it a day and 50 billion gpus solve the game to perfection and then look at what alpha zero came up with kind of look at the games how they turn", "start_timestamp": "00:04:39", "end_timestamp": "00:05:19", "start_second": 279, "end_second": 319, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=279s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "out and um whether or not they are more interesting less interesting longer shorter and so on so that's that's what this paper does so there's the implicit assumption which you need to believe in order to believe anything in this paper is that alpha zero 
actually has this ability there is pretty good evidence that it does because alpha zero can solve classical chess and go and shogi and a bunch of other board games um all with the same hyper parameters it can solve them such that it is easily at superhuman power so but you need to recognize that", "start_timestamp": "00:05:19", "end_timestamp": "00:06:01", "start_second": 319, "end_second": 361, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=319s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "this is an assumption so what is alpha zero if you don't know what alpha zero is alpha zero is a reinforcement learning algorithm but not in the kind of base reinforcement learning sense it is a reinforcement learning algorithm that has a planner included what do i mean by this so if you are in a let's consider the game tic-tac-toe so alpha zero for tic-tac-toe in tic-tac-toe you have this board and you have a situation where let's say you play your opponent plays this and now you're tasked with playing something you wonder should i play maybe here or", "start_timestamp": "00:06:01", "end_timestamp": "00:06:40", "start_second": 361, "end_second": 400, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=361s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "here or here where should i play so what you can do is you can train a reinforcement learning algorithm you can do q learning what not okay that will maybe work what's better to do is you can plan so in planning what you want to do is you want to build a tree of possibilities so we're going to consider all your possibilities and in this case you have eight possibilities so we want to consider all 
the eight possibilities and i'm going to draw just some of them so up here you're going to consider the possibility that", "start_timestamp": "00:06:40", "end_timestamp": "00:07:15", "start_second": 400, "end_second": 435, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=400s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "you place here and here you're gonna consider the possibility that you place in a different spot right here okay and you can see how this goes so if you want to plan and here you have your opponent has seven possibilities and here your opponent also has seven possibilities and so on so you get this entire tree of play but if you could do that and if you could do that to the end then you could easily simply choose the path here where you win okay where um no matter what your opponent does you win you can find such a path if it is", "start_timestamp": "00:07:15", "end_timestamp": "00:07:56", "start_second": 435, "end_second": 476, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=435s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "possible at all to win which is not in tic-tac-toe right if everyone plays optimally it results in a draw but let's say you could win you could choose the path that gives you the best result and that's it there's no learning involved okay so alpha zero works with a planner and planners usually construct a tree so in an abstract way you are in a situation and you consider all your options and with all your options you consider again all your options and so on and you do a tree search now this tree in tic-tac-toe it's", "start_timestamp": "00:07:56", "end_timestamp": "00:08:32", 
"start_second": 476, "end_second": 512, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=476s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "already huge as you can see um in something like chess it is way way huger okay and therefore it's not possible to actually search the entire tree because you need to consider every single possible future situation from the board position where you're in right this here is the board position where you're in and this is the future the entire future of the game so every single possibility so alpha zero uses this thing called a monte carlo tree search it has several components so it's first component and they right here they have", "start_timestamp": "00:08:32", "end_timestamp": "00:09:14", "start_second": 512, "end_second": 554, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=512s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "a description and it's very short alpha zero this is alpha zero this is what it does it's like this is almost comically short so what you do is you put your state so s is your state okay s is it's the board as you have it right now okay this here that's this is s okay you put this into a neural network and the neural network gives you two things first of all it gives you p and and v so that's the second thing so v will simply give you a number v will tell you that this thing right here is about a plus 0.5 maybe so it says", "start_timestamp": "00:09:14", "end_timestamp": "00:10:03", "start_second": 554, "end_second": 603, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=554s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule 
Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "so plus one is winning and minus one is losing uh and it is this is called the value so maybe it says well this position i'm going to expect you to win uh roughly 75 percent of the time right which in expectation would be a value of positive 0.5 here because 75 percent of the time you win and the rest you lose let's say there is no draw in tic-tac-toe so there's this value function and the second thing is this p and the p is a policy function so the p will and i've drawn this a little bit maybe not super super duper", "start_timestamp": "00:10:03", "end_timestamp": "00:10:47", "start_second": 603, "end_second": 647, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=603s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "too large but the p will tell you for every possible move you could make which one should you consider even okay so it maybe it assigns this here a point three and this here a point four but this here is like a point zero zero zero one and so on so for every possible move that you could do it will assign a number and it's a distribution so these numbers add up to one but that's not important it tells you which moves you should even consider going forward right so p in this case is a distribution over the next moves", "start_timestamp": "00:10:47", "end_timestamp": "00:11:27", "start_second": 647, "end_second": 687, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=647s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "and with those two things together we can reduce our tree search 
quite a bit so now instead of expanding all the tree let's go back to the tree right here you can ask your p hey p which one of these three should i even consider and maybe p says you should only consider those two okay and then you go down and again you ask your p hey p which one should you consider and p maybe says well here you should consider those two here you should only consider that this one and this three over here we've we've already discarded this", "start_timestamp": "00:11:27", "end_timestamp": "00:12:05", "start_second": 687, "end_second": 725, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=687s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "from the beginning okay so this p right here it guides your search it tells you at each point which moves should you consider and this as you can see reduces your tree dramatically in fact what alpha zero does is it simply says you have one second of time now expand as much as you can in this tree uh given this one second uh of of of time budget and the second thing is the value so what you would have to do expanding the tree is always to go to the end right so you always go to the end where at the end you have a fully filled", "start_timestamp": "00:12:05", "end_timestamp": "00:12:47", "start_second": 725, "end_second": 767, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=725s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "board i don't know here x so you consider every possible situation okay here maybe this this player wins as you can see you always have to go to the end but in our case we don't want to always go to the end we'd rather explore more into like more 
branches than always go to the end and this is where the value comes in so at some point you simply say now i'm deep enough and now i'm going to ask my value v now there are slight differences with respect to alpha go and alpha 0 and so on but they all have in common that they", "start_timestamp": "00:12:47", "end_timestamp": "00:13:28", "start_second": 767, "end_second": 808, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=767s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "estimate the value of the intermediate nodes using this v model from over here um i have v s v was green so they use this v model from over here to estimate at a certain depth so v learns to look into the future so everything that can happen from here and it estimates and it says well from here you maybe have a you know a 0.5 value or maybe a negative 0.7 and so on so v learns to assign these values to situations to states which are these nodes right here and p learns to suggest things to expand that's alpha zero", "start_timestamp": "00:13:28", "end_timestamp": "00:14:13", "start_second": 808, "end_second": 853, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=808s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "and then at the end if you've expanded the tree enough and estimated well then you have a pretty good idea what's going to happen in each of the branches that you considered right in each of these branches you look into the future um from you here you look into the future here look into the future by doing this pv play and after one second after you've done you know a couple of hundred or thousand or however many uh looks into the future 
then you have a pretty good idea for each of the top level actions what's going to happen", "start_timestamp": "00:14:13", "end_timestamp": "00:14:49", "start_second": 853, "end_second": 889, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=853s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "in the future and you can simply pick the one that has the best future for you according to your own model so that's what alpha zero does not so this is how you combine planning and neural networks you want to do planning but you can't because you can only go so deep so you use neural networks to first of all reduce the number of branches you consider because the neural network will tell you which ones are worthy to even look at and second of all you don't always have to plan to the end because you can simply", "start_timestamp": "00:14:49", "end_timestamp": "00:15:22", "start_second": 889, "end_second": 922, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=889s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "ask your neural network how much an intermediate state is worth in expectation and this turns out to be pretty good why don't we do this for every single problem well we do for this we do need a simulator so you may recognize that right here i said we consider all the possible actions that we have and for each action we know exactly what's going to happen this is only possible like in a board game it's not even possible in like a board game where you have a a die to roll or a card to draw anything that is random there there is a way to", "start_timestamp": "00:15:22", "end_timestamp": "00:16:00", "start_second": 922, 
"end_second": 960, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=922s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "include this right here but in this simple formulation we need to know exactly with 100 certainty what is going to happen if we take a particular action so this is only really applicable for the types of full information board games where we can write simulators that are pretty fast right and even then um even though chess you know has lots of available actions and complications it's nowhere near the complexity of like a let's say a modern video game or even or the real world is is completely out of scope for now for", "start_timestamp": "00:16:00", "end_timestamp": "00:16:38", "start_second": 960, "end_second": 998, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=960s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "these types of things all right so that was alphago sorry alpha zero uh which builds on alphago of course and uh the rules of chess that we're going to consider using alpha zero are the following so there's no castling no castling for ten moves pawns can only move by one square forcing a stalemate is a win rather than a draw so you may know this in chess if you do not um checkmate the opponent's king but only put them put the king in a situation where it cannot move that's called that's considered a draw and i think even in the chess community", "start_timestamp": "00:16:38", "end_timestamp": "00:17:21", "start_second": 998, "end_second": 1041, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=998s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in 
Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "some people want to consider this a win there is torpedo where pawns can move by one or two squares anywhere on the board and semi torpedo where it's the same but only from the second and the third rank pawn back where pawns can move backwards and pawn sideways where pawns can move laterally by one square but captures are unchanged diagonally upwards and there is self capture where it's possible to capture one's own pieces so um there are you know slight slight details here with respect to the 50 move rule and so on", "start_timestamp": "00:17:21", "end_timestamp": "00:18:04", "start_second": 1041, "end_second": 1084, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=1041s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "but if you if you don't play chess simply consider these are in a lot of cases minor changes to the chess rules that make the new rules either a superset or a subset of the original rules but they are going to have quite some changes for the play and we're going to look at what happens so that's the entire research setup as you've seen it's alpha 0 applied to these new rule sets and under the assumption that alpha 0 will solve these will become master at these games which we can't verify we can verify in", "start_timestamp": "00:18:04", "end_timestamp": "00:18:46", "start_second": 1084, "end_second": 1126, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=1084s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "chess because right alpha zero can beat people that
have trained chess for all their life we can't verify it here so again this is an assumption so the first thing i want to look at here and this is going to play a little bit into my criticism of this paper it's a pretty cool paper but i do have some concerns right here is the following uh the following charts so they do we don't consider how you train alpha zero let's just say you can train it you know to whatever pretty good performance here is how they evaluate so they evaluate for", "start_timestamp": "00:18:46", "end_timestamp": "00:19:29", "start_second": 1126, "end_second": 1169, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=1126s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "each variant they do 10 000 games played at one second per move for each different chess variant so if you remember as we do our tree search right we expand the tree according to our p and we estimate the values according to our v and we do this for one second in this first thing so in one second maybe this here is the tree so we have some sort of an understanding of what's going to happen in the future you can imagine if we have more time then we can expand this tree more and get a much more accurate picture of what", "start_timestamp": "00:19:29", "end_timestamp": "00:20:09", "start_second": 1169, "end_second": 1209, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=1169s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "happens in the future okay so they do 10 000 games at one second per move but they also in addition play 1000 games at one minute per move so there's 60 times more time and you can imagine that we'll add quite
a number of nodes here and you know if if your p and v would be perfect then it wouldn't matter as much how much time you have as long as you sort of have enough time but since they're not going to be perfect since they're only neural networks they're not uh god or schmidhuber um they cannot extremely", "start_timestamp": "00:20:09", "end_timestamp": "00:20:55", "start_second": 1209, "end_second": 1255, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=1209s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "accurately predict the future so this planning the the more you plan the more you actually look into the future the bigger your tree becomes the better moves you make so on the left you see the distributions of wins losses and draws for one second per move and on the right for one minute per move so both white and black pieces here are played by alpha zero so it's not alpha zero against something else this is playing against itself and you can see in uh in classic chess it's it's quite it's quite saddening actually um", "start_timestamp": "00:20:55", "end_timestamp": "00:21:35", "start_second": 1255, "end_second": 1295, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=1255s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "that this game which is is so famous you can see that of 10 000 plays 8 820 end in a draw which means that if both players are super duper good and uh and and play you know play against each other it most likely is going to be a draw and this i think is the criticism even in human chess is that it's not really a decisive game in that it ends a lot of times in a draw so one of the
motivations here would be can we find a rule set that is maybe more decisive so that's one of the investigations they do in the paper but", "start_timestamp": "00:21:35", "end_timestamp": "00:22:20", "start_second": 1295, "end_second": 1340, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=1295s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "you can see that there are actually so if you consider this torpedo chess right here um there it is more decisive as you can see in more times either white or black wins right here um and there are others which are even less decisive like pawn back so when pawns can move back then uh players may just camp they like move a pawn forward and move it back again and that will lead to a lot of closed plays and so on whereas torpedo makes you move much faster you can advance your pawns much faster and that will probably lead", "start_timestamp": "00:22:20", "end_timestamp": "00:23:00", "start_second": 1340, "end_second": 1380, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=1340s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "to the end much faster so if you consider this on the right so what changed the rules didn't change alpha 0 didn't change it simply changed that we now let alpha 0 think for longer and you can see that the decisiveness reduces dramatically so whereas 88 percent resulted in a draw with one second per move now 98 percent result in a draw with one minute per move and this is a trend throughout these games and that's also what they say in the text it is reasonable to assume that if you let alpha zero plan for even longer that this trend will continue and", "start_timestamp": "00:23:00",
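The decisiveness figures quoted in this stretch of the video (8,820 of 10,000 classical self-play games drawn at one second per move, and a draw share rising toward 98 percent at one minute per move) come from plain win/draw/loss tallies; a small sketch of how such tallies become a draw rate and the "empirical score" convention used later, where a win counts 1 point and a draw half a point. Only the draw count is from the video; the win/loss split below is made up purely for illustration:

```python
def empirical_score(white_wins, draws, black_wins):
    """Empirical score for White over a set of games:
    a win counts 1 point, a draw 1/2, a loss 0, averaged over all games."""
    games = white_wins + draws + black_wins
    return (white_wins + 0.5 * draws) / games

def draw_rate(white_wins, draws, black_wins):
    """Fraction of games ending drawn -- the (in)decisiveness measure."""
    return draws / (white_wins + draws + black_wins)

# 8,820 of 10,000 one-second-per-move classical games were drawn (figure
# from the video); the 700 / 480 win split is an illustrative assumption.
print(draw_rate(700, 8820, 480))        # 0.882
print(empirical_score(700, 8820, 480))  # 0.511
```

An empirical score near 0.5 with a draw rate near 1 is exactly the "not really decisive" situation the commentary is describing.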
"end_timestamp": "00:23:45", "start_second": 1380, "end_second": 1425, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=1380s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "ultimately whatever rule set you make the result is going to be a draw um if two two let's say perfect players play against each other which is a bit which is a bit saddening right because um yeah that ultimately ultimately means that all of these rules aren't decisive it's only they're only decisive due to the fact that either um one or the other players is way better or or that in general that they are not they are not perfect um which is an appeal of the game but there are certainly games that are decisive even though both players", "start_timestamp": "00:23:45", "end_timestamp": "00:24:29", "start_second": 1425, "end_second": 1469, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=1425s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "are pretty high level i mean think of every every competitive video game um so yes so that's a bit of my criticism all of this all of this needs to be analyzed in the background that what's actually happening here is that we're dealing with imperfect decision making due to a limit in resources okay and this assumption now is already a little bit invalid right the assumption we made at the beginning why i pointed this out is that alpha zero can solve these games let's say to perfection and here when we analyze the", "start_timestamp": "00:24:29", "end_timestamp": "00:25:10", "start_second": 1469, "end_second": 1510, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=1469s", "title": "Assessing Game Balance with 
AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "decisiveness and so on it seems to be purely or largely a factor of um how much time alpha zero has to think about the moves and these two things to me they don't really go go together because we don't know if for a different rule set um you know the training is harder or might take longer and so on or that this exact one second makes a difference or not it's it's just um there are so many variables here and when you're dealing with let's say imperfect systems that are not trained to the end or evaluated in their", "start_timestamp": "00:25:10", "end_timestamp": "00:25:51", "start_second": 1510, "end_second": 1551, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=1510s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "full potential you're always dealing with the fact that you stopped each thing at some intermediate point and that intermediate where that intermediate point is can influence the results drastically now here it seems at least the ordering isn't changed by much but um yeah this is one let's say one criticism the other criticism here uh that that i would have again is the fact that if you consider something like torpedo where you can move much much faster then yes of course uh let's say i don't know is it more interesting", "start_timestamp": "00:25:51", "end_timestamp": "00:26:35", "start_second": 1551, "end_second": 1595, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=1551s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "that's 
that's the question right here so they look at a lot of things like decisiveness diversity and so on but the question is is it more or less interesting to play and i think that's what humans are really after and they're sort of trying to find proxies to this um i would argue if you play something like torpedo the game's maybe much faster and um so you you get to the end faster but also maybe might not be as interesting even though it's it's faster uh because your the complexity is is less and with respect to the decisiveness", "start_timestamp": "00:26:35", "end_timestamp": "00:27:13", "start_second": 1595, "end_second": 1633, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=1595s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "here so if you have a game that's faster um you also need to take this to into account because here is another thing that is sort of an arbitrary choice as moves are determined in a deterministic fashion given the same condition diversity was enforced by sampling the first 20 plies in each game proportional to their mcts visit counts so what does that mean that means that if you run alpha 0 on the same situation on the same tree sorry on the same board position it will always come up with the same move except for parallelism inconsistencies", "start_timestamp": "00:27:13", "end_timestamp": "00:27:54", "start_second": 1633, "end_second": 1674, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=1633s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "and so on but it will in you know in in a lot of times it will come up with the same move so how do you play 10 000 games because you can just play one game 
because each game will be the same because you simply tell alpha zero give me your best move right so it will just play its optimal strategy and all the games will be exactly the same so there's no reason why these should come out different so they enforce diversity by saying okay okay in the first 20 moves of a game we don't actually take the best move right usually you have", "start_timestamp": "00:27:54", "end_timestamp": "00:28:32", "start_second": 1674, "end_second": 1712, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=1674s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "you have this distribution at the end of the tree search you have a distribution where you say okay this move right here is clearly the best move i'm going to play this however if this is one of the first 20 moves of the game they say no we need a bit of diversity uh so we're going to sample according to this distribution rather than just play the best one now this number 20. 
it's just sort of decided arbitrarily right and if you consider something like torpedo it's a faster game so you're faster in opening faster make", "start_timestamp": "00:28:32", "end_timestamp": "00:29:10", "start_second": 1712, "end_second": 1750, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=1712s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "and you're faster to the end game maybe even though they say well the game length isn't affected this much it could just be that um you're faster in a situation where um you're kind of forced to do certain moves and maybe the difference in decisiveness here is simply a result of the combination of the faster uh moves in torpedo together with the fact that they just keep the 20 plies for each game again this is something that you need to consider when analyzing these results right here and there are a number of these choices", "start_timestamp": "00:29:10", "end_timestamp": "00:29:51", "start_second": 1750, "end_second": 1791, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=1750s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "um right here like the one second or one minute per move we sample for the first 20 plies before we play the max move and that's where i think the the results of the study right here they have rather limited interpretability if you if you ask me because um because of these of these choices now of course they're still the results are quite plausible believable and the idea is really cool to explore these rule sets but this was this is just my criticism right here so we'll go through the rest of the results pretty pretty", "start_timestamp": "00:29:51",
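The diversity mechanism being discussed here — for the first 20 plies, sample the move proportionally to its MCTS root visit count, and only afterwards play the most-visited move deterministically — can be sketched like this. A hypothetical sketch under stated assumptions: the dict-of-visit-counts format and the `select_move` name are mine, not the paper's actual interface:

```python
import random

OPENING_PLIES = 20  # the arbitrary cutoff the commentary is questioning

def select_move(visit_counts, ply, rng=random):
    """Move selection with enforced opening diversity.

    visit_counts: dict mapping candidate moves to MCTS root visit counts
    (an assumed format). For the first 20 plies we sample proportionally
    to the counts, so repeated self-play games differ; after that we play
    the most-visited move deterministically.
    """
    if ply < OPENING_PLIES:
        moves = list(visit_counts)
        # random.choices accepts unnormalized weights
        return rng.choices(moves, weights=[visit_counts[m] for m in moves])[0]
    return max(visit_counts, key=visit_counts.get)
```

Without the sampling branch, a deterministic agent would reproduce the same game every time, which is exactly why one game would suffice, as the transcript points out.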
"end_timestamp": "00:30:31", "start_second": 1791, "end_second": 1831, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=1791s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "quickly because a lot of people aren't chess enthusiasts and we'll just pick out kind of the core messages that the paper is trying to get across so here the table again with respect to decisiveness and you can see even uh for so for classic chess it's a white has a 50 this is the empirical score for white under different game conditions so 50.8 percent means most of the time it's a draw so white wins uh with a probability of 50.8 uh most of the time it's a draw and you see even like the most decisive variant torpedo right here", "start_timestamp": "00:30:31", "end_timestamp": "00:31:11", "start_second": 1831, "end_second": 1871, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=1831s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "is a 54 only um so they they analyze different defenses and how the decisiveness is with respect to different defenses that are not really popular under classical chess and the results are interesting if you play chess but i would say they're rather they're kind of aha okay if you do not play chess because they consider individual moves and so on what is an interesting part is um this right here where they look at they look at one move that in classical chess so e4 is a very very um popular opening where you move your e", "start_timestamp": "00:31:11", "end_timestamp": "00:32:02", "start_second": 1871, "end_second": 1922, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=1871s", "title": "Assessing Game Balance with 
AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "is at 54 percent only um so they they analyze different defenses and how the decisiveness is with respect to different defenses that are not really popular under classical chess and the results are interesting if you play chess but i would say they're rather they're kind of aha okay if you do not play chess because they consider individual moves and so on what is an interesting part is um this right here where they look at they look at one move that in classical chess so e4 is a very very um popular opening where you move your e", "start_timestamp": "00:31:11", "end_timestamp": "00:32:02", "start_second": 1871, "end_second": 1922, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=1871s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "pawn twice for white and nf3 is not a super popular opening and here they compare this in classic chess and in no castling chess so this thing right here is a histogram and the histogram shows you the log probability of opening sequences when you play the individual moves so what does this mean right here if you play e4 then the distribution is something like this which means that you have some sequences that have no entropy at all which means that once you play e4 and maybe one move more then it's almost it's almost determined", "start_timestamp": "00:32:02", "end_timestamp": "00:32:54", "start_second": 1922, "end_second": 1974, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=1922s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "what you have to do according to alpha zero you have like no choice except play these few next moves um however if you play nf3 then alpha zero says look this distribution is much more to the right which means that you have a lot more options here now again this could be because the move is actually less decisive because the move leads to more balanced more interesting situations where you can continue however you know with many choices it could also be because alpha zero simply doesn't know as well what to do because", "start_timestamp": "00:32:54", "end_timestamp": "00:33:35", "start_second": 1974, "end_second": 2015, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=1974s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw",
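The histograms described here measure how forced a continuation is via the log probability of opening sequences: near-zero entropy means essentially one mandatory reply, higher entropy means many plausible choices. A minimal sketch of that quantity for a single move distribution; the example probabilities are illustrative assumptions, not numbers from the paper:

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of a move distribution.

    0 means the next move is completely forced; larger values mean more
    plausible options, which is what the opening-diversity histograms
    are getting at. Zero-probability moves contribute nothing.
    """
    return -sum(p * math.log(p) for p in probs if p > 0)

forced   = [0.97, 0.01, 0.01, 0.01]  # an e4-style line: essentially one reply
balanced = [0.25, 0.25, 0.25, 0.25]  # an nf3-style position: several options
```

For these made-up distributions, `entropy(balanced)` is log 4 ≈ 1.39 nats while `entropy(forced)` is only about 0.17, matching the intuition that the nf3-style histogram sits further to the right.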
"text": "it leads to more complicated games and you get to give each move one minute to evaluate alpha zero might just not be as good in those situations because it leads to more complicated situations if it could search for longer maybe this distribution would shift over here just as well again we don't know because you only give this one second or one minute each time for both um and again this goes under the assumption of alpha zero as this perfect player however back to what they want to say here if you do this in no castling chess you", "start_timestamp": "00:33:35", "end_timestamp": "00:34:11", "start_second": 2015, "end_second": 2051, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=2015s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "can see that uh this spike right here are all the these berlin defense variants and castling this 0 right here is a big part of that line if you do this in no castling chest you can see that these two moves now the histograms overlap much more which means that and in fact you can see in the in this number of possible moves right here that they come closer together so not only does the blue shift to the right the orange actually shifts to the left and it basically means that whether you open with e4 or knight", "start_timestamp": "00:34:11", "end_timestamp": "00:34:50", "start_second": 2051, "end_second": 2090, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=2051s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "f f3 you are going to have about the same complexity of game the same number of moves available to you going from there as you can see right here these lines are the moves 
available for white and black under the different rule sets so in e4 here especially as black you do not have many moves available as white a little bit more but also not more um whereas in no castling you do so again small rule change uh big effect on the possible moves that you can consider and this is the type of this is the type of information", "start_timestamp": "00:34:50", "end_timestamp": "00:35:36", "start_second": 2090, "end_second": 2136, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=2090s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "that you would want to have when you design a game and they allude to this also at the end here in their conclusions so the last thing is they also compare the material values of the pieces here in the different rule sets as you might imagine so some pieces become much more or less valuable i find it particularly interesting that if you do something like pawn sideways where the pawns are much more powerful of course all the other pieces drop in value again these results are pretty plausible so i don't want to trash the", "start_timestamp": "00:35:36", "end_timestamp": "00:36:12", "start_second": 2136, "end_second": 2172, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=2136s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "paper right here because it seems like it seems like the the results are as i say plausible and can give some cool insights so the chess master also gives um gives his opinions on these different strategies that alpha zero comes up with for the different rules and let's go through the conclusions real quickly so they say assessing the
consequences of rule change in the game design process demonstrate on chess where we've trained alpha zero to evaluate nine different variants representing atomic changes to the rules", "start_timestamp": "00:36:12", "end_timestamp": "00:36:52", "start_second": 2172, "end_second": 2212, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=2172s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "of the game training alpha zero model on these rules changes helps us effectively simulate decades of human play in a matter of hours and answer the what if question what the play would potentially look like under developed theory in each chess variant we believe that a similar approach could be used for auto balancing game mechanics in other types of games including computer games in cases when a sufficiently performant reinforcement learning system is available and yes this is i mean this the application here would", "start_timestamp": "00:36:52", "end_timestamp": "00:37:25", "start_second": 2212, "end_second": 2245, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=2212s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "be for something like this if you design a new game then you want to know what you have some choice with how you can make the rules and you don't want to let humans become really good at each of the rules and then compare you can simply give this to the algorithm and the algorithm will tell you what kind of plays result from each rule set and then you can choose the one that you find most interesting or most uh maybe commercially viable and what not i actually see this much i see this bigger than just games and this 
alludes a bit to the", "start_timestamp": "00:37:25", "end_timestamp": "00:38:02", "start_second": 2245, "end_second": 2282, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=2245s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "salesforce paper on this ai economist i think we can let ai you know tell us what happens if we change for example uh things like tax policy or any any sort of policy i know humanity is very complex to model and so on and you're never going to have a perfect simulator which probably makes alpha zero not good but in limited situations like maybe also stock trading rules and so on you could definitely have situations where the rule set is too complicated to solve analytically but you could give it to an rl algorithm and see", "start_timestamp": "00:38:02", "end_timestamp": "00:38:44", "start_second": 2282, "end_second": 2324, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=2282s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "what happens and whether or not you like the outcome and whether or not there are any like obvious exploits that uh you did not see so this i find you know pretty it's it's a pretty cool approach and and we should think of this in the future as we build systems that have rules in whatever capacity be this games or policy so the they say okay yada yada yada we showed that there are several chess variants among those considered in the study that are even more decisive than classical chess meaning torpedo chess semi torpedo", "start_timestamp": "00:38:44", "end_timestamp": "00:39:21", "start_second": 2324, "end_second": 2361, "url":
"https://www.youtube.com/watch?v=O1b0cbgpRBw&t=2324s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "chess no castling chess and stalemate equals winches we quantified a rising diversity of opening play and the intersection of opening trees between chess variations showing how different the opening theory is for each of the rule changes yeah they again this this diversity of opening play it really rests on this assumption that alpha zero is a is a good player and any sort of an equally good player in all of these variants right because if it's worse in a variant it might not be as sure about the moves and that would just look like", "start_timestamp": "00:39:21", "end_timestamp": "00:39:57", "start_second": 2361, "end_second": 2397, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=2361s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "oh you have many possibilities but in fact alpha zero is just worse at it and it doesn't know so they also look at the intersection of opening trees like if you change a rule how does this change um change the the kind of how does this change the the initial game so a lot of these grandmasters they learn by heart all of these opening trees the initial moves of a game how much would they have to relearn there is a negative correlation between the overall opening diversity and decisiveness as decisive variants", "start_timestamp": "00:39:57", "end_timestamp": "00:40:33", "start_second": 2397, "end_second": 2433, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=2397s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": 
"https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "likely require more precise play with fewer plausible choices per move again this is one view right the other view is that there are rule sets that are just make it into a harder game and then alpha zero given the same amount of compute is a worse player and therefore it can't play as well therefore the games are less decisive and also the opening diversity is higher because it doesn't know if the game could be as decisive it might just be an effect of alpha zero for each of the chess variants we estimated yada yada okay no castling chess being", "start_timestamp": "00:40:33", "end_timestamp": "00:41:19", "start_second": 2433, "end_second": 2479, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=2433s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "the first variant that we analyzed has already been tried in experimental blitz grand master tournament in chennai as well as a couple of longer grand master games our assessment suggests that several of the assessed chess variants might be quite appealing to interested players and we hope that this study will prove to be a valuable resource for the wider chess community i yeah i don't know is is the chess community flourishing or going under recently because it seems to me like it once once a game is solved that hard", "start_timestamp": "00:41:19", "end_timestamp": "00:41:51", "start_second": 2479, "end_second": 2511, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=2479s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "O1b0cbgpRBw", "text": "by computers i mean it's still fun but um yeah i just i just i 
guess counter strike is also solved by bots real hard uh it's just impressive when humans play or so um yeah i don't know all of this is again if you're into chess look into this paper they have a lot of really interesting results that are not interesting to go into for the general community but i believe this should give you a good impression of what you could do if you design a system that is built on rules all right so this was it for this paper i hope you enjoyed this", "start_timestamp": "00:41:51", "end_timestamp": "00:42:34", "start_second": 2511, "end_second": 2554, "url": "https://www.youtube.com/watch?v=O1b0cbgpRBw&t=2511s", "title": "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/O1b0cbgpRBw/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "next we'll talk about affine spline insights into deep learning Mad Max it's great to be here can everybody hear me at the back okay yeah it's great to be here really looking forward to this meeting that is part of a program on foundations of deep learning and what I'd like to do is just talk a little bit about some of the progress we've been making trying to find a language we can use to describe what we're learning as we scratch away at these black box deep learning systems that have been", "start_timestamp": "00:00:00", "end_timestamp": "00:00:44", "start_second": 0, "end_second": 44, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=0s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "promised to catapult us into the future right and what I'm gonna argue is splines provide a very natural framework for both describing what we've learned but also providing us avenues for extending both the design and analysis of a whole host
of different deep learning systems and so I'm going to talk a little bit about a particular kind of spline today and then I'm gonna give you a whole bunch of examples of how we've been using it to describe and extend and of course we're not the first people to think about the", "start_timestamp": "00:00:44", "end_timestamp": "00:01:24", "start_second": 44, "end_second": 84, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=44s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "relationship with deep nets or neural nets and splines this goes way back to before the previous deep net winter but I think that we have you know identified a collection of particularly useful splines for modern deep nets okay so let's just jump in and talk about the basic set up so we all know that deep nets solve a function approximation problem we're trying to use training data to approximate the prediction function from data to some prediction might be a regression problem might be a classification problem and we", "start_timestamp": "00:01:24", "end_timestamp": "00:02:06", "start_second": 84, "end_second": 126, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=84s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "do this in a hierarchical way through a number of layers in a network and what I'm going to argue is that deep nets do this in a very particular kind of way using a spline approximation so show of hands how many people here know about splines okay so there's two key parts to a spline approximation the first is a partition of the input space or the domain so if X is just a one-dimensional input variable then we have a partition Omega in this case we're splitting the domain up into four different regions we're
now the second important thing is", "start_timestamp": "00:02:06", "end_timestamp": "00:02:43", "start_second": 126, "end_second": 163, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=126s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "there's a local mapping that we use to approximate some function in this case so we have a blue function we want to approximate here and we approximate it we're gonna be interested in piecewise affine or piecewise linear splines by just in this case four piecewise affine mappings okay makes sense to everybody but it's really this yin-yang relationship between the partition and the mapping that works the magic in splines there's two big classes of splines there's the really powerful splines for example free-knot splines this is where", "start_timestamp": "00:02:43", "end_timestamp": "00:03:19", "start_second": 163, "end_second": 199, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=163s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "you let the partition be arbitrary and then what you do is you jointly optimize both the partition and the local mapping these allow you to have the highest quality approximation but it's important to note that they're computationally intractable in 1d in fact in higher dimensions two and above it's not even clear how to define what a free-knot spline is so these are something we'd really like to be able to do but very very difficult typically what people do is they fall back to some kind of gridding type", "start_timestamp": "00:03:19", "end_timestamp": "00:03:52", "start_second": 199, "end_second": 232, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=199s", "title": "Mad Max: Affine Spline Insights into Deep Learning",
"thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "technique and if you think of even what wavelets are they're really just a dyadic grid approximation type of spline so what we're going to focus on today is a particular family of splines that we call we don't call that we're kind maxify splines by Stephen Boyd a number of years ago and these were developed just for the idea of approximating a convex function with a continuous piecewise affine approximation okay so let's do continue with this really simple example to just define a max of fine spline we're interested in approximating", "start_timestamp": "00:03:52", "end_timestamp": "00:04:30", "start_second": 232, "end_second": 270, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=232s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "this could convex function over R capital R regions so we assume that we have our affine functions these are parameterised by a set of slopes and a set of biases we're gonna have our set to those here's an example for R equals four we have four separate four of these distinct affine functions and if the key thing about the reason why we call the maxify splines is very conveniently if we want to approximate a convex function by these splines all we have to do is take the maximum the vertically highest in this case I find function okay so if", "start_timestamp": "00:04:30", "end_timestamp": "00:05:08", "start_second": 270, "end_second": 308, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=270s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "you think of these four f-find functions that we thrown down here and we think of approximating this blue curve all we're going to be using is 
simply the top right the piece that actually sits on the top okay and so the really important thing here is that just by fixing these four sets of slopes and biases this automatically generates an implicit partition of the input space right you switch from one partition region to the next whenever these affine functions cross and that's gonna be you know really important for later this makes", "start_timestamp": "00:05:08", "end_timestamp": "00:05:48", "start_second": 308, "end_second": 348, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=308s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "sense to everybody very very simple right of course this also gives a continuous approximation so let's just think a little bit about pointing towards deep nets without going into a lot of details just imagine so we're still in one dimension and we take our input X we scale by a add a bias B and then pass it through a ReLU right this operation here well it's pretty easy to show that this is a max affine spline approximation with R equals 2 affine functions the first being the flat zero function and then the second being", "start_timestamp": "00:05:48", "end_timestamp": "00:06:25", "start_second": 348, "end_second": 385, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=348s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "basically think of this like the ReLU but now shifted over and with the slope changed by the parameter a okay so this should get yourself thinking about other deep net operations and whether they can be related to max affine splines we're going to define a max affine spline operator simply by concatenating K of these max affine splines so you
can think of an input vector now we're no longer in 1d X is in D dimensions and then we have K different splines and the output of each of those splines will", "start_timestamp": "00:06:25", "end_timestamp": "00:07:01", "start_second": 385, "end_second": 421, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=385s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "just be one entry of this output vector Z we're gonna call that a MASO or max affine spline operator so what's the key realization well let's start by just talking about deep nets okay if you think of the lion's share of the deep nets that are used today basically any architecture you can think of using piecewise linear or you know affine operators fully connected operators convolution operators leaky ReLU or ReLU absolute value any of these types of poolings these are all built the state-of-the-art methods", "start_timestamp": "00:07:01", "end_timestamp": "00:07:43", "start_second": 421, "end_second": 463, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=421s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "are all built out of these kinds of architectures and these kinds of operators and it's actually pretty easy to show that all of these operators that comprise the layers of essentially all of today's state-of-the-art deep nets are max affine spline operators you can think of each layer of a deep net as just a max affine spline operator and so we have a convex approximation going on at each of these layers and therefore a deep net is just a composition of max affine spline operations", "start_timestamp": "00:07:43", "end_timestamp": "00:08:19", "start_second": 463, "end_second":
499, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=463s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "no longer a convex operator because composition of two convex operators isn't necessarily convex okay so so this is gonna so we're just going to call this in a fine spline operator it remains continuous but it doesn't have this max affine spline property anymore I just as an aside if you wanted the overall net to be convex it's pretty easy to constraint to show that all you need to do is just ensure that the all of the weights in the second layer and onward are positive number right that guarantees that the overall", "start_timestamp": "00:08:19", "end_timestamp": "00:08:55", "start_second": 499, "end_second": 535, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=499s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "mapping will be convex any questions about this very simple baseline stuff okay so they'll recall the nice thing about these these particular splines is that as soon as you fix the parameters right the slopes and the offsets wherever those hyper planes are these affine varieties cross that defines a partition and that's really where things get interesting right is to think about the partitioning that goes on in these maxify spline operators because that allows us to think a lot about the geometry of what's going on in a deep", "start_timestamp": "00:08:55", "end_timestamp": "00:09:32", "start_second": 535, "end_second": 572, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=535s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "network so again just reiterating what I just 
said if you think about a set of parameters of a deep network layer they're gonna automatically induce a partition of the input space of that particular layer into convex regions and then if you compose several of these layers we're going to form a non convex partition of the input space and this provides really interesting non-trivial links to classical ideas out of signal processing information theory computational geometry namely ideas like vector quantization k-means and", "start_timestamp": "00:09:32", "end_timestamp": "00:10:10", "start_second": 572, "end_second": 610, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=572s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "Voronoi tiles and we'll get into these as we go so one of the key ideas is linking these modern deep net ideas back to more classical signal processing ideas so let's just do a toy example so that you can visualize what goes on in one of these vector quantization partitions of the input space so let's just consider a toy example a three-layer net we go from an input space that's two-dimensional we're gonna have four classes in the two dimensional input 2d just so we can visualize since we can't visualize really anything beyond 3d we", "start_timestamp": "00:10:10", "end_timestamp": "00:10:46", "start_second": 610, "end_second": 646, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=610s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "go to a layer of 45 units then three units and then we're gonna do a four class classification problem so we have four units on the output okay makes sense to everybody so this is the input this is what goes on in the input space we have four different classes with these four colors our goal is to build a
classifier to tell these apart this is the first axis of the input space the second axis and this is what happens after we go through the first layer right we go to 45 units the MASO layer that maps the", "start_timestamp": "00:10:46", "end_timestamp": "00:11:22", "start_second": 646, "end_second": 682, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=646s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "input to the output of this first layer this is the vector quantization partition or the spline partition that you obtain importantly we're going through a single layer so the tiling is convex right these are convex regions right makes sense okay moreover let's remember that these are splines after all so we can ask what is the mapping from the input of the first layer to the output of the first layer well it's just a very simple affine map because it's an affine spline after all okay so once you know", "start_timestamp": "00:11:22", "end_timestamp": "00:11:57", "start_second": 682, "end_second": 717, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=682s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "a signal right a particular x lands in this particular tile right that gives you a particular matrix A right and an offset vector B right that are different for every VQ tile but then the mapping from the input to the output of that layer is just simply this affine map so you can think of the mapping from the input to the output of one deep net layer as just a VQ dependent affine transformation so this is one layer so now if we go through two layers and we think of the partition induced on the", "start_timestamp": "00:11:57", "end_timestamp": "00:12:35",
"start_second": 717, "end_second": 755, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=717s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "input space we now see that we start picking up non-convex or we start having non convex regions because the non-convex operator however we still have the same concept right that if a signal falls in this particular tile right this particular partition region the mapping from the input to the output of the second layer remains just simply in affine map right where the a and the B are indexed by this particular tile and just to be SuperDuper clear about it one more time every signal that lives in this tile that falls in this tile on the", "start_timestamp": "00:12:35", "end_timestamp": "00:13:14", "start_second": 755, "end_second": 794, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=755s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "input space has the exact same f-fine mapping okay and this is what happens when you learn just to see when you if you initialize with random random weights zero biases you just get a set of cones and as we go through learning epochs you see that we end up with these cones pulling away from the origin and then cones being cut up by other cones and we result again and for at least layers wanted to this particular mapping it at convergence okay and I'm gonna III think that it's really thinking of this geometrical", "start_timestamp": "00:13:14", "end_timestamp": "00:13:49", "start_second": 794, "end_second": 829, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=794s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": 
"picture is really very useful to think about the inner workings of what's going on in a in a deep network in particular a deep net is a VQ mer machine right it's computing a vector quantization so was their question yeah we said oh let me just think if I got this right we set all the biases to zero in the whole in the whole network yeah so we'll just still it's still there's no beat there's no box set so it's just gonna remain yeah calling the corners of Collins is just calm that makes sense okay good so let's talk a little bit about some of", "start_timestamp": "00:13:49", "end_timestamp": "00:14:41", "start_second": 829, "end_second": 881, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=829s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "the geometrical properties you can actually delve deeper into the what the structure of these VQ tiles and show that that the part of the partition of each of a single layer right a single layers input space in terms of the output is something called it's actually not a Voronoi diagram it's something called power diagram question anybody here heard of power diagrams okay fantastic so it's a generalization of a Voronoi diagram now instead of just having a centroid it has a centroid and a radius all right so it's a it's a mild", "start_timestamp": "00:14:41", "end_timestamp": "00:15:18", "start_second": 881, "end_second": 918, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=881s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "generalization of a Voronoi tiling but the basically you just compute a Voronoi tiling but with something called a genre distance instead of the standard Euclidean distance but the tiles remain convex convex polytopes right and in high dimensional space 
moreover given these affine maps given the entries in these A matrices and these B bias vectors there are closed form formulas for the centroids and the radii that determine all of these polytopes so you can understand you can study the geometry of these the eccentricity", "start_timestamp": "00:15:18", "end_timestamp": "00:15:56", "start_second": 918, "end_second": 956, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=918s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "the size etc in closed form thanks to these formulas moreover it should be pretty clear that since you're piling layers upon layers the power diagram formed from let's say two MASO layers applied in composition is going to be formed by a subdivision process because the cuts from the input of the second layer to the output will basically cut the vq tiling from the first layer right and so this is just an example of an input space tiling the first layer will just be a set of", "start_timestamp": "00:15:56", "end_timestamp": "00:16:41", "start_second": 956, "end_second": 1001, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=956s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "straight line cuts the second layer is going to be a subdivision of those cuts we colored them gray here but now the important thing is that the cuts are going to be bent right they're going to be bent at the gray boundaries which are the boundaries defined by the first layer cuts and by these bends you can actually compute bounds for example on the dihedral angles and these bends are precisely to maintain continuity of the mapping from the input to the
output of this operator if you didn't have these", "start_timestamp": "00:16:41", "end_timestamp": "00:17:17", "start_second": 1001, "end_second": 1037, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=1001s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "bending then you could have the spline become non continuous okay but again these bends are very important and they have a lot to do with weight sharing in deep networks so one of the conclusions you can just take away from this part partway through is that deep networks are really a very practical very efficient way of doing something that is extremely difficult which is free-knot spline approximation in high dimensions all right that's really what deep networks are doing you could carry this all the way to the last layer in a", "start_timestamp": "00:17:17", "end_timestamp": "00:17:54", "start_second": 1037, "end_second": 1074, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=1037s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "classification problem say and study the decision boundary of the deep net it is again basically just one of these cuts and you can understand for example the smoothness of the boundary by the fact that you can only have so much bending between the cuts when you cut through the power diagram partition that you obtained from the previous regions there's lots of things that can be done to understand for example smoothness of the decision boundaries in different kinds", "start_timestamp": "00:17:54", "end_timestamp": "00:18:27", "start_second": 1074, "end_second": 1107, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=1074s", "title": "Mad Max: Affine Spline Insights into Deep
Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "of deep Nets this is one direction that we've been exploring the other is looking in particular at these affine mappings so again when you're in in a VQ tile you know that there's for all signals that live in that tile there's just a fine map that goes from the input to the output what what what what properties can we do for me glean from these okay so in particular if we think let's just study the the simple the the case of input to the output of the entire network okay which we'll call this Z big L you can", "start_timestamp": "00:18:27", "end_timestamp": "00:19:04", "start_second": 1107, "end_second": 1144, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=1107s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "ignore the softmax that doesn't really enter into any of the the discussions I'm gonna bring up but we're interested in the mapping through all the layers of the network the this affine mapping formula applies no matter where you are in the network you'll just have different A's and B's but we're interested in the one from the input to the very output okay from the input to the very output well you can develop closed-form formulas for this map particularly for a continent this is what the a the a matrix looks like this", "start_timestamp": "00:19:04", "end_timestamp": "00:19:36", "start_second": 1144, "end_second": 1176, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=1144s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "is what this offset B looks like we can know all of these matrices here in close form so you can do different kinds of analyses for example look at the Lipschitz stability 
two different points in the network based on different inputs but the thing I'm most interested in talking about here is what do the rows of this A matrix look like because if you think about this what is the output of the deep net right everything up until the softmax well it's basically just a matrix a multiplied just ignore this typo it's", "start_timestamp": "00:19:36", "end_timestamp": "00:20:09", "start_second": 1176, "end_second": 1209, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=1176s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "this matrix A multiplied by my signal x plus a bias well then this is just a vector how big is this vector there is one output for every class right and how do I determine which class the input is it's whichever of these outputs is largest right okay so let's think we have a matrix A that we're multiplying by our X each entry in this output is just an inner product of a row of A with X so what is that right if we think about this matrix A well the c-th row corresponding to class C we're just going to take", "start_timestamp": "00:20:09", "end_timestamp": "00:20:49", "start_second": 1209, "end_second": 1249, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=1209s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "the inner product of the c-th row with X in order to get the c-th output of Z we want to find the biggest what do we call this in signal processing nomenclature we call this a matched filter bank right because basically what we're doing is we're applying to our signal a set of filters by inner products Cauchy-Schwarz tells us that the more the filter looks like the input the larger the output is going to be okay
and the optimal filter is where the row is exactly the input right standard stuff that's", "start_timestamp": "00:20:49", "end_timestamp": "00:21:26", "start_second": 1249, "end_second": 1286, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=1249s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "done in you know radar signal processing sonar communication systems etc and you can actually visualize this in a real network so this is just CIFAR-10 here's an input of an airplane here's the row of the corresponding A matrix for that input vector unvectorized so that it looks like an image if you squint it looks a lot like an airplane okay so we have a large inner product if you look at these other rows corresponding to the ship class dog class you see they don't actually look like a ship or a dog but", "start_timestamp": "00:21:26", "end_timestamp": "00:22:03", "start_second": 1286, "end_second": 1323, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=1286s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "more like an anti-airplane all right in order to push down the inner product largest inner product smallest even smaller inner product in fact yes sir I didn't talk about the bias but the way to think about the bias is if you're a Bayesian then the B's would be related to the prior probabilities of the different classes so if you knew that planes were very very likely you would load B with a large number in the corresponding entry does that make sense yeah so it's subtle", "start_timestamp": "00:22:03", "end_timestamp": "00:22:49", "start_second": 1323, "end_second": 1369, "url":
"https://www.youtube.com/watch?v=7Q2JhZxNPow&t=1323s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "but you could think of this as like a dictionary learning machine that's basically given an input is defining a Bayesian classifier does that help a little bit okay so and of course if you think about what the rows of these A matrices are and you think of the fact that we're decomposing the deep net input output relationship in terms of affine maps there's just a direct link between the rows of this A matrix and what are called saliency maps by the community so it gives new intuition", "start_timestamp": "00:22:49", "end_timestamp": "00:23:37", "start_second": 1369, "end_second": 1417, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=1369s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "behind what goes on when we think about saliency maps moreover you can prove a simple result that says if you have a high capacity deep net that's capable of producing basically any arbitrary A matrix if you will from a given input then you can show that the c-th row of the A matrix when you input a piece of training data xn is going to become exactly xn when c is the true class right xn's label and essentially minus a constant times xn when you're", "start_timestamp": "00:23:37", "end_timestamp": "00:24:21", "start_second": 1417, "end_second": 1461, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=1417s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "in a different class and so 
this will tell us a little bit both about again reinforcing this matched filter interpretation but also helping us understand a little bit about this memorization process okay a couple more points so another thing we can do now because we have formulas for these affine maps is characterize the prediction function f that maps the input to the output and we can think of different kinds of complexity measures that we can derive", "start_timestamp": "00:24:21", "end_timestamp": "00:24:58", "start_second": 1461, "end_second": 1498, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=1461s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "out of these affine mapping formulas so there's a lot of applications for complexity measures for example you might want to compare two deep networks one which has a very complicated prediction function the other that solves the same task but has a much simpler prediction function Occam's razor type idea we might also want to apply a complexity measure as a penalty directly to our learning process right so there's a large literature of deriving different complexity measures and complexity penalties for deep nets I'll just point", "start_timestamp": "00:24:58", "end_timestamp": "00:25:43", "start_second": 1498, "end_second": 1543, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=1498s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "to you know two of them as examples one is there's a very nice recent paper that links the ubiquitous two-norm of the weights penalty for learning to a particular measure of the second derivative of the prediction function all right so it really does say that for at least 
a very simple kind of network there's a link between the values of the weights and the wiggliness of f and then there's another school of approaches that looks at well we have a VQ tiling of the input space let's count the number of tiles because", "start_timestamp": "00:25:43", "end_timestamp": "00:26:26", "start_second": 1543, "end_second": 1586, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=1543s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "presumably the more codes there are in your codebook the more tiles there are the more complicated the function that you're trying to approximate so those are two approaches well I'm going to give one that really expands upon these two and it is leveraging really the fact that for lots of data sets in particular image type data sets we have a reasonably true property that the training data live on lower dimensional", "start_timestamp": "00:26:26", "end_timestamp": "00:27:04", "start_second": 1586, "end_second": 1624, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=1586s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "manifolds sub-manifolds of the high dimensional space okay so let's assume that our data lives not filling up the entire input space but living on some lower dimensional sub-manifold or sub-manifolds and in this case we can look into the manifold learning literature and there's a beautiful paper by Donoho and Grimes that defines what's called the Hessian eigenmap manifold learning technique which is basically trying to flatten a curvy manifold using the tangent Hessian along the manifold so we can just adopt this same 
measure", "start_timestamp": "00:27:04", "end_timestamp": "00:27:43", "start_second": 1624, "end_second": 1663, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=1624s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "and we can define a complexity measure C as the integral of the tangent Hessian along the manifold so you can just think of it roughly speaking as look you have f a continuous piecewise-defined function and what we're looking at is the local deviation of f from flat so if f of x was a completely flat function this measure would be 0 if f was very jaggedy meaning locally when you look over a few regions it's jumping up and down wildly this will be a large number yes well", "start_timestamp": "00:27:43", "end_timestamp": "00:28:29", "start_second": 1663, "end_second": 1709, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=1663s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "simply by integrating along this basically we're just integrating along the tangent manifold part yes and you could also you know just integrate over the entire space but then you lose some of the nice properties did that help oh yeah and we can talk about it after so the nice thing about this measure is that you can develop a Monte Carlo approximation in terms of the training data points the xn's that are your training data and the affine mapping parameters so it's", "start_timestamp": "00:28:29", "end_timestamp": "00:29:10", "start_second": 1709, "end_second": 1750, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=1709s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": 
"https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "actually extremely easy to compute the value of C given a set of training points and the affine mapping parameters just think of p as two for right now ideally you will choose the p depending on for example the manifold dimension the ambient dimension yeah let's talk about it at the break the data manifolds assume the training data are samples from some sub-manifold in the ambient space because there are two factors that", "start_timestamp": "00:29:10", "end_timestamp": "00:30:07", "start_second": 1750, "end_second": 1807, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=1750s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "can increase C one is f the other you mean the smoothness of the manifold yeah absolutely but if we just assume let's just say you have two prediction functions and their domain in both cases is the same manifold then that would be normalized out right yes yeah are you no longer working with ReLUs here this is where ReLU is absolutely a convex piecewise affine nonlinearity then zero everywhere yeah let's talk about it offline I think otherwise I'll run out of time", "start_timestamp": "00:30:07", "end_timestamp": "00:30:52", "start_second": 1807, "end_second": 1852, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=1807s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "yeah well it won't be yeah okay so let's look at an application of this to something that is I would say somewhat still mysterious in the deep learning 
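A toy version of that Monte Carlo computation (my own construction for illustration; the function names, the finite-difference smoothing, and p = 2 are assumptions, not the paper's exact estimator):

```python
import numpy as np

# Approximate a "local deviation from flat" complexity C for a function f by
# averaging a finite-difference second derivative over sample points (a
# stand-in for integrating the tangent Hessian over training samples x_n).
# eps plays the role of the smoothing bandwidth at the kinks.
def complexity(f, xs, eps=1e-3, p=2):
    d2 = (f(xs + eps) - 2.0 * f(xs) + f(xs - eps)) / eps**2
    return np.mean(np.abs(d2) ** p)

xs = np.linspace(-1.0, 1.0, 201)           # Monte Carlo sample points
affine = lambda x: 3.0 * x + 1.0           # globally flat: C ~ 0
relu = lambda x: np.maximum(x, 0.0)        # one kink at the origin
jagged = lambda x: np.abs(np.sin(20 * x))  # many kinks, jumps up and down

print(complexity(affine, xs) < 1e-6)                  # True
print(complexity(relu, xs) < complexity(jagged, xs))  # True
```

More sample points would sample the manifold more densely and bring the approximation closer to the true integral, which is the point made in the Q&A above.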
world and that is data augmentation so if we think of how deep networks are trained today they're typically trained with this technique called data augmentation where we don't just feed in the raw images right we feed in translates right rotations etc and if we have a hypothesis that the images somehow came from a lower", "start_timestamp": "00:30:52", "end_timestamp": "00:31:36", "start_second": 1852, "end_second": 1896, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=1852s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "dimensional manifold in high dimensional space where points on that manifold were translates and rotations of each other then it's very convenient that these augmented data points that are generated from just your raw initial training data will live on the same data manifold okay so in this particular setting you can prove a result that says that just writing out the cost of say a cross-entropy cost function with data augmentation terms you can actually take those data", "start_timestamp": "00:31:36", "end_timestamp": "00:32:14", "start_second": 1896, "end_second": 1934, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=1896s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "augmentation terms and pull them out of the first part of the cost function and show that they basically form this Hessian complexity regularization penalty okay so what that's saying is that data augmentation implicitly implements a Hessian complexity regularization on the optimization okay so that's the theorem here's a simple experiment with the CIFAR-100 data so this is training epochs on
the x-axis this complexity measure on the vertical axis and all we're doing here", "start_timestamp": "00:32:14", "end_timestamp": "00:32:59", "start_second": 1934, "end_second": 1979, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=1934s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "is as we're training the network trained without data augmentation in black and with data augmentation in blue what we're looking at based on the A's and the b's that we have learned with the network we're plugging that into our complexity measure that was on the previous slide and we're seeing that the measure is showing that the network that has learned using data augmentation has far lower complexity than the network that has learned without data augmentation this is both on the", "start_timestamp": "00:32:59", "end_timestamp": "00:33:33", "start_second": 1979, "end_second": 2013, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=1979s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "training and on the test data yeah does the regularization depend on the loss is this interpretation arising due to the squared loss oh yeah good question well in this case it was cross-entropy loss we work with rotations of two similar images that have the same label yeah so the loss compares labels and so the regularization has to change too yeah so just for now in the interest of time let's just assume cross-entropy loss for a classification", "start_timestamp": "00:33:33", "end_timestamp": "00:34:33", "start_second": 2013, "end_second": 2073, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=2013s", "title": "Mad Max: 
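The claim that the augmentation terms sit inside the ordinary training loss can be made concrete with a toy cross-entropy (all names and values here are mine, for illustration only):

```python
import numpy as np

# Cross-entropy over a batch plus explicit augmentation terms: each augmented
# copy t(X) shares the labels y of the original data. The talk's result is
# that these extra terms behave like a Hessian (roughness) penalty.
def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def ce(logits, y):
    return -np.mean(np.log(softmax(logits)[np.arange(len(y)), y]))

def augmented_loss(W, X, y, transforms):
    loss = ce(X @ W, y)                  # plain data term
    for t in transforms:                 # augmentation terms, pulled out
        loss += ce(t(X) @ W, y)
    return loss

X = np.array([[1.0, 0.0], [0.0, 1.0]])
y = np.array([0, 1])
W = np.eye(2)
shift = lambda X: X + 0.1                # a stand-in for a translate
print(augmented_loss(W, X, y, [shift]) > ce(X @ W, y))  # True
```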
Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "problem rather than l2 loss for a regression problem and then let me think and maybe I'll have a better answer by the time that we get to questions yeah the complexity measure is only a function of the model but not the loss is that right absolutely yes okay so one last quick note what can we do beyond piecewise-defined deep nets because sigmoid hyperbolic tanh these are still very useful in certain applications in particular in recurrent deep nets and it turns out that you can bring those under the same umbrella of this max", "start_timestamp": "00:34:33", "end_timestamp": "00:35:22", "start_second": 2073, "end_second": 2122, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=2073s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "affine spline framework and the way to do that is to switch from a deterministic hard vector quantization way of thinking where if x lives in this particular vector quantization tile it definitively lives in that vector quantization tile to a soft VQ approach where now we have a probability that x will fall in a given vector quantization tile where for this particular signal maybe there's a high probability in this tile somewhat smaller in the local region of neighboring tiles and then decreasing", "start_timestamp": "00:35:22", "end_timestamp": "00:36:03", "start_second": 2122, "end_second": 2163, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=2122s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "probability as you move away so if you just set up a very simple Gaussian mixture model where the means and 
covariances are based on these A's and b's that we derive you can basically derive nonlinearities like the sigmoid like the softmax directly from ReLU absolute value and other piecewise-defined convex nonlinearities and in particular if you look at a hybrid approach between a hard VQ and a soft VQ alright where you're basically blending between the two you can", "start_timestamp": "00:36:03", "end_timestamp": "00:36:46", "start_second": 2163, "end_second": 2206, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=2163s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "generate infinite classes of interesting and potentially useful nonlinearities and I'll just point out one how many people here have heard of the swish nonlinearity a few so this was a nonlinearity that was discovered a few years ago through an empirical search that is the empirical search for is there a nonlinearity that works better than ReLU right for large-scale classification problems and it turned out there was and it was a fairly sizable you know non-trivial gain in a lot of cases and it's this black", "start_timestamp": "00:36:46", "end_timestamp": "00:37:22", "start_second": 2206, "end_second": 2242, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=2206s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "dashed line here and the interesting thing it's hard to know if it's a coincidence or not but if you look at in some sense the midway point between hard VQ and soft VQ based on the ReLU function at the hard VQ side and the sigmoid gated linear unit at the soft VQ side the swish is precisely halfway in between which is quite interesting okay you could also pull out sigmoid hyperbolic 
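The two endpoints of that hard-to-soft VQ blend can be checked directly (a sketch of mine, not code from the talk; here swish_b(x) = x * sigmoid(b * x), with b = 1 giving the sigmoid gated linear unit and large b recovering ReLU):

```python
import numpy as np

def sigmoid(z):
    # numerically stable logistic function (only exponentiates negatives)
    z = np.asarray(z, dtype=float)
    out = np.empty_like(z)
    pos = z >= 0
    out[pos] = 1.0 / (1.0 + np.exp(-z[pos]))
    ez = np.exp(z[~pos])
    out[~pos] = ez / (1.0 + ez)
    return out

def swish(x, b):
    # blends the soft VQ end (b = 1, the SiLU) and the hard VQ end (b large)
    return x * sigmoid(b * x)

x = np.linspace(-4.0, 4.0, 9)
silu = swish(x, 1.0)                     # soft VQ endpoint
relu = np.maximum(x, 0.0)                # hard VQ endpoint
print(np.allclose(swish(x, 1e6), relu))  # True: large b recovers ReLU
```

The empirically discovered swish sits between these two endpoints, which is the "halfway between hard and soft VQ" observation above.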
tangent by adopting a probabilistic viewpoint of the output of a layer no longer being just a deterministic output of the input but", "start_timestamp": "00:37:22", "end_timestamp": "00:38:05", "start_second": 2242, "end_second": 2285, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=2242s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "instead the probabilities that you fall in the different VQ regions of the input that's what we can do beyond piecewise so I better wrap up so what I hope to get across is that this spline in particular max affine spline viewpoint can provide a useful language to talk about the things that we're learning about deep networks but also frame the kind of questions that we would like to move forward with I talked a bit about the basic you know framework of max affine splines and deep nets I talked about the relationships with vector", "start_timestamp": "00:38:05", "end_timestamp": "00:38:45", "start_second": 2285, "end_second": 2325, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=2285s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "quantization and really that a deep net you could think of it as a vector quantization machine or you could think of it as a free-knot spline machine there's really I think interesting links between power diagrams from computational geometry and the subdivision that's generated by this layer upon layer max affine spline process the affine transformations that we derive based on these different VQ regions allow us to link deep nets back to old-school signal processing ideas like matched", "start_timestamp": "00:38:45", "end_timestamp": "00:39:25", "start_second": 2325, "end_second": 2365, "url": 
"https://www.youtube.com/watch?v=7Q2JhZxNPow&t=2325s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "filter banks and they allow us to define new kinds of complexity measures in particular this Hessian measure that we talked about it's all in there and there are some papers if people would like to take a peek and I'd be happy to answer any additional questions [Applause] your question is really about the second derivative so yeah basically the way that we think about it is okay here's a heuristic way of thinking about it if you had a piecewise-defined", "start_timestamp": "00:39:25", "end_timestamp": "00:40:16", "start_second": 2365, "end_second": 2416, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=2365s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "function it's gonna be undefined at the kinks obviously right but if you think of basically any heuristic way you can think of to smooth out that kink then the second derivative is going to be related to the slope parameters on one side and the slope parameters on the other side yeah exactly and the bandwidth of this smoothing that's the epsilon that was in that formula there are details that we could you know talk about at the break yes", "start_timestamp": "00:40:16", "end_timestamp": "00:40:56", "start_second": 2416, "end_second": 2456, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=2416s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "[Music] you mean like how large 
the measure yeah so that was the point of this experiment really to look at as we are training so as we're going through training what is happening you know what is the value of in this case this complexity measure as we train the network through the various training cycles both in this case with data augmentation and in this case without data augmentation does that make sense so just in a nutshell", "start_timestamp": "00:40:56", "end_timestamp": "00:41:48", "start_second": 2456, "end_second": 2508, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=2456s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "think of a Gaussian mixture model defined in terms of means and covariances where now the means and covariances are defined in terms of these different tiles pardon me so okay what's the best way to describe it so start from a hard VQ where we have a tiling now because of the power diagram you have a radius and you have a centroid now use that radius and that centroid as the parameters to develop a Gaussian mixture model for example a you know circularly symmetric Gaussian mixture", "start_timestamp": "00:41:48", "end_timestamp": "00:42:41", "start_second": 2508, "end_second": 2561, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=2508s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "with particularly the radius being the variance now under that model given an input you can think about the probability that it falls into each of these individual tiles as determined by the probability under each of those 
mixtures does that make sense and if you now look at these probabilities these probabilities behave like in the case of say you start with the tiling derived from ReLU you will end up with a set of", "start_timestamp": "00:42:41", "end_timestamp": "00:43:15", "start_second": 2561, "end_second": 2595, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=2561s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "probabilities that follow the functional form of a sigmoid gated linear unit in that case does that help okay my question is I assume there is a procedure to arrive at this so can we go backwards you're saying I need to think about that yeah we were thinking only in the one direction going from a hard VQ to a soft VQ presumably you could reverse that process if you have a certain kind of nonlinearity but it would have", "start_timestamp": "00:43:15", "end_timestamp": "00:43:57", "start_second": 2595, "end_second": 2637, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=2595s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "to be not all possible nonlinearities are reachable via this soft relaxation of this hard VQ right you can't reach arbitrary nonlinearities right only certain kinds of nonlinearities if you wanted to reach arbitrary ones there's no way you could do that with just a standard kind of Gaussian mixture framework does that help yeah yes go back to the context to make sure or are we here it's a very good question um so it sounds like a", "start_timestamp": 
"00:44:37", "start_second": 2637, "end_second": 2677, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=2637s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "blood test sorry for the low D a after 150 epochs there's no number is that just because of the plotting so that's something happened like oh sorry yeah this is an artifact of the plotting we probably should have stopped the plot here yeah yes like the total number of examples yeah so that's a really good question because in fact okay there's the complexity measure and then there's the computation of the approximation of the complexity measure the more training", "start_timestamp": "00:44:37", "end_timestamp": "00:45:28", "start_second": 2677, "end_second": 2728, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=2677s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "samples you have the more densely you will sample the manifold the signal manifold and the closer your approximation will be to the true measure and so the more training data that you have the closer you'll get to the true measure yeah that's a really good question so there are methods that attempt to do adaptive knot selection methods like trend filtering yeah and you can apply them with some modifications to multivariate data and get adaptive knots yeah is there some sense of whether it would give you similar results right hierarchical", "start_timestamp": "00:45:28", "end_timestamp": "00:46:16", "start_second": 2728, "end_second": 2776, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=2728s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} 
{"video_id": "7Q2JhZxNPow", "text": "VQ would be another yeah yeah but is there some sense about the properties of these quantizations and these tilings that will differentiate you from something that more directly tries to penalize to me this is a really key unanswered question right there are you know a number of different ways to try to find different free-knot spline approximations in higher dimensions and why are deep nets you know", "start_timestamp": "00:46:16", "end_timestamp": "00:46:58", "start_second": 2776, "end_second": 2818, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=2776s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "7Q2JhZxNPow", "text": "different or better and this is a big unanswered question there's no question that the methods that we use the you know current training methods our optimization approaches are enabling us to find these free-knot spline approximations in truly ridiculously high dimensions right where a lot of these other techniques you wouldn't even attempt them right but still it does not mean that we've stumbled on the best way of doing this so I think that as we think of new kinds of optimization", "start_timestamp": "00:46:58", "end_timestamp": "00:47:38", "start_second": 2818, "end_second": 2858, "url": "https://www.youtube.com/watch?v=7Q2JhZxNPow&t=2818s", "title": "Mad Max: Affine Spline Insights into Deep Learning", "thumbnail": "https://i.ytimg.com/vi/7Q2JhZxNPow/maxresdefault.jpg"} {"video_id": "oDKXwxaGkNA", "text": "[Music] all right thank you very much for the introduction and I hope you had a nice lunch and welcome to this talk about self supervised deep learning towards autonomously learning machines as you already heard my name is 
Simon Stiebellehner I'm head of AI at craftworks and also lecture at a couple of universities and craftworks well you might have heard about us we are a Vienna based artificial intelligence and big data company specializing in solving hard industrial problems using artificial intelligence and of course data also", "start_timestamp": "00:00:00", "end_timestamp": "00:00:44", "start_second": 0, "end_second": 44, "url": "https://www.youtube.com/watch?v=oDKXwxaGkNA&t=0s", "title": "Self-Supervised Learning - Towards Autonomously Learning Machines\u2014Simon Stiebellehner", "thumbnail": "https://i.ytimg.com/vi/oDKXwxaGkNA/maxresdefault.jpg"} {"video_id": "oDKXwxaGkNA", "text": "most of our clients come from the industry ranging from the automotive sector all the way to the energy sector and pretty much everything in between the topic of this talk self supervised deep learning is also primarily motivated by the work we do with our clients before we jump right into the topic we first need to say a few words about the current state of artificial intelligence though probably as most of you are aware AI has come a pretty long way in the last couple of years in fact we have made massive progress for", "start_timestamp": "00:00:44", "end_timestamp": "00:01:17", "start_second": 44, "end_second": 77, "url": "https://www.youtube.com/watch?v=oDKXwxaGkNA&t=44s", "title": "Self-Supervised Learning - Towards Autonomously Learning Machines\u2014Simon Stiebellehner", "thumbnail": "https://i.ytimg.com/vi/oDKXwxaGkNA/maxresdefault.jpg"} {"video_id": "oDKXwxaGkNA", "text": "
describes what's happening in an image so called scene understanding so obviously great great progress has been", "start_timestamp": "00:01:17", "end_timestamp": "00:01:51", "start_second": 77, "end_second": 111, "url": "https://www.youtube.com/watch?v=oDKXwxaGkNA&t=77s", "title": "Self-Supervised Learning - Towards Autonomously Learning Machines\u2014Simon Stiebellehner", "thumbnail": "https://i.ytimg.com/vi/oDKXwxaGkNA/maxresdefault.jpg"} {"video_id": "oDKXwxaGkNA", "text": "made when it comes to artificial intelligence just in the last year's and it turns out that many of these breakthroughs many of these things you read about in the news are actually based on something that's called supervised deep learning and this brings us to the first part of this talk supervised deep learning the good and the ugly we will find - I supervise deep learning is so great and so many breakthroughs are based on it but also we will get to know it's very ugly side one of its major downsides one of its major weaknesses", "start_timestamp": "00:01:51", "end_timestamp": "00:02:24", "start_second": 111, "end_second": 144, "url": "https://www.youtube.com/watch?v=oDKXwxaGkNA&t=111s", "title": "Self-Supervised Learning - Towards Autonomously Learning Machines\u2014Simon Stiebellehner", "thumbnail": "https://i.ytimg.com/vi/oDKXwxaGkNA/maxresdefault.jpg"} {"video_id": "oDKXwxaGkNA", "text": "subsequently we will find out how taking a self supervised approach to deep learning can help us overcome or at least mitigate that core weakness and finally we will look into an industry case that hopefully gives you a good idea of how you can use self supervision in practice to make your models better and more robust but first let's look into supervised learning probably some of you or maybe if many of you know this data set it's an incredibly popular famous data set even containing images of obviously cats and dogs and this data", "start_timestamp": "00:02:24", "end_timestamp": "00:03:00", 
"start_second": 144, "end_second": 180, "url": "https://www.youtube.com/watch?v=oDKXwxaGkNA&t=144s", "title": "Self-Supervised Learning - Towards Autonomously Learning Machines\u2014Simon Stiebellehner", "thumbnail": "https://i.ytimg.com/vi/oDKXwxaGkNA/maxresdefault.jpg"} {"video_id": "oDKXwxaGkNA", "text": "set is used by data scientists around the world to do their first steps in image classification using especially deep learning so usually the task at hand is building a classifier that differentiates between cats and dogs based on images theoretically you could approach this problem from two sides you could approach it from the unsupervised side in unsupervised learning we do not use labels right and supervised learning would be the other side we do use labels so we do use textual or numeric information that tells us if there", "start_timestamp": "00:03:00", "end_timestamp": "00:03:33", "start_second": 180, "end_second": 213, "url": "https://www.youtube.com/watch?v=oDKXwxaGkNA&t=180s", "title": "Self-Supervised Learning - Towards Autonomously Learning Machines\u2014Simon Stiebellehner", "thumbnail": "https://i.ytimg.com/vi/oDKXwxaGkNA/maxresdefault.jpg"} {"video_id": "oDKXwxaGkNA", "text": "really is a cat or if there really is a dog in that image just a quick recap to bring us on the same page in unsupervised learning as I mentioned we don't use labels we just try to detect similarities and dissimilarities in these images and based on that form homogeneous clusters so that hopefully we end up with a cat and a dog cluster of course since you're not using labels here you lack supervision so usually results will not be optimal actually the way better choice for this kind of task really is supervised learning especially", "start_timestamp": "00:03:33", "end_timestamp": "00:04:05", "start_second": 213, "end_second": 245, "url": "https://www.youtube.com/watch?v=oDKXwxaGkNA&t=213s", "title": "Self-Supervised Learning - Towards Autonomously Learning Machines\u2014Simon 
Stiebellehner", "thumbnail": "https://i.ytimg.com/vi/oDKXwxaGkNA/maxresdefault.jpg"} {"video_id": "oDKXwxaGkNA", "text": "supervised deep learning because in supervised deep learning by using different deep learning techniques such as convolutional neural networks we can actually teach them what makes a cat a cat and what makes a dog a dog and they learn that in a fully automated fashion by providing them labels by providing them ground truth whether there really is a cat or there really is a dog in that image and based on that they then make the classification and results can actually be astonishingly good right we can achieve human", "start_timestamp": "00:04:05", "end_timestamp": "00:04:40", "start_second": 245, "end_second": 280, "url": "https://www.youtube.com/watch?v=oDKXwxaGkNA&t=245s", "title": "Self-Supervised Learning - Towards Autonomously Learning Machines\u2014Simon Stiebellehner", "thumbnail": "https://i.ytimg.com/vi/oDKXwxaGkNA/maxresdefault.jpg"} {"video_id": "oDKXwxaGkNA", "text": "and even superhuman performance especially for these types of tasks which are very specific and also based on images you can achieve astonishingly great results using supervised deep learning but well for reaching that great performance that you often read about in the news for reaching human level and superhuman level performance very often you're gonna need just tons of labeled data and really tons we are speaking about tens of thousands hundreds of thousands millions of labeled images and that is a lot and", "start_timestamp": "00:04:40", "end_timestamp": "00:05:14", "start_second": 280, "end_second": 314, "url": "https://www.youtube.com/watch?v=oDKXwxaGkNA&t=280s", "title": "Self-Supervised Learning - Towards Autonomously Learning Machines\u2014Simon Stiebellehner", "thumbnail": "https://i.ytimg.com/vi/oDKXwxaGkNA/maxresdefault.jpg"} {"video_id": "oDKXwxaGkNA", "text": "this is also a big big problem because obviously labeled data is so 
important for achieving this great performance but at the same time labeled data is scarce right data we have a lot of it we have data lakes and data warehouses full of data but actually getting the labels that you need for solving your specific problem that's something that usually doesn't exist of course at that point you could argue well you know if I don't have these labels I can just label the data myself I can sit down at my desk look through these images of cats and", "start_timestamp": "00:05:14", "end_timestamp": "00:05:50", "start_second": 314, "end_second": 350, "url": "https://www.youtube.com/watch?v=oDKXwxaGkNA&t=314s", "title": "Self-Supervised Learning - Towards Autonomously Learning Machines\u2014Simon Stiebellehner", "thumbnail": "https://i.ytimg.com/vi/oDKXwxaGkNA/maxresdefault.jpg"} {"video_id": "oDKXwxaGkNA", "text": "dogs and note whether there is a cat or a dog in that image and I fully agree yes you can do that absolutely but you can imagine that this is quite some effort and that effort rises very quickly with the complexity of the image for example this image is an image we took from a real practical use case we did together with me burr moebus a large manufacturer of industrial parts and that image actually shows one of these parts well on that image in theory you can recognize defects in the underlying part well it's not an easy task because", "start_timestamp": "00:05:50", "end_timestamp": "00:06:27", "start_second": 350, "end_second": 387, "url": "https://www.youtube.com/watch?v=oDKXwxaGkNA&t=350s", "title": "Self-Supervised Learning - Towards Autonomously Learning Machines\u2014Simon Stiebellehner", "thumbnail": "https://i.ytimg.com/vi/oDKXwxaGkNA/maxresdefault.jpg"} {"video_id": "oDKXwxaGkNA", "text": "these images they are really large they are highly noisy and defects they can be incredibly subtle incredibly small so it even takes a human expert quite some time to reliably find and label mark all these defects on such an 
image so assume you want to automate that by training some supervised deep learning model and for that you need to build up a large labeled data set of let's say a hundred thousand labeled images you can imagine that that's going to be quite some effort and you know time is money so it's also going to be", "start_timestamp": "00:06:27", "end_timestamp": "00:07:01", "start_second": 387, "end_second": 421, "url": "https://www.youtube.com/watch?v=oDKXwxaGkNA&t=387s", "title": "Self-Supervised Learning - Towards Autonomously Learning Machines\u2014Simon Stiebellehner", "thumbnail": "https://i.ytimg.com/vi/oDKXwxaGkNA/maxresdefault.jpg"} {"video_id": "oDKXwxaGkNA", "text": "expensive how expensive well it's actually not too hard a calculation to compute this assume you use some publicly available labeling tool such as Amazon SageMaker Ground Truth for example they charge you I think if I remember correctly around four US dollars per labeled image then as we said before you can do 30 images per hour because you need two minutes per image and then of course it's not going to be you labeling but probably you're going to employ some working student for example who does the labeling and that", "start_timestamp": "00:07:01", "end_timestamp": "00:07:33", "start_second": 421, "end_second": 453, "url": "https://www.youtube.com/watch?v=oDKXwxaGkNA&t=421s", "title": "Self-Supervised Learning - Towards Autonomously Learning Machines\u2014Simon Stiebellehner", "thumbnail": "https://i.ytimg.com/vi/oDKXwxaGkNA/maxresdefault.jpg"} {"video_id": "oDKXwxaGkNA", "text": "working student charges you 15 US dollars per hour well and now this is critical highly critical but most people forget about it you do not only need one person labeling your images why well because a person can have a bad day a person you know might just not be too accurate on that task and this is actually the biggest problem these images as you saw are highly complex these defects are so difficult to spot 
it's really really easy to overlook one so you need multiple people labeling the same image and then you need to aggregate these", "start_timestamp": "00:07:33", "end_timestamp": "00:08:10", "start_second": 453, "end_second": 490, "url": "https://www.youtube.com/watch?v=oDKXwxaGkNA&t=453s", "title": "Self-Supervised Learning - Towards Autonomously Learning Machines\u2014Simon Stiebellehner", "thumbnail": "https://i.ytimg.com/vi/oDKXwxaGkNA/maxresdefault.jpg"} {"video_id": "oDKXwxaGkNA", "text": "labels to actually end up with a really robustly labeled data set if you don't do that you're gonna end up with a garbage dataset and you know how it is in machine learning garbage in garbage out your model is not gonna learn accurately what you want it to learn and if you do the math behind this you will find out that building up that labeled data set costs you ten thousand hours of work time and that amounts to more than a quarter of a million US dollars just in labeling cost and this is significant right of course if your business", "start_timestamp": "00:08:10", "end_timestamp": "00:08:44", "start_second": 490, "end_second": 524, "url": "https://www.youtube.com/watch?v=oDKXwxaGkNA&t=490s", "title": "Self-Supervised Learning - Towards Autonomously Learning Machines\u2014Simon Stiebellehner", "thumbnail": "https://i.ytimg.com/vi/oDKXwxaGkNA/maxresdefault.jpg"} {"video_id": "oDKXwxaGkNA", "text": "case is big a quarter of a million might be nothing you might not even lose a single thought on that but for most companies in most departments spending a quarter of a million on just building up a labeled dataset is a significant challenge because at that point you haven't built a single classifier at that point you haven't deployed anything in production you basically haven't shown any value and this is a problem and this is why we argue that supervised learning and especially supervised deep learning is just very often not feasible because", "start_timestamp": "00:08:44", 
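The cost estimate quoted here can be reproduced with a few lines of arithmetic. This is a sketch using the talk's figures (two minutes per image, a 15 US dollar hourly wage); the three-labelers-per-image redundancy factor is an assumption that makes the quoted ten thousand hours come out exactly.

```python
# Back-of-the-envelope labeling cost, using the figures from the talk.
images = 100_000           # target size of the labeled data set
minutes_per_image = 2      # one careful pass over one noisy industrial image
labelers_per_image = 3     # assumed redundancy so individual mistakes average out
hourly_wage = 15           # US dollars per hour for a labeling assistant

hours = images * minutes_per_image * labelers_per_image / 60
wage_cost = hours * hourly_wage

print(hours)      # 10000.0 hours of work time
print(wage_cost)  # 150000.0 US dollars in wages alone
```

Wages alone already reach 150,000 US dollars here; per-image tooling fees such as the quoted ~4 US dollars per labeled image push the total well past the quarter-million mark mentioned in the talk.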
"end_timestamp": "00:09:18", "start_second": 524, "end_second": 558, "url": "https://www.youtube.com/watch?v=oDKXwxaGkNA&t=524s", "title": "Self-Supervised Learning - Towards Autonomously Learning Machines\u2014Simon Stiebellehner", "thumbnail": "https://i.ytimg.com/vi/oDKXwxaGkNA/maxresdefault.jpg"} {"video_id": "oDKXwxaGkNA", "text": "first the labels you need to solve your problems probably don't exist and second if you want to create these labels well that's gonna cost you a lot of money and this is also why thought leaders of the field argue that the AI revolution is not going to be based on supervised learning and I agree how can the AI revolution be based on supervised learning if we don't have labels right if labels are not ubiquitous how can AI be ubiquitous it can't at least not when it's based on supervised learning and when I thought about this the first time", "start_timestamp": "00:09:18", "end_timestamp": "00:09:52", "start_second": 558, "end_second": 592, "url": "https://www.youtube.com/watch?v=oDKXwxaGkNA&t=558s", "title": "Self-Supervised Learning - Towards Autonomously Learning Machines\u2014Simon Stiebellehner", "thumbnail": "https://i.ytimg.com/vi/oDKXwxaGkNA/maxresdefault.jpg"} {"video_id": "oDKXwxaGkNA", "text": "I still found it a bit strange because well deep learning is based on artificial neural networks and neural networks as the name already says are inspired at least inspired by the human brain but we humans need so much less labeled data to learn a task highly accurately so where did we go wrong here what makes the difference and this brings us to the core of this talk self-supervised deep learning when we talk about self-supervised deep learning we first need to take a step back and think about the human brain we need to ask", "start_timestamp": "00:09:52", "end_timestamp": "00:10:31", "start_second": 592, "end_second": 631, "url": "https://www.youtube.com/watch?v=oDKXwxaGkNA&t=592s", "title": "Self-Supervised Learning - Towards 
Autonomously Learning Machines\u2014Simon Stiebellehner", "thumbnail": "https://i.ytimg.com/vi/oDKXwxaGkNA/maxresdefault.jpg"} {"video_id": "oDKXwxaGkNA", "text": "ourselves how do we humans actually learn do we do supervised learning well absolutely yes sometimes for example in school or at university you have an explicit supervisor telling you what's right and what's wrong what is a cat and what is a dog so yes we do but of course not always you don't always have a supervisor standing next to you telling you what's good and what's bad and even if we have a supervisor for example at school you don't need to be taught something 10,000 times before you understand how to do a task actually just a", "start_timestamp": "00:10:31", "end_timestamp": "00:11:07", "start_second": 631, "end_second": 667, "url": "https://www.youtube.com/watch?v=oDKXwxaGkNA&t=631s", "title": "Self-Supervised Learning - Towards Autonomously Learning Machines\u2014Simon Stiebellehner", "thumbnail": "https://i.ytimg.com/vi/oDKXwxaGkNA/maxresdefault.jpg"} {"video_id": "oDKXwxaGkNA", "text": "few examples suffice so again we humans are such efficient and effective learners our supervised learning seems to be highly different from the supervised learning we see in machine learning obviously well how about trial and error learning or reinforcement learning just to bring us on the same page what is reinforcement learning basically it's a subfield of machine learning where we try to teach an agent to learn a policy a behavior to solve a highly complex mostly sequential task such as driving a car in a simulation or playing chess and", "start_timestamp": "00:11:07", "end_timestamp": "00:11:41", "start_second": 667, "end_second": 701, "url": "https://www.youtube.com/watch?v=oDKXwxaGkNA&t=667s", "title": "Self-Supervised Learning - Towards Autonomously Learning Machines\u2014Simon Stiebellehner", "thumbnail": "https://i.ytimg.com/vi/oDKXwxaGkNA/maxresdefault.jpg"} {"video_id": "oDKXwxaGkNA", "text": 
"that agent learns that by let's say smart trial-and-error basically we humans we also do trial and error learning absolutely sometimes we try something we fail we try again and then do better hopefully but of course we don't trial and error everything for example you don't learn how to drive a car by trial and error you don't just drive around randomly until you manage to stay on the road right this is not a good strategy for learning how to drive actually what we do is quite different we get into a", "start_timestamp": "00:11:41", "end_timestamp": "00:12:10", "start_second": 701, "end_second": 730, "url": "https://www.youtube.com/watch?v=oDKXwxaGkNA&t=701s", "title": "Self-Supervised Learning - Towards Autonomously Learning Machines\u2014Simon Stiebellehner", "thumbnail": "https://i.ytimg.com/vi/oDKXwxaGkNA/maxresdefault.jpg"} {"video_id": "oDKXwxaGkNA", "text": "car the first time and after a couple of hours we are actually reasonable drivers we can at least follow traffic in a basic way and this is fascinating because if you want to teach a machine to follow real-world traffic in a reliable way that is a big big problem you need a lot not only of examples but also of engineering power behind it whereas for us humans it's so easy because we are such effective and efficient learners again so obviously also our trial and error learning that we humans do is fundamentally different", "start_timestamp": "00:12:10", "end_timestamp": "00:12:45", "start_second": 730, "end_second": 765, "url": "https://www.youtube.com/watch?v=oDKXwxaGkNA&t=730s", "title": "Self-Supervised Learning - Towards Autonomously Learning Machines\u2014Simon Stiebellehner", "thumbnail": "https://i.ytimg.com/vi/oDKXwxaGkNA/maxresdefault.jpg"} {"video_id": "oDKXwxaGkNA", "text": 
label data to learn a task incredibly well well this magic ingredient is something that we call having a general understanding of the world it's something that we all some also called common sense we humans we just know how things work right we just know how things work and we obtained this general understanding through observe raishin from the day that we are born", "start_timestamp": "00:12:45", "end_timestamp": "00:13:26", "start_second": 765, "end_second": 806, "url": "https://www.youtube.com/watch?v=oDKXwxaGkNA&t=765s", "title": "Self-Supervised Learning - Towards Autonomously Learning Machines\u2014Simon Stiebellehner", "thumbnail": "https://i.ytimg.com/vi/oDKXwxaGkNA/maxresdefault.jpg"} {"video_id": "oDKXwxaGkNA", "text": "until the day we die we humans we observe we observe what's happening around us with all our senses we smell we touch we see we hear and this continuous observation this taking the world around us as our supervisor this is actually what forms our general understanding and all this observation lets us understand the true meaning of things what does it mean if something is heavy what does it mean if something is hot what are the implications what are the consequences also we start to understand abstract concepts and concept", "start_timestamp": "00:13:26", "end_timestamp": "00:13:59", "start_second": 806, "end_second": 839, "url": "https://www.youtube.com/watch?v=oDKXwxaGkNA&t=806s", "title": "Self-Supervised Learning - Towards Autonomously Learning Machines\u2014Simon Stiebellehner", "thumbnail": "https://i.ytimg.com/vi/oDKXwxaGkNA/maxresdefault.jpg"} {"video_id": "oDKXwxaGkNA", "text": "relationships such as friendship for example and all these forms our general understanding and this general understanding makes us such effective and efficient learners because this means everything we learn we do not start from zero we always have a head start we always base everything we learn upon our understanding of how the world works and all our 
previously acquired knowledge and this is what makes us humans such effective and efficient learners well that's great for us humans right but this talk is not primarily", "start_timestamp": "00:13:59", "end_timestamp": "00:14:38", "start_second": 839, "end_second": 878, "url": "https://www.youtube.com/watch?v=oDKXwxaGkNA&t=839s", "title": "Self-Supervised Learning - Towards Autonomously Learning Machines\u2014Simon Stiebellehner", "thumbnail": "https://i.ytimg.com/vi/oDKXwxaGkNA/maxresdefault.jpg"} {"video_id": "oDKXwxaGkNA", "text": "about human intelligence but of course it's about artificial intelligence so the question remains how do we inject this general understanding into machines how do we make machines effective and efficient learners how do we allow machines not to need tens of thousands hundreds of thousands millions of labeled data points anymore and still achieve great maybe even human level performance well the answer is quite simple actually we just need to let them observe the world we just need to make the data their supervisor well but", "start_timestamp": "00:14:38", "end_timestamp": "00:15:16", "start_second": 878, "end_second": 916, "url": "https://www.youtube.com/watch?v=oDKXwxaGkNA&t=878s", "title": "Self-Supervised Learning - Towards Autonomously Learning Machines\u2014Simon Stiebellehner", "thumbnail": "https://i.ytimg.com/vi/oDKXwxaGkNA/maxresdefault.jpg"} {"video_id": "oDKXwxaGkNA", "text": "how do you do that how do you force a machine to observe the world that doesn't sound too easy well imagine you put up a video camera at this traffic junction and that video camera just continuously records what's happening in that traffic situation so you basically end up with an endless sequence of images and now imagine you have never seen a car before you have never seen a motorcycle before you have never seen traffic before and somebody gives you this video and you watch that video I promise after", "start_timestamp": "00:15:16", 
"end_timestamp": "00:15:55", "start_second": 916, "end_second": 955, "url": "https://www.youtube.com/watch?v=oDKXwxaGkNA&t=916s", "title": "Self-Supervised Learning - Towards Autonomously Learning Machines\u2014Simon Stiebellehner", "thumbnail": "https://i.ytimg.com/vi/oDKXwxaGkNA/maxresdefault.jpg"} {"video_id": "oDKXwxaGkNA", "text": "a sufficiently long amount of time you will have figured out what a car is what a motorcycle is and how traffic works when is a car allowed to go right when is it allowed to go left when does it need to stop and so on you have understood the concept of traffic by simple observation well that's how we humans do it but again how do we frame this as a machine learning problem now the good news is that supervised machine learning problems are always somewhat the same they always have this very basic structure that", "start_timestamp": "00:15:55", "end_timestamp": "00:16:25", "start_second": 955, "end_second": 985, "url": "https://www.youtube.com/watch?v=oDKXwxaGkNA&t=955s", "title": "Self-Supervised Learning - Towards Autonomously Learning Machines\u2014Simon Stiebellehner", "thumbnail": "https://i.ytimg.com/vi/oDKXwxaGkNA/maxresdefault.jpg"} {"video_id": "oDKXwxaGkNA", "text": "you can see here and I would say 95% of all supervised machine learning problems are framed just like that you have some kind of input which in our case of course is our video our sequence of images that goes into a model could be anything in our case let's say some type of neural network that model spits out a prediction and we then compare the prediction to the label to the truth based on the difference based on the error that our model makes we're going to take a step in our optimization procedure for example using gradient descent and the", "start_timestamp": "00:16:25", "end_timestamp": "00:16:57", "start_second": 985, "end_second": 1017, "url": "https://www.youtube.com/watch?v=oDKXwxaGkNA&t=985s", "title": "Self-Supervised Learning - 
Towards Autonomously Learning Machines\u2014Simon Stiebellehner", "thumbnail": "https://i.ytimg.com/vi/oDKXwxaGkNA/maxresdefault.jpg"} {"video_id": "oDKXwxaGkNA", "text": "next time our model is going to perform better that's how supervised learning works at the very foundations and it's quite clear what to use as input and also the model part is clear but what is our model gonna predict actually and what are the labels we don't have labels right we only have a video only a sequence of images so what should we actually teach our model to output and this is actually where self-supervision comes in and this is also where your creativity comes in you need to think about how do I shape and frame the data", "start_timestamp": "00:16:57", "end_timestamp": "00:17:30", "start_second": 1017, "end_second": 1050, "url": "https://www.youtube.com/watch?v=oDKXwxaGkNA&t=1017s", "title": "Self-Supervised Learning - Towards Autonomously Learning Machines\u2014Simon Stiebellehner", "thumbnail": "https://i.ytimg.com/vi/oDKXwxaGkNA/maxresdefault.jpg"} {"video_id": "oDKXwxaGkNA", "text": "the unlabeled data that I have to form a supervised learning problem from it one example would be well you could simply chop up your endless video your endless sequence of images into smaller sub sequences and always take the last image of such a sub sequence and use it as label and then teach your network to predict the future from the past you teach it to predict the end of that sequence from the beginning or the previous elements of that sequence of images your model is going to learn to predict the", "start_timestamp": "00:17:30", "end_timestamp": "00:18:04", "start_second": 1050, "end_second": 1084, "url": "https://www.youtube.com/watch?v=oDKXwxaGkNA&t=1050s", "title": "Self-Supervised Learning - Towards Autonomously Learning Machines\u2014Simon Stiebellehner", "thumbnail": "https://i.ytimg.com/vi/oDKXwxaGkNA/maxresdefault.jpg"} {"video_id": "oDKXwxaGkNA", "text": "future from the past it's 
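The generic supervised loop described here (input into a model, prediction compared to a label, an optimization step on the error) can be sketched in a few lines. The linear one-parameter model, toy data, and learning rate below are illustrative choices of mine, not anything from the talk:

```python
# Minimal supervised learning loop: prediction -> error -> gradient step.
data = [(x, 3.0 * x) for x in range(1, 6)]  # toy labeled data: y = 3x

w = 0.0    # single model parameter, "initialized" at zero
lr = 0.01  # learning rate for gradient descent

for _ in range(200):               # repeat: predict, compare, update
    for x, y in data:
        pred = w * x               # model output
        grad = 2 * (pred - y) * x  # gradient of squared error w.r.t. w
        w -= lr * grad             # optimization step

print(round(w, 3))  # converges toward 3.0
```

The same skeleton applies whether the model is one weight or a deep network; only the model, the loss, and the source of the labels change.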
going to learn when a car is going to move when it can drive forward when it has to stop when it's allowed to go left and when it's allowed to go right and we can do this in a fully automated fashion you know you can simply create a small Python program for example that chops up your video into small sequences always takes the last frame of such a sequence using it as label and then you train your network and you're gonna have an expert network when it comes to traffic basically and you can use this concept", "start_timestamp": "00:18:04", "end_timestamp": "00:18:34", "start_second": 1084, "end_second": 1114, "url": "https://www.youtube.com/watch?v=oDKXwxaGkNA&t=1084s", "title": "Self-Supervised Learning - Towards Autonomously Learning Machines\u2014Simon Stiebellehner", "thumbnail": "https://i.ytimg.com/vi/oDKXwxaGkNA/maxresdefault.jpg"} {"video_id": "oDKXwxaGkNA", "text": "of self supervision of automatically creating labels from unlabeled data also on other types of data it's very flexible for example you can use it on images you can just randomly crop rectangles out of images of let's say faces and teach your model to predict these rectangles or you can also use it on text you randomly remove words from text and use the surrounding words the context words to predict the target word the missing word that you removed well that's awesome but so what right what are you gonna do with a model that predicts", "start_timestamp": "00:18:34", "end_timestamp": "00:19:09", "start_second": 1114, "end_second": 1149, "url": "https://www.youtube.com/watch?v=oDKXwxaGkNA&t=1114s", "title": "Self-Supervised Learning - Towards Autonomously Learning Machines\u2014Simon Stiebellehner", "thumbnail": "https://i.ytimg.com/vi/oDKXwxaGkNA/maxresdefault.jpg"} {"video_id": "oDKXwxaGkNA", "text": "the missing rectangle in an image that's usually not what you want to solve and that's true so what do we actually gain from this what do we gain from a model that knows how to complete an 
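The "small Python program" that turns unlabeled sequences into supervised pairs can be sketched directly. This is my illustration of the two pretext tasks described here (last-frame prediction and masked-word prediction); the function names and toy data are hypothetical:

```python
# Turning unlabeled sequences into supervised (input, label) pairs automatically.

def frame_pairs(frames, context_len):
    """Chop a long sequence into sub-sequences; the last element becomes the label."""
    pairs = []
    for i in range(len(frames) - context_len):
        context = frames[i:i + context_len]  # the "past"
        target = frames[i + context_len]     # the "future" frame to predict
        pairs.append((context, target))
    return pairs

def masked_word_pairs(words):
    """Remove each word in turn; the surrounding context words become the input."""
    return [(words[:i] + words[i + 1:], words[i]) for i in range(len(words))]

video = ["f0", "f1", "f2", "f3", "f4"]
print(frame_pairs(video, 3))
# [(['f0', 'f1', 'f2'], 'f3'), (['f1', 'f2', 'f3'], 'f4')]

sentence = ["cars", "stop", "at", "red", "lights"]
print(masked_word_pairs(sentence)[3])
# (['cars', 'stop', 'at', 'lights'], 'red')
```

No human ever writes a label here; the labels are carved out of the raw data itself, which is exactly what makes the approach scale.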
incomplete face well what we gain from it is understanding that model by performing this task by learning this task builds up an understanding of the concept of a face it learns very low-level representations of the face and also high-level representations such as it's gonna learn that usually there is one nose in a face there are two ears in a face", "start_timestamp": "00:19:09", "end_timestamp": "00:19:42", "start_second": 1149, "end_second": 1182, "url": "https://www.youtube.com/watch?v=oDKXwxaGkNA&t=1149s", "title": "Self-Supervised Learning - Towards Autonomously Learning Machines\u2014Simon Stiebellehner", "thumbnail": "https://i.ytimg.com/vi/oDKXwxaGkNA/maxresdefault.jpg"} {"video_id": "oDKXwxaGkNA", "text": "these ears sit on the sides of your head and so on it will learn a general understanding of the concept of a face and you can use this understanding you can use this knowledge for any other task you really want to solve that is somewhat related what does it mean well first you do exactly that you randomly remove rectangles from images of faces and even if you have few images of faces you can remove rectangles in a variety of different ways right so you can actually end up with a large data set you train your", "start_timestamp": "00:19:42", "end_timestamp": "00:20:15", "start_second": 1182, "end_second": 1215, "url": "https://www.youtube.com/watch?v=oDKXwxaGkNA&t=1182s", "title": "Self-Supervised Learning - Towards Autonomously Learning Machines\u2014Simon Stiebellehner", "thumbnail": "https://i.ytimg.com/vi/oDKXwxaGkNA/maxresdefault.jpg"} {"video_id": "oDKXwxaGkNA", "text": "model to predict these missing rectangles your model will then have an understanding of the concept of a face then you take the same model you take the model with all of its knowledge and use it for whatever task you really want to solve for example for predicting the age based on images you just fine-tune it a bit maybe modify the architecture a bit no big 
modifications and your model is not going to give you the missing rectangle anymore but it's going to give you the age of the person that's visible on that image and all that by simply", "start_timestamp": "00:20:15", "end_timestamp": "00:20:46", "start_second": 1215, "end_second": 1246, "url": "https://www.youtube.com/watch?v=oDKXwxaGkNA&t=1215s", "title": "Self-Supervised Learning - Towards Autonomously Learning Machines\u2014Simon Stiebellehner", "thumbnail": "https://i.ytimg.com/vi/oDKXwxaGkNA/maxresdefault.jpg"} {"video_id": "oDKXwxaGkNA", "text": "using a model that has acquired general understanding through self supervision and then you just fine tune it with little labeled data for the task you really want to solve what you gain from that overall well first of all since you're first building up general understanding and then fine-tuning your model to the task you want to solve your model will need less labeled data to achieve better performance your model will converge faster because its weights are already in a somewhat optimal position and also your model is going to", "start_timestamp": "00:20:46", "end_timestamp": "00:21:17", "start_second": 1246, "end_second": 1277, "url": "https://www.youtube.com/watch?v=oDKXwxaGkNA&t=1246s", "title": "Self-Supervised Learning - Towards Autonomously Learning Machines\u2014Simon Stiebellehner", "thumbnail": "https://i.ytimg.com/vi/oDKXwxaGkNA/maxresdefault.jpg"} {"video_id": "oDKXwxaGkNA", "text": "be more robust because the features it learned these important characteristics were learned on a very large data set on your self-supervised data set and you are just fine-tuning on your downstream task where you potentially have very little labeled data this is actually really a win-win situation here but how does this work in practice is it really usable in practice well and this is why I brought this industry case with me which we did together with Upstream Mobility and our task at hand was estimating mode of 
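The pretrain-then-fine-tune recipe described here can be illustrated with a deliberately tiny toy: a one-parameter "backbone" learned on a self-supervised pretext task, then a one-parameter "head" fitted on just two labeled examples. Everything in this sketch (the doubling sequences, the downstream y = 4x task, the learning rates) is an assumption of mine, not the talk's actual model:

```python
# Pretrain a "backbone" on a self-supervised pretext task, then fine-tune a
# small "head" with very little labeled data.

# Stage 1: pretext task from unlabeled data -- predict the next element of
# sequences that double at every step, so the backbone learns "times 2".
unlabeled = [(x, 2.0 * x) for x in range(1, 20)]
a = 0.0                                      # backbone parameter
for _ in range(300):
    for x, nxt in unlabeled:
        a -= 0.0005 * 2 * (a * x - nxt) * x  # gradient step on squared error

# Stage 2: downstream task y = 4x, but only two labeled examples.
labeled = [(1.0, 4.0), (2.0, 8.0)]
h = 1.0                                      # head parameter, trained on top
for _ in range(300):
    for x, y in labeled:
        feat = a * x                         # frozen pretrained backbone
        h -= 0.05 * 2 * (h * feat - y) * feat

print(round(a, 2), round(h, 2))  # backbone near 2.0, head near 2.0
```

The head only has to learn the small remaining gap between the pretrained features and the downstream target, which is why two labeled points suffice here.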
transport from GPS", "start_timestamp": "00:21:17", "end_timestamp": "00:21:49", "start_second": 1277, "end_second": 1309, "url": "https://www.youtube.com/watch?v=oDKXwxaGkNA&t=1277s", "title": "Self-Supervised Learning - Towards Autonomously Learning Machines\u2014Simon Stiebellehner", "thumbnail": "https://i.ytimg.com/vi/oDKXwxaGkNA/maxresdefault.jpg"} {"video_id": "oDKXwxaGkNA", "text": "movement data well GPS movement data is quite easily described you just have sequences of GPS coordinates and an associated timestamp and basically what we got was a data set of these trips across Vienna and that person here started his trip at the green dot ended it at the red dot and every dot in between was a GPS signal our task was for each of these GPS signals to estimate which mode of transport he was taking was he going by bus was he taking the metro was he walking cycling going by car and so on actually", "start_timestamp": "00:21:49", "end_timestamp": "00:22:21", "start_second": 1309, "end_second": 1341, "url": "https://www.youtube.com/watch?v=oDKXwxaGkNA&t=1309s", "title": "Self-Supervised Learning - Towards Autonomously Learning Machines\u2014Simon Stiebellehner", "thumbnail": "https://i.ytimg.com/vi/oDKXwxaGkNA/maxresdefault.jpg"} {"video_id": "oDKXwxaGkNA", "text": "quite hard quite complex given the little information so the first thing we asked ourselves here was how would we humans solve this problem well what I would do is I would go on Google Maps and check out where transport lines are running right because if a GPS signal is nowhere close to a metro line probably he was not going by metro and that's what we did we enriched this map of Vienna by adding image channels and each image channel represents a different transport network one represents the metro network one the bus network and so", "start_timestamp": "00:22:21", "end_timestamp": "00:22:53", "start_second": 1341, "end_second": 1373, "url": "https://www.youtube.com/watch?v=oDKXwxaGkNA&t=1341s", 
"title": "Self-Supervised Learning - Towards Autonomously Learning Machines\u2014Simon Stiebellehner", "thumbnail": "https://i.ytimg.com/vi/oDKXwxaGkNA/maxresdefault.jpg"} {"video_id": "oDKXwxaGkNA", "text": "on this is awful for humans right this produces an image you cannot interpret but it is great when it comes to deep learning well and then we used that information to embark on building a supervised deep learning model with no self supervision at this point the way we framed this problem was well we framed it as a sequence learning problem what does this mean so basically our hypothesis was the movement history of a person provides us with important information also for estimating his current mode of transport", "start_timestamp": "00:22:53", "end_timestamp": "00:23:25", "start_second": 1373, "end_second": 1405, "url": "https://www.youtube.com/watch?v=oDKXwxaGkNA&t=1373s", "title": "Self-Supervised Learning - Towards Autonomously Learning Machines\u2014Simon Stiebellehner", "thumbnail": "https://i.ytimg.com/vi/oDKXwxaGkNA/maxresdefault.jpg"} {"video_id": "oDKXwxaGkNA", "text": "because obviously if a person has been on the metro for the last five GPS signals because his GPS signals were just on top of that metro line and his current GPS signal is again on top of that metro line it is quite likely that he is still on that metro because changing modes of transport is actually associated with costs in terms of time effort and maybe even money so our hypothesis was to use the previous movement history to estimate the current mode of transport so again how do we frame this as a machine learning", "start_timestamp": "00:23:25", "end_timestamp": "00:23:56", "start_second": 1405, "end_second": 1436, "url": "https://www.youtube.com/watch?v=oDKXwxaGkNA&t=1405s", "title": "Self-Supervised Learning - Towards Autonomously Learning Machines\u2014Simon Stiebellehner", "thumbnail": "https://i.ytimg.com/vi/oDKXwxaGkNA/maxresdefault.jpg"} {"video_id": 
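The map-enrichment idea described here (one image channel per transport network, stacked into a map that is unreadable for humans but convenient for a network) can be sketched as rasterizing each network into a binary mask. The toy 4x4 grid, line coordinates, and function name below are illustrative assumptions:

```python
# One binary channel per transport network, stacked into a multi-channel map.

size = 4
metro = [(0, 0), (1, 1), (2, 2), (3, 3)]  # toy metro line (a diagonal)
bus = [(0, 1), (1, 1), (2, 1), (3, 1)]    # toy bus line (a column)

def channel(cells, size):
    """Rasterize a list of line cells into a binary 2D mask."""
    mask = [[0] * size for _ in range(size)]
    for r, c in cells:
        mask[r][c] = 1
    return mask

# The channel-stacked map: each key is one transport network layer.
city = {"metro": channel(metro, size), "bus": channel(bus, size)}

print(city["metro"][2][2], city["bus"][2][1])  # 1 1 -> both networks present
print(city["metro"][0][1])                     # 0 -> no metro at that cell
```

A GPS point far from every `1` in the metro channel is unlikely to be a metro trip, which is exactly the human Google-Maps intuition encoded as data.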
"oDKXwxaGkNA", "text": "problem we have a big big image with dots on it and then we want to incorporate the sequence of dots to make our estimation so you can do this in a variety of ways of course the way we did it was for each of these dots we want to incorporate for making our prediction we cropped out a small rectangle of the image around it so we end up with a sequence of image tiles each of these image tiles representing the path in total then goes into a convolutional neural network a so-called CNN which is a type of neural network", "start_timestamp": "00:23:56", "end_timestamp": "00:24:29", "start_second": 1436, "end_second": 1469, "url": "https://www.youtube.com/watch?v=oDKXwxaGkNA&t=1436s", "title": "Self-Supervised Learning - Towards Autonomously Learning Machines\u2014Simon Stiebellehner", "thumbnail": "https://i.ytimg.com/vi/oDKXwxaGkNA/maxresdefault.jpg"} {"video_id": "oDKXwxaGkNA", "text": "that is very good at extracting important information from images that CNN then produces what that CNN thinks is important for making the modality estimation so called feature vectors it learns feature vectors which is simply a numeric dense vector representation of that image and of what is important in that small image tile in that sequence of images the feature vectors then go into a recurrent neural network a so-called RNN an RNN again is a type of neural network just a very different type which is great for modeling", "start_timestamp": "00:24:29", "end_timestamp": "00:25:01", "start_second": 1469, "end_second": 1501, "url": "https://www.youtube.com/watch?v=oDKXwxaGkNA&t=1469s", "title": "Self-Supervised Learning - Towards Autonomously Learning Machines\u2014Simon Stiebellehner", "thumbnail": "https://i.ytimg.com/vi/oDKXwxaGkNA/maxresdefault.jpg"} {"video_id": "oDKXwxaGkNA", "text": "sequences for modeling temporal dependencies between elements of a sequence and since we are dealing with a sequence of GPS coordinates here or a sequence of
image tiles are actually now a sequence of learned image features an RNN is an obvious choice here and the RNN then finally gives you in the end which modality the person was likely to be taking was he walking going by bus taking the metro and so on and what you can see here all that was basically one single deep learning model and it worked out reasonably well", "start_timestamp": "00:25:01", "end_timestamp": "00:25:35", "start_second": 1501, "end_second": 1535, "url": "https://www.youtube.com/watch?v=oDKXwxaGkNA&t=1501s", "title": "Self-Supervised Learning - Towards Autonomously Learning Machines\u2014Simon Stiebellehner", "thumbnail": "https://i.ytimg.com/vi/oDKXwxaGkNA/maxresdefault.jpg"} {"video_id": "oDKXwxaGkNA", "text": "actually surprisingly well however you can imagine that this was not a small model actually we were dealing with I think a bit more than a hundred thousand parameters in that model which is ok if you have sufficient labeled data and you can guess well we didn't of course we were dealing with rather little real-world data here which is a problem if you use a large network because the network is likely to overfit and so on so we thought about well how about self supervised deep learning this is actually a", "start_timestamp": "00:25:35", "end_timestamp": "00:26:06", "start_second": 1535, "end_second": 1566, "url": "https://www.youtube.com/watch?v=oDKXwxaGkNA&t=1535s", "title": "Self-Supervised Learning - Towards Autonomously Learning Machines\u2014Simon Stiebellehner", "thumbnail": "https://i.ytimg.com/vi/oDKXwxaGkNA/maxresdefault.jpg"} {"video_id": "oDKXwxaGkNA", "text": "very good case for that how about first letting the model develop general understanding and then taking this model with all of its understanding and using it for solving our very specific modality estimation task and that's how we came up with that concept you could for example teach your model the entire Vienna transport network in a
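The data-preparation step the speaker describes (crop a small rectangle around each GPS dot of a trip, producing the sequence of image tiles that feeds the CNN, whose feature vectors the RNN then consumes) can be sketched in a few lines. This is a minimal illustration under assumed shapes — a rasterized multi-channel city map and GPS points already converted to pixel positions; the function name `crop_tiles` and the toy data are ours, not from the talk.

```python
import numpy as np

def crop_tiles(map_channels, points, size=32):
    """Crop a (size x size) window around each GPS dot.

    map_channels: (H, W, C) array; each channel is one transport
    network rasterized onto the city map (metro, bus, ...).
    points: list of (row, col) pixel positions of the GPS signals.
    Returns shape (len(points), size, size, C) -- the sequence of
    image tiles fed tile-by-tile to a CNN in the talk's pipeline.
    """
    half = size // 2
    # pad the map so tiles near the border stay full-sized
    padded = np.pad(map_channels, ((half, half), (half, half), (0, 0)))
    tiles = [padded[r:r + size, c:c + size, :] for r, c in points]
    return np.stack(tiles)

# toy map: 100x100 city with 2 transport channels (e.g. metro, bus)
city = np.zeros((100, 100, 2))
trip = [(10, 10), (12, 11), (15, 13)]   # one person's GPS signals, as pixels
tiles = crop_tiles(city, trip)
print(tiles.shape)  # (3, 32, 32, 2)
```

Each tile in the returned sequence would then be encoded by a CNN and the resulting feature vectors passed in order to an RNN, mirroring the architecture described above.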
self supervised pre training stage just as we did before then take that model with all of its knowledge and fine-tune it to make modality estimation happen sounds", "start_timestamp": "00:26:06", "end_timestamp": "00:26:38", "start_second": 1566, "end_second": 1598, "url": "https://www.youtube.com/watch?v=oDKXwxaGkNA&t=1566s", "title": "Self-Supervised Learning - Towards Autonomously Learning Machines\u2014Simon Stiebellehner", "thumbnail": "https://i.ytimg.com/vi/oDKXwxaGkNA/maxresdefault.jpg"} {"video_id": "oDKXwxaGkNA", "text": "good but how do you teach a model the Vienna transport network and this is again where it comes to self supervision where it comes to creativity how can you frame a problem from the data you have to make this happen one way is to for example randomly sample 32 by 32 pixels again these small image tiles from a map of Vienna which is publicly available of course and then you just automatically check which line of transport is running on that tile all this information is publicly available again you take that as labels", "start_timestamp": "00:26:38", "end_timestamp": "00:27:12", "start_second": 1598, "end_second": 1632, "url": "https://www.youtube.com/watch?v=oDKXwxaGkNA&t=1598s", "title": "Self-Supervised Learning - Towards Autonomously Learning Machines\u2014Simon Stiebellehner", "thumbnail": "https://i.ytimg.com/vi/oDKXwxaGkNA/maxresdefault.jpg"} {"video_id": "oDKXwxaGkNA", "text": "and you teach a model to predict which line of transport is running on a given image tile your model is going to learn the Vienna transport network it's going to be an absolute expert and you can then take this model you can take this CNN that knows everything about the Vienna transport network and put it back into our original architecture replacing the CNN from before the CNN that had to be trained from scratch our model now does not need to be trained from scratch it has a big big head start it has quite a general understanding so it just needs",
"start_timestamp": "00:27:12", "end_timestamp": "00:27:42", "start_second": 1632, "end_second": 1662, "url": "https://www.youtube.com/watch?v=oDKXwxaGkNA&t=1632s", "title": "Self-Supervised Learning - Towards Autonomously Learning Machines\u2014Simon Stiebellehner", "thumbnail": "https://i.ytimg.com/vi/oDKXwxaGkNA/maxresdefault.jpg"} {"video_id": "oDKXwxaGkNA", "text": "to be fine-tuned a bit and that CNN then passes on high-quality feature vectors again to the RNN which learns the temporal dependencies and outputs the predictions what you gain from this is you have a significantly smaller model because you do not need to train the CNN from scratch this means you can have many of the parameters frozen and you end up with a model with trainable parameters going from a hundred thousand before to tens of thousands now which is a lot better suited for little labelled data so again", "start_timestamp": "00:27:42", "end_timestamp": "00:28:15", "start_second": 1662, "end_second": 1695, "url": "https://www.youtube.com/watch?v=oDKXwxaGkNA&t=1662s", "title": "Self-Supervised Learning - Towards Autonomously Learning Machines\u2014Simon Stiebellehner", "thumbnail": "https://i.ytimg.com/vi/oDKXwxaGkNA/maxresdefault.jpg"} {"video_id": "oDKXwxaGkNA", "text": "what you're going to get is your model will be able to deal with little labelled data better it's gonna be more robust it's gonna converge faster because you simply have to optimize fewer parameters here and overall it's gonna make more accurate predictions given the same labelled data we have come a pretty long way now all the way from supervised deep learning why it's so great so powerful but also we got to know its ugly side we got to know that it just needs tons of labelled data to achieve this famous performance subsequently we found out", "start_timestamp": "00:28:15", "end_timestamp": "00:28:44", "start_second": 1695, "end_second": 1724, "url": "https://www.youtube.com/watch?v=oDKXwxaGkNA&t=1695s", "title":
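The pretext task described above — randomly sample 32 by 32 tiles from the public map of Vienna and label them automatically with the transport lines that run on them, with no human labeling — can be sketched as follows. The toy map, the multi-hot labeling scheme, and the function name are illustrative assumptions, not details from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy map of a city: channel 0 = metro lines, channel 1 = bus lines
city = np.zeros((100, 100, 2))
city[50, :, 0] = 1.0   # a horizontal metro line
city[:, 20, 1] = 1.0   # a vertical bus line

def sample_pretext_example(map_channels, size=32):
    """Create one (tile, label) pair with zero human labeling:
    crop a random tile and check which transport networks run on it."""
    h, w, c = map_channels.shape
    r = rng.integers(0, h - size)
    col = rng.integers(0, w - size)
    tile = map_channels[r:r + size, col:col + size, :]
    # multi-hot label: does channel k contain any line pixels?
    label = (tile.reshape(-1, c).sum(axis=0) > 0).astype(int)
    return tile, label

tile, label = sample_pretext_example(city)
print(tile.shape, label.shape)  # (32, 32, 2) (2,)
```

A CNN pre-trained to predict these automatic labels could then be dropped into the downstream architecture with most of its parameters frozen, which is the head start and the reduction in trainable parameters the speaker describes.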
"Self-Supervised Learning - Towards Autonomously Learning Machines\u2014Simon Stiebellehner", "thumbnail": "https://i.ytimg.com/vi/oDKXwxaGkNA/maxresdefault.jpg"} {"video_id": "oDKXwxaGkNA", "text": "how taking a self supervised approach can help us overcome that key problem of supervised learning and finally we walked through the industry case that hopefully gave you a pretty good practical idea of how you can apply self supervision to your problems to make your models work better on little labelled data I hope you enjoyed the talk feel free to ask any question you might have in case we are out of time for questions I will be here after the talk as well so feel free to approach me and ask any question in private that may have been", "start_timestamp": "00:28:44", "end_timestamp": "00:29:14", "start_second": 1724, "end_second": 1754, "url": "https://www.youtube.com/watch?v=oDKXwxaGkNA&t=1724s", "title": "Self-Supervised Learning - Towards Autonomously Learning Machines\u2014Simon Stiebellehner", "thumbnail": "https://i.ytimg.com/vi/oDKXwxaGkNA/maxresdefault.jpg"} {"video_id": "oDKXwxaGkNA", "text": "left unanswered also of course drop me an email and add me on LinkedIn I'm always happy for interesting messages I get and most of all visit our website craftworks.ai thank you very much okay um let's see I think we maybe have time for one question would you like to choose which one you think would be the best one to answer okay I think the second one what is the main difference between supervised and self supervised learning this is an important question because it's very fundamental", "start_timestamp": "00:29:14", "end_timestamp": "00:29:54", "start_second": 1754, "end_second": 1794, "url": "https://www.youtube.com/watch?v=oDKXwxaGkNA&t=1754s", "title": "Self-Supervised Learning - Towards Autonomously Learning Machines\u2014Simon Stiebellehner", "thumbnail": "https://i.ytimg.com/vi/oDKXwxaGkNA/maxresdefault.jpg"} {"video_id": "oDKXwxaGkNA",
"text": "so self supervised learning is supervised learning the concept behind self supervision is that you do not have a labeled dataset but you come up with a smart task for how you can create labels from a previously unlabeled data set not needing any human labeling this is the concept of self supervision you use the data that you have that video that we had before you do not have any labels you cannot just apply supervised learning because you don't have labels but you need to implement a self supervised task where you first", "start_timestamp": "00:29:54", "end_timestamp": "00:30:29", "start_second": 1794, "end_second": 1829, "url": "https://www.youtube.com/watch?v=oDKXwxaGkNA&t=1794s", "title": "Self-Supervised Learning - Towards Autonomously Learning Machines\u2014Simon Stiebellehner", "thumbnail": "https://i.ytimg.com/vi/oDKXwxaGkNA/maxresdefault.jpg"} {"video_id": "7agK0nkiZpA", "text": "well I'm honored to be here and my hat is my favorite hat I think it makes me look rather handsome this hat is made from a mushroom called amadou amadou is a birch polypore and a hardwood conk and this mushroom is responsible for human survival not too long ago there's no doubt that we all came from Africa we went north we discovered something new called winter oops this mushroom allowed for the portability of fire moreover you can hollow this mushroom out put embers of fire inside and carry fire for days and", "start_timestamp": "00:00:00", "end_timestamp": "00:00:54", "start_second": 0, "end_second": 54, "url": "https://www.youtube.com/watch?v=7agK0nkiZpA&t=0s", "title": "Mushrooms as Medicine with Paul Stamets at Exponential Medicine", "thumbnail": "https://i.ytimg.com/vi/7agK0nkiZpA/hqdefault.jpg"} {"video_id": "7agK0nkiZpA", "text": "the fire keepers of our clans thousands of years ago were absolutely critical for the clan's survival well this mushroom has other properties and when you boil this mushroom it delaminates and becomes mycelium a fabric and since
some ladies in Transylvania have kept this tradition alive so this thread of knowledge has carried forth over thousands of years and so many threads of knowledge have been interrupted because of famine disease and war well this mushroom is first described by Hippocrates in 450 BCE as an", "start_timestamp": "00:00:54", "end_timestamp": "00:01:28", "start_second": 54, "end_second": 88, "url": "https://www.youtube.com/watch?v=7agK0nkiZpA&t=54s", "title": "Mushrooms as Medicine with Paul Stamets at Exponential Medicine", "thumbnail": "https://i.ytimg.com/vi/7agK0nkiZpA/hqdefault.jpg"} {"video_id": "7agK0nkiZpA", "text": "anti-inflammatory as well as for cauterizing wounds another mushroom I brought a mushroom friend of mine also is a polypore wood conk and this is agarikon agarikon is the longest living mushroom in the world grows exclusively in the old-growth forests now presently only known from Northern California Oregon Washington and British Columbia and a sky island or two in Central Europe it was described by Dioscorides in the very first materia medica as elixirium ad longam vitam the elixir of long life and it was suggested thousands of years", "start_timestamp": "00:01:28", "end_timestamp": "00:02:04", "start_second": 88, "end_second": 124, "url": "https://www.youtube.com/watch?v=7agK0nkiZpA&t=88s", "title": "Mushrooms as Medicine with Paul Stamets at Exponential Medicine", "thumbnail": "https://i.ytimg.com/vi/7agK0nkiZpA/hqdefault.jpg"} {"video_id": "7agK0nkiZpA", "text": "ago as a treatment against consumption later to be known as tuberculosis so I'm going to take you on a journey and I'm going to take a radical left turn halfway through this talk and I'm going to present some data that has never been shown to anyone else outside of my research team so I am honored to be representing AAAS on June 9th this year I was awarded as the invention ambassador this is great it's like a first audience that knows what AAAS is so I don't have to
explain that but it's a huge honor and I grew up", "start_timestamp": "00:02:04", "end_timestamp": "00:02:39", "start_second": 124, "end_second": 159, "url": "https://www.youtube.com/watch?v=7agK0nkiZpA&t=124s", "title": "Mushrooms as Medicine with Paul Stamets at Exponential Medicine", "thumbnail": "https://i.ytimg.com/vi/7agK0nkiZpA/hqdefault.jpg"} {"video_id": "7agK0nkiZpA", "text": "in a small town in Ohio and my brother John got me into science and he went on to Yale my brother Bill went on to Cornell and we had this incredible laboratory in the basement which they would not let me have access to but they went off to college and I suddenly had this fully equipped laboratory including the radio from the aircraft carrier the Intrepid my father was on it and after World War II he got the radio so I was listening to all sorts of things behind the Iron Curtain I was just having a fabulous time so my dream was", "start_timestamp": "00:02:39", "end_timestamp": "00:03:09", "start_second": 159, "end_second": 189, "url": "https://www.youtube.com/watch?v=7agK0nkiZpA&t=159s", "title": "Mushrooms as Medicine with Paul Stamets at Exponential Medicine", "thumbnail": "https://i.ytimg.com/vi/7agK0nkiZpA/hqdefault.jpg"} {"video_id": "7agK0nkiZpA", "text": "always to live in the country and be a scientist and have my own scientific laboratory well on June 9th I got that award my brother John we were competitive and there's you know like brothers are you love him 80% of the time and 20% of the time they kind of piss you off and so John really never respected you know my interest in mycology what is this mushroom stuff but so when AAAS gave me this award it was highly vetted and I said wow you know this is exciting I can tell my older brother now John I got", "start_timestamp": "00:03:09", "end_timestamp": "00:03:37", "start_second": 189, "end_second": 217, "url": "https://www.youtube.com/watch?v=7agK0nkiZpA&t=189s", "title": "Mushrooms as Medicine
with Paul Stamets at Exponential Medicine", "thumbnail": "https://i.ytimg.com/vi/7agK0nkiZpA/hqdefault.jpg"} {"video_id": "7agK0nkiZpA", "text": "this award so I called him he didn't answer and then I emailed him and that was the day they discovered his body John had died from cardiac arrest standing up and I just want to tell all of you you have brothers and sisters that bug you you know think about the good times and how our life is so precious and so short so this talk is dedicated to my brother John who first got me into science so my main theme is biodiversity as biosecurity I live in Washington State in the southern regions of the", "start_timestamp": "00:03:37", "end_timestamp": "00:04:17", "start_second": 217, "end_second": 257, "url": "https://www.youtube.com/watch?v=7agK0nkiZpA&t=217s", "title": "Mushrooms as Medicine with Paul Stamets at Exponential Medicine", "thumbnail": "https://i.ytimg.com/vi/7agK0nkiZpA/hqdefault.jpg"} {"video_id": "7agK0nkiZpA", "text": "Puget Sound and I want to point out the largest organism in the world is a mycelial mat 2,200 acres in size over two thousand years old and it's one cell wall thick surrounded by hundreds of millions of microbes per gram of soil we have several skin layers to protect us from infection the mycelium has one and yet achieves the largest mass of any organism in the world how is that possible well it's possible because it is endowed with its own microbiome it selects beneficial bacteria that it works in concert with and the mycelium", "start_timestamp": "00:04:17", "end_timestamp": "00:04:52", "start_second": 257, "end_second": 292, "url": "https://www.youtube.com/watch?v=7agK0nkiZpA&t=257s", "title": "Mushrooms as Medicine with Paul Stamets at Exponential Medicine", "thumbnail": "https://i.ytimg.com/vi/7agK0nkiZpA/hqdefault.jpg"} {"video_id": "7agK0nkiZpA", "text": "is based on a network like design the mycelium digests nutrients externally we share a more recent common ancestor with
fungi than we do with any other Kingdom six hundred and fifty million years ago we split from fungi and there is an announcement a new super Kingdom it's been published called Opisthokonta that joins Animalia and fungi together we exhale carbon dioxide we inhale oxygen and the fungi are able to stream nuclei to their tips and because of epigenesis the ability to adapt to change this is one of the few organisms", "start_timestamp": "00:04:52", "end_timestamp": "00:05:26", "start_second": 292, "end_second": 326, "url": "https://www.youtube.com/watch?v=7agK0nkiZpA&t=292s", "title": "Mushrooms as Medicine with Paul Stamets at Exponential Medicine", "thumbnail": "https://i.ytimg.com/vi/7agK0nkiZpA/hqdefault.jpg"} {"video_id": "7agK0nkiZpA", "text": "that actually benefits from disruption and so when these mats are disrupted the streaming of nuclei epigenesis comes into play reassortment of nuclei at the end tips it codes for new genes for new enzymes acids to capture new food and then the information becomes back channeled into the mycelial network so using the epigenetic properties of mycelium I think is a way of the future of medicine so the mycelium is a lot more pervasive than most people realize virtually 90% of all plants have mycorrhizal fungi it has now been", "start_timestamp": "00:05:26", "end_timestamp": "00:06:00", "start_second": 326, "end_second": 360, "url": "https://www.youtube.com/watch?v=7agK0nkiZpA&t=326s", "title": "Mushrooms as Medicine with Paul Stamets at Exponential Medicine", "thumbnail": "https://i.ytimg.com/vi/7agK0nkiZpA/hqdefault.jpg"} {"video_id": "7agK0nkiZpA", "text": "determined that these microbial networks and the mycelium communicate across landscapes in between plants and indeed all plants are part fungi so any research on botanical medicine the contribution of the endophytic fungi that are associated inside these plants needs to be taken into account because the conferring medicinal properties may well be coming from the
endophytic fungi as opposed to the plant by itself these mycelial networks stream across landscapes and I have these epiphanies and I believe habitats have immune", "start_timestamp": "00:06:00", "end_timestamp": "00:06:33", "start_second": 360, "end_second": 393, "url": "https://www.youtube.com/watch?v=7agK0nkiZpA&t=360s", "title": "Mushrooms as Medicine with Paul Stamets at Exponential Medicine", "thumbnail": "https://i.ytimg.com/vi/7agK0nkiZpA/hqdefault.jpg"} {"video_id": "7agK0nkiZpA", "text": "systems and the mycelial networks are the foundation of the food web that's joining us all together now here is something I grow lots of mycelium twenty to thirty thousand kilos a week we have a small company sixty seven employees and this is frankly just not fair that I can tell you this in fifteen seconds which took me thirty years to discover the problem with mycelium grown in laboratories is it's immunologically naive it's grown in pure culture when you throw it out into the ground all these organisms consume it", "start_timestamp": "00:06:33", "end_timestamp": "00:07:03", "start_second": 393, "end_second": 423, "url": "https://www.youtube.com/watch?v=7agK0nkiZpA&t=393s", "title": "Mushrooms as Medicine with Paul Stamets at Exponential Medicine", "thumbnail": "https://i.ytimg.com/vi/7agK0nkiZpA/hqdefault.jpg"} {"video_id": "7agK0nkiZpA", "text": "well we've soaked woodchips or straw under water salt water or fresh water for two weeks anaerobic microbes become predominant then we take this out and we drain off the water and then the oxygen becomes a sterilizer the anaerobes are largely killed only the aerobes are in there and then the mycelium becomes immunologically educated this is a profoundly powerful mycelium it's got an immune system and resident within this mycelium are enormous amounts of bacteria we did next-gen sequencing here and this is a color heat", "start_timestamp": "00:07:03", "end_timestamp": "00:07:36",
"start_second": 423, "end_second": 456, "url": "https://www.youtube.com/watch?v=7agK0nkiZpA&t=423s", "title": "Mushrooms as Medicine with Paul Stamets at Exponential Medicine", "thumbnail": "https://i.ytimg.com/vi/7agK0nkiZpA/hqdefault.jpg"} {"video_id": "7agK0nkiZpA", "text": "map a thousand fold difference in the relative abundance of different genera of bacteria different mushroom species selected out whole different constellations of bacteria this enables the mycelium to set up guilds and have commensal mutualistic organisms that it can combine with that allows it to conquer such large habitats well we all know that we have cancer 41% of us will get cancer 21% of us will die from it but did you know still that 73% of all anticancer drugs have their origins in natural products we grow", "start_timestamp": "00:07:36", "end_timestamp": "00:08:08", "start_second": 456, "end_second": 488, "url": "https://www.youtube.com/watch?v=7agK0nkiZpA&t=456s", "title": "Mushrooms as Medicine with Paul Stamets at Exponential Medicine", "thumbnail": "https://i.ytimg.com/vi/7agK0nkiZpA/hqdefault.jpg"} {"video_id": "7agK0nkiZpA", "text": "about 500 different species turkey tails featured here are among the best described and studied medicinal mushrooms in the world we received a 2.2 million dollar breast cancer clinical grant from the NIH for phase one breast cancer and the results of the studies have been published and on a dose-dependent basis well prior to radiation your immune system is as active and then when the turkey tail mushrooms eight capsules per day are consumed there is an upregulation of natural killer cells and then post radiation most of you know the immune", "start_timestamp": "00:08:08", "end_timestamp": "00:08:43", "start_second": 488, "end_second": 523, "url": "https://www.youtube.com/watch?v=7agK0nkiZpA&t=488s", "title": "Mushrooms as Medicine with Paul Stamets at Exponential Medicine", "thumbnail":
"https://i.ytimg.com/vi/7agK0nkiZpA/hqdefault.jpg"} {"video_id": "7agK0nkiZpA", "text": "system is damaged and then it has to recover and then on a dose-dependent basis two weeks and then four weeks the immune system kicks into gear natural killer cells are enhanced dramatically and also cytotoxic T cells look at the significance value here and so the immune system is activated by the consumption of these mushrooms and there's TLR4 receptors I don't want to get into that right now but we've identified seven different distinct pathways of the immune system activated by the consumption of these mushrooms", "start_timestamp": "00:08:43", "end_timestamp": "00:09:15", "start_second": 523, "end_second": 555, "url": "https://www.youtube.com/watch?v=7agK0nkiZpA&t=523s", "title": "Mushrooms as Medicine with Paul Stamets at Exponential Medicine", "thumbnail": "https://i.ytimg.com/vi/7agK0nkiZpA/hqdefault.jpg"} {"video_id": "7agK0nkiZpA", "text": "now this became deeply personal to me in june 2009 when my 83 year old mother called me up she's a charismatic Christian she's not seen a doctor since 1968 she called me and said
Paul I'm scared I didn't recognize her voice she was shaking I said what's wrong and she said my right breast is five times the size of my left I have six angry lymph nodes dark and swollen on my right side I said I couldn't believe it why didn't you tell me sooner and so I rushed her to the Swedish breast cancer clinic in Seattle and then we got", "start_timestamp": "00:09:15", "end_timestamp": "00:09:46", "start_second": 555, "end_second": 586, "url": "https://www.youtube.com/watch?v=7agK0nkiZpA&t=555s", "title": "Mushrooms as Medicine with Paul Stamets at Exponential Medicine", "thumbnail": "https://i.ytimg.com/vi/7agK0nkiZpA/hqdefault.jpg"} {"video_id": "7agK0nkiZpA", "text": "the worst news after the second visit the oncologist said she should have been seen two years earlier the cancer is inoperable they could not do a mastectomy because of her age they couldn't give her radiation therapy for the same reason because of the likelihood of infection and so the oncologist tried to make the best of it saying you'll live a long life and we kept on asking how long how long how long and she said you'd be lucky if you had three months the tumor was erupting out of her breast crossed the meridian invaded her", "start_timestamp": "00:09:46", "end_timestamp": "00:10:15", "start_second": 586, "end_second": 615, "url": "https://www.youtube.com/watch?v=7agK0nkiZpA&t=586s", "title": "Mushrooms as Medicine with Paul Stamets at Exponential Medicine", "thumbnail": "https://i.ytimg.com/vi/7agK0nkiZpA/hqdefault.jpg"} {"video_id": "7agK0nkiZpA", "text": "sternum and then it went into her liver so we had the circle meeting many of you have had this we planned for her funeral she chose a pink dress she bought the cheapest coffin that she could find because she was going to Jesus there's a lot of tears and then on the third visit the oncologist said you know if your immune system could kick in Patty you might be able to beat this and so she said you know there's this turkey tail mushroom study
that's underway at the University of Minnesota Medical school you might want to", "start_timestamp": "00:10:15", "end_timestamp": "00:10:43", "start_second": 615, "end_second": 643, "url": "https://www.youtube.com/watch?v=7agK0nkiZpA&t=615s", "title": "Mushrooms as Medicine with Paul Stamets at Exponential Medicine", "thumbnail": "https://i.ytimg.com/vi/7agK0nkiZpA/hqdefault.jpg"} {"video_id": "7agK0nkiZpA", "text": "start taking turkey tail mushrooms well my mother said well that's what my son was talking about but she had to hear it from a doctor right so my mother started taking turkey tail she was on Taxol briefly had a horrific reaction refused to take it and she was then taking Herceptin a wonderful drug well that was in June of 2009 and I'm happy to say my mother she crossed the five year disease free period she's totally cancer-free this then led to a study saying well maybe turkey tail mushrooms can enhance", "start_timestamp": "00:10:43", "end_timestamp": "00:11:32", "start_second": 643, "end_second": 692, "url": "https://www.youtube.com/watch?v=7agK0nkiZpA&t=643s", "title": "Mushrooms as Medicine with Paul Stamets at Exponential Medicine", "thumbnail": "https://i.ytimg.com/vi/7agK0nkiZpA/hqdefault.jpg"} {"video_id": "7agK0nkiZpA", "text": "Herceptin now the good news is that my mother survived but then she told me something her oncologist told her of the 50 women who joined that Herceptin program in Ellensburg Washington where she enlisted of the 50 women 48 of them have died my mother was the only one taking turkey tail with Herceptin so this is interesting on multiple levels she's been written up as a best case outcome in several medical journals she had no chemo brain no nausea no loss of appetite so she's happy and her", "start_timestamp": "00:11:32", "end_timestamp": "00:12:08", "start_second": 692, "end_second": 728, "url": "https://www.youtube.com/watch?v=7agK0nkiZpA&t=692s",
"title": "Mushrooms as Medicine with Paul Stamets at Exponential Medicine", "thumbnail": "https://i.ytimg.com/vi/7agK0nkiZpA/hqdefault.jpg"} {"video_id": "7agK0nkiZpA", "text": "acumen has come back she's smarter now and more quick with her wit than I've ever seen so then a series of other articles came out this past year turkey tail enhances the microbiome specifically of lactobacillus and Bifidobacterium while suppressing inflammatory bacteria so this is extremely interesting because this speaks to the fact that when we grow the mushroom mycelium in pure culture we do see a resident mutualistic population of bacteria which we at first thought were contaminants but now we understand", "start_timestamp": "00:12:08", "end_timestamp": "00:12:38", "start_second": 728, "end_second": 758, "url": "https://www.youtube.com/watch?v=7agK0nkiZpA&t=728s", "title": "Mushrooms as Medicine with Paul Stamets at Exponential Medicine", "thumbnail": "https://i.ytimg.com/vi/7agK0nkiZpA/hqdefault.jpg"} {"video_id": "7agK0nkiZpA", "text": "better that they're part of the microbiome of the fungi when we split from fungi 650 million years ago we chose the route of circulating our food in a gastric sack a stomach basically digesting nutrients within the mycelium went externally well just as we have a microbiome within us the mycelium has selected a microbiome also mutualistically to its advantage I'm very interested in the viral to cancer connection there are seven identified viruses or probably a lot more that cause cancer then Fred Hutch", "start_timestamp": "00:12:38", "end_timestamp": "00:13:12", "start_second": 758, "end_second": 792, "url": "https://www.youtube.com/watch?v=7agK0nkiZpA&t=758s", "title": "Mushrooms as Medicine with Paul Stamets at Exponential Medicine", "thumbnail": "https://i.ytimg.com/vi/7agK0nkiZpA/hqdefault.jpg"} {"video_id": "7agK0nkiZpA", "text": "medical school called me up and said we have a very interesting case of Merkel
cell carcinoma one of the most deadly cancers of the world only ten people have ever been reported to have recovered from it and I call it the Nghiem hypothesis dr. Paul Nghiem MD PhD at Fred Hutch and they had this patient he started taking a seven species mushroom blend and this is immune evasion and then after taking the mushrooms there's no chemotherapy no radiation therapy nothing can be done for these patients and then after taking", "start_timestamp": "00:13:12", "end_timestamp": "00:13:45", "start_second": 792, "end_second": 825, "url": "https://www.youtube.com/watch?v=7agK0nkiZpA&t=792s", "title": "Mushrooms as Medicine with Paul Stamets at Exponential Medicine", "thumbnail": "https://i.ytimg.com/vi/7agK0nkiZpA/hqdefault.jpg"} {"video_id": "7agK0nkiZpA", "text": "the seven mushroom species blend he had spontaneous recovery and he is alive today so we think that it de-cloaks cancers for discovery by the immune system we don't know exactly how it does it but we've seen this over and over again your immune system is activated and your immune cells can discover receptor sites in the stroma of tumors this could be broadly useful for addressing lots of solid tumors as an adjunct therapy so this case also was written up in the medical journals and then Hayling Lou and I submitted an application to", "start_timestamp": "00:13:45", "end_timestamp": "00:14:21", "start_second": 825, "end_second": 861, "url": "https://www.youtube.com/watch?v=7agK0nkiZpA&t=825s", "title": "Mushrooms as Medicine with Paul Stamets at Exponential Medicine", "thumbnail": "https://i.ytimg.com/vi/7agK0nkiZpA/hqdefault.jpg"} {"video_id": "7agK0nkiZpA", "text": "
droplets are all sorts of interesting compounds I was working with the BioShield program of the US Defense Department directly after 9/11 we submitted over 700 samples and these are the samples of", "start_timestamp": "00:14:21", "end_timestamp": "00:14:51", "start_second": 861, "end_second": 891, "url": "https://www.youtube.com/watch?v=7agK0nkiZpA&t=861s", "title": "Mushrooms as Medicine with Paul Stamets at Exponential Medicine", "thumbnail": "https://i.ytimg.com/vi/7agK0nkiZpA/hqdefault.jpg"} {"video_id": "7agK0nkiZpA", "text": "mushrooms in particular agarikon reishi and chaga and this is the selectivity index and the viruses H5N1 H3N2 H1N1 with ribavirin being the positive control the selectivity index is an indication of antiviral activity our extracts were diluted from the mycelium 100 to 1 and this is the selectivity index of the diluted extracts that were far more powerful than the pharmaceutical control well then we did bioguided fractionation at the University of Mississippi this is a school of pharmacy and we've identified a", "start_timestamp": "00:14:51", "end_timestamp": "00:15:28", "start_second": 891, "end_second": 928, "url": "https://www.youtube.com/watch?v=7agK0nkiZpA&t=891s", "title": "Mushrooms as Medicine with Paul Stamets at Exponential Medicine", "thumbnail": "https://i.ytimg.com/vi/7agK0nkiZpA/hqdefault.jpg"} {"video_id": "7agK0nkiZpA", "text": "group of sterols this one has been unreported in the literature I've given it the name phony top straw so we have our first eight APIs here that are active against in this case poxviruses now that's coming from the same agarikon extract that was active against flu viruses but we sent these structures to St.
Jude hospital for tests against HIV and they were totally inactive which suggests that there's more than one antiviral API that's present within these mushroom extracts NIH called", "start_timestamp": "00:15:28", "end_timestamp": "00:16:03", "start_second": 928, "end_second": 963, "url": "https://www.youtube.com/watch?v=7agK0nkiZpA&t=928s", "title": "Mushrooms as Medicine with Paul Stamets at Exponential Medicine", "thumbnail": "https://i.ytimg.com/vi/7agK0nkiZpA/hqdefault.jpg"} {"video_id": "7agK0nkiZpA", "text": "us three times in the past month we've submitted now ten of these structures the potential APIs for testing against Ebola and a wide number of other viruses so resident within these mushrooms are very interesting complex molecules that we're beginning to discover so after ten years I finally received a patent with unanimity of opinion by the patent examiners and it took a long time to get the patent but I was happy to see that Vector in Russia published an article two years ago authenticating that agarikon is highly", "start_timestamp": "00:16:03", "end_timestamp": "00:16:36", "start_second": 963, "end_second": 996, "url": "https://www.youtube.com/watch?v=7agK0nkiZpA&t=963s", "title": "Mushrooms as Medicine with Paul Stamets at Exponential Medicine", "thumbnail": "https://i.ytimg.com/vi/7agK0nkiZpA/hqdefault.jpg"} {"video_id": "7agK0nkiZpA", "text": "active against flu viruses this article was published yesterday so people are catching up but it's great that other researchers are authenticating that which we had discovered now working with agarikon and Dr. Scott Franzblau who is the director of the tuberculosis research institute at the university of chicago we started doing experimentation and he started using our mycelium and we did bioguided fractionation and we found a new active anti-TB set of molecules chlorinated coumarins now this is interesting to me", "start_timestamp": "00:16:36",
"end_timestamp": "00:17:07", "start_second": 996, "end_second": 1027, "url": "https://www.youtube.com/watch?v=7agK0nkiZpA&t=996s", "title": "Mushrooms as Medicine with Paul Stamets at Exponential Medicine", "thumbnail": "https://i.ytimg.com/vi/7agK0nkiZpA/hqdefault.jpg"} {"video_id": "7agK0nkiZpA", "text": "this mushroom has a dual activity against viruses and bacteria very few medicines do that the majority people who die from viral pneumonia actually die from bacterial pneumonia and so to have something as a nutraceutical that can be broad-based against multiple viruses multiple bacteria I think is medically extremely interesting my wife and I spend a lot of time in the old-growth forest the force used to be resplendent around the world and now we are facing a radical change and in our ecosystems through deforestation so the composition in the", "start_timestamp": "00:17:07", "end_timestamp": "00:17:48", "start_second": 1027, "end_second": 1068, "url": "https://www.youtube.com/watch?v=7agK0nkiZpA&t=1027s", "title": "Mushrooms as Medicine with Paul Stamets at Exponential Medicine", "thumbnail": "https://i.ytimg.com/vi/7agK0nkiZpA/hqdefault.jpg"} {"video_id": "7agK0nkiZpA", "text": "ecology of the forests have changed and 70% of the soils are composed of microbial mass of which 40% of the mouse's fungal but because of our practices of logging and harvesting and creating monoculture which in repetitive Harbor planting of trees leads to premature decline disease vectors spread the diameter of trees become smaller and you lose that that plurality of biodiversity of Ages of trees and their associated organisms we have really changed the face of this planet so I'm going to now take a radical left turn so", "start_timestamp": "00:17:48", "end_timestamp": "00:18:28", "start_second": 1068, "end_second": 1108, "url": "https://www.youtube.com/watch?v=7agK0nkiZpA&t=1068s", "title": "Mushrooms as Medicine with Paul Stamets at Exponential Medicine", "thumbnail": 
"https://i.ytimg.com/vi/7agK0nkiZpA/hqdefault.jpg"} {"video_id": "7agK0nkiZpA", "text": "this is the case now imagine hundreds of millions of years our ancestors and other organisms in the ecosystem have been used these resplendent for us and now we've deforested much of the planet and the deforestation continues at an incredible clip we've now entered 6x the sixth greatest extinction event on the life of this planet and we're losing about 30,000 species per year of 8.3 million species on this planet that means that a hundred years we'll lose more than 30% of the biodiversity on this planet this is this", "start_timestamp": "00:18:28", "end_timestamp": "00:19:01", "start_second": 1108, "end_second": 1141, "url": "https://www.youtube.com/watch?v=7agK0nkiZpA&t=1108s", "title": "Mushrooms as Medicine with Paul Stamets at Exponential Medicine", "thumbnail": "https://i.ytimg.com/vi/7agK0nkiZpA/hqdefault.jpg"} {"video_id": "7agK0nkiZpA", "text": "is an all-hands-on-deck moment so a friend of mine came to me he said Paul I do a lot of work with antibiotic genic fungi controlling insects and says can you help the bees and then whole foods provide this very interesting graphic here's your dairy choices with bees and there's your dairy choices without bees bees liberate pollinators 30% of the food in the grocery store is direct result of pollens it and pollination 70% is indirect and the President Obama came out with a presidential memorandum and there is a we call it click on a hex", "start_timestamp": "00:19:01", "end_timestamp": "00:19:39", "start_second": 1141, "end_second": 1179, "url": "https://www.youtube.com/watch?v=7agK0nkiZpA&t=1141s", "title": "Mushrooms as Medicine with Paul Stamets at Exponential Medicine", "thumbnail": "https://i.ytimg.com/vi/7agK0nkiZpA/hqdefault.jpg"} {"video_id": "7agK0nkiZpA", "text": "efekta there is like six different converging stressors on the ecosystem deforestation is one the bees now don't have the ecosystem that it's evolved to to 
draw from as part of this menu its banquet of food well then because of the pollution and one of the speakers mentioned his blood was analyzed he has a thousand different xenobiotic toxins present in his blood unprecedented in the theater of evolution mites are carrying viruses and then you have the fact that the bees are being trucked hundreds of miles into almond orchards", "start_timestamp": "00:19:39", "end_timestamp": "00:20:17", "start_second": 1179, "end_second": 1217, "url": "https://www.youtube.com/watch?v=7agK0nkiZpA&t=1179s", "title": "Mushrooms as Medicine with Paul Stamets at Exponential Medicine", "thumbnail": "https://i.ytimg.com/vi/7agK0nkiZpA/hqdefault.jpg"} {"video_id": "7agK0nkiZpA", "text": "in the middle of the desert in California in January and February this is totally unnatural so the bees fly out and we see bees around a flower that's the last seven or ten days of its life the bees flap their wings until the wings are shredded and then with colony collapse disorder the bees leave the beehive and they just don't come back they just suddenly disappear now it's a very complicated set of stressors but just like there's colony collapse disorder I suggest to you that we are facing cultural collapse", "start_timestamp": "00:20:17", "end_timestamp": "00:20:50", "start_second": 1217, "end_second": 1250, "url": "https://www.youtube.com/watch?v=7agK0nkiZpA&t=1217s", "title": "Mushrooms as Medicine with Paul Stamets at Exponential Medicine", "thumbnail": "https://i.ytimg.com/vi/7agK0nkiZpA/hqdefault.jpg"} {"video_id": "7agK0nkiZpA", "text": "disorder this is a proverbial canary in the coal mine so I had some very strange events in my life a lot more that I can tell you but I was growing mushroom mycelium in my garden this is 1984 and I went out to my garden and this is the mushroom beds and I went Wow what's going on here I looked very closely and bees had come to my mushroom bed moved the wood chips away and started sucking on my mycelium I
went what is going on from day to night for 40 days a direct stream of bees from my beehives to my mycelium back and forth", "start_timestamp": "00:20:50", "end_timestamp": "00:21:36", "start_second": 1250, "end_second": 1296, "url": "https://www.youtube.com/watch?v=7agK0nkiZpA&t=1250s", "title": "Mushrooms as Medicine with Paul Stamets at Exponential Medicine", "thumbnail": "https://i.ytimg.com/vi/7agK0nkiZpA/hqdefault.jpg"} {"video_id": "7agK0nkiZpA", "text": "all day long the mycelium shrunk from about 8 or 10 inches to about 3 inches well I noted this in one of my books and in Harrowsmith magazine virtually everybody ignored me one beekeeper from Canada wrote me well maybe that's why they go to sawdust piles so ok I put that in the back of my mind and then a friend of mine said you know what can you do to help the bees and I thought well you know I had this very weird experience in my garden in 1984 so here's Dusty in the old-growth forest and bears scratch trees well we used to have a lot", "start_timestamp": "00:21:36", "end_timestamp": "00:22:12", "start_second": 1296, "end_second": 1332, "url": "https://www.youtube.com/watch?v=7agK0nkiZpA&t=1296s", "title": "Mushrooms as Medicine with Paul Stamets at Exponential Medicine", "thumbnail": "https://i.ytimg.com/vi/7agK0nkiZpA/hqdefault.jpg"} {"video_id": "7agK0nkiZpA", "text": "of bears but the timber industry put a bounty on them killed the bears and only in the past 20 years we've come out with research finding out that bears bring salmon carcasses up on the banks returning phosphorus from the ocean into the roots of the trees which is a limiting nutrient for tree growth so the industry totally got it backwards bears help trees grow well the bears scratch the trees and Dusty and I are hiking in the old-growth forest on the South Fork of the Hoh and we go around the corner and Dusty sees this", "start_timestamp": "00:22:12", "end_timestamp": "00:22:45", "start_second": 1332,
"end_second": 1365, "url": "https://www.youtube.com/watch?v=7agK0nkiZpA&t=1332s", "title": "Mushrooms as Medicine with Paul Stamets at Exponential Medicine", "thumbnail": "https://i.ytimg.com/vi/7agK0nkiZpA/hqdefault.jpg"} {"video_id": "7agK0nkiZpA", "text": "bear scratch BAM the bear scratched the trees the best bear scratch I've ever seen but that's why I photographed it now looked into this and wow the timid industry says the bear scratched the trees and it causes a mushroom to form which is related to a Garrett con so we went back two years later there's that bear scratch ok so think about this and the bear scratch the trees then resin comes out and bees go after the resins and they get propolis which is a very strong antimicrobial and they used for patching up you know spaces in the", "start_timestamp": "00:22:45", "end_timestamp": "00:23:20", "start_second": 1365, "end_second": 1400, "url": "https://www.youtube.com/watch?v=7agK0nkiZpA&t=1365s", "title": "Mushrooms as Medicine with Paul Stamets at Exponential Medicine", "thumbnail": "https://i.ytimg.com/vi/7agK0nkiZpA/hqdefault.jpg"} {"video_id": "7agK0nkiZpA", "text": "Beehive so the red belted polypore sure enough was growing out of that tree with a bear scratch that we saw so in a sense the timber industry is correct this is a parasitic fungus that kills the trees and then grows Sacre physically well also interesting the mycelium breaks down pesticide herbicide and fungicides okay so that's another box another experience I had the garden now hiking the old-growth forest a bear scratch I looked into the you know why the timber industry was trying to kill the bears then this article comes comes", "start_timestamp": "00:23:20", "end_timestamp": "00:23:55", "start_second": 1400, "end_second": 1435, "url": "https://www.youtube.com/watch?v=7agK0nkiZpA&t=1400s", "title": "Mushrooms as Medicine with Paul Stamets at Exponential Medicine", "thumbnail": "https://i.ytimg.com/vi/7agK0nkiZpA/hqdefault.jpg"} {"video_id": 
"7agK0nkiZpA", "text": "out these are all very very recent and it turns out that fungi produced the P chimeric acid related to the chloral chlorinated coumarins that Scott Franklin our fund their active against tuberculosis by the way and it turns out that the absence of P chimeric acid stops the up regulation of the cytochrome p450 pathway bees only have 47 cpy genes whereas most insects have 80 and the absence of P chimeric acid coming from fungi turns off their mono oxygenase pathway and so they can't detoxify this accumulation of all these", "start_timestamp": "00:23:55", "end_timestamp": "00:24:30", "start_second": 1435, "end_second": 1470, "url": "https://www.youtube.com/watch?v=7agK0nkiZpA&t=1435s", "title": "Mushrooms as Medicine with Paul Stamets at Exponential Medicine", "thumbnail": "https://i.ytimg.com/vi/7agK0nkiZpA/hqdefault.jpg"} {"video_id": "7agK0nkiZpA", "text": "toxins that are become resident as they for a out into the farmer's fields sprayed with pesticides fungicides and herbicides okay interesting and so then it turns out that when the EPA license many of these fungicides and insecticides and herbicides they didn't look at the consortium of them all coming together and turn out the sub-lethal doses of these toxins defeat the microbiome in the gut of the bee so you have another problem happening here not only is this my oxidase pathway the cytochrome p450s turned off but the", "start_timestamp": "00:24:30", "end_timestamp": "00:25:05", "start_second": 1470, "end_second": 1505, "url": "https://www.youtube.com/watch?v=7agK0nkiZpA&t=1470s", "title": "Mushrooms as Medicine with Paul Stamets at Exponential Medicine", "thumbnail": "https://i.ytimg.com/vi/7agK0nkiZpA/hqdefault.jpg"} {"video_id": "7agK0nkiZpA", "text": "microbiome now is being damaged by glycolysis by the way there's one of the big cult culprits some craig Venter hope you're listening so then fungicide contamination in the fields is harming the resident fungi and now we don't have 
rotting logs we've got agricultural crops many of the species of which are not native okay so beekeepers feed bees sugar water up to 50 percent water 50 percent sugar this is because they need to have the sugar obviously for food and then the bees are trucked hundreds of miles in this case to the almond and walnut", "start_timestamp": "00:25:05", "end_timestamp": "00:25:46", "start_second": 1505, "end_second": 1546, "url": "https://www.youtube.com/watch?v=7agK0nkiZpA&t=1505s", "title": "Mushrooms as Medicine with Paul Stamets at Exponential Medicine", "thumbnail": "https://i.ytimg.com/vi/7agK0nkiZpA/hqdefault.jpg"} {"video_id": "7agK0nkiZpA", "text": "orchards for pollination so the bees are now being fed pure sugars as opposed to the complex carbohydrates and polysaccharides coming from the sweat of the mycelium so I had an epiphany why don't we take our mycelium and my research team you know gets credit for this and we came up with MycoHoney this is totally made from mycelium it's like 90% sugars but they're complex sugars and guess what it has p-coumaric acid in it and the antiviral agents in it and the antibacterial agents so we contacted", "start_timestamp": "00:25:46", "end_timestamp": "00:26:22", "start_second": 1546, "end_second": 1582, "url": "https://www.youtube.com/watch?v=7agK0nkiZpA&t=1546s", "title": "Mushrooms as Medicine with Paul Stamets at Exponential Medicine", "thumbnail": "https://i.ytimg.com/vi/7agK0nkiZpA/hqdefault.jpg"} {"video_id": "7agK0nkiZpA", "text": "Washington State University working with Dr.
Steve Sheppard and Brandon Hopkins and we started doing a series of experiments by feeding extracts of the mycelium to bees at different concentrations this is called a stress test they're in captivity they only live 30 days when the worker bees fly out and they're doing pollination if they don't come back nurse bees are prematurely recruited to become worker bees and they fly out they abandon the brood so it's a doubling down every time fewer and fewer worker bees come back nurse bees now", "start_timestamp": "00:26:22", "end_timestamp": "00:26:57", "start_second": 1582, "end_second": 1617, "url": "https://www.youtube.com/watch?v=7agK0nkiZpA&t=1582s", "title": "Mushrooms as Medicine with Paul Stamets at Exponential Medicine", "thumbnail": "https://i.ytimg.com/vi/7agK0nkiZpA/hqdefault.jpg"} {"video_id": "7agK0nkiZpA", "text": "have to go out and get pollen and food for the hive so the larvae are abandoned mites then proliferate mites are injecting viruses into the larvae okay so the mushrooms that we're talking about amadou reishi and chaga are polypore mushrooms in birch forests worldwide Apis mellifera the honeybee is from Europe it's not native to North America but it produces prodigious amounts of honey so there's chaga there's amadou and there's red reishi we provided the bees with twelve different species these are", "start_timestamp": "00:26:57", "end_timestamp": "00:27:34", "start_second": 1617, "end_second": 1654, "url": "https://www.youtube.com/watch?v=7agK0nkiZpA&t=1617s", "title": "Mushrooms as Medicine with Paul Stamets at Exponential Medicine", "thumbnail": "https://i.ytimg.com/vi/7agK0nkiZpA/hqdefault.jpg"} {"video_id": "7agK0nkiZpA", "text": "the three ones I'm going to talk about I've never shown this before this information just came in the sugar control in one week's time the viruses increased by 63% when the bees started sipping on the mycelium the viral pathogen payload plummeted across
these three different species in week one versus week two the sugar control the viruses you know increased dramatically and with the bees that were taking sips of the mushroom mycelium extract the viruses plummeted they went up here and they", "start_timestamp": "00:27:34", "end_timestamp": "00:28:07", "start_second": 1654, "end_second": 1687, "url": "https://www.youtube.com/watch?v=7agK0nkiZpA&t=1654s", "title": "Mushrooms as Medicine with Paul Stamets at Exponential Medicine", "thumbnail": "https://i.ytimg.com/vi/7agK0nkiZpA/hqdefault.jpg"} {"video_id": "7agK0nkiZpA", "text": "plummeted down on a dose-dependent basis and so it also occurred with red reishi so now we're trying to get the right concentrations and the question is obvious now if we use a combination of these you know what a benefit this will be but we don't know the way of the bee tomorrow the Rosetta spacecraft lands on a comet 300 million miles out in space well we can find a comet but we don't know the way of the bee now I've spoken to entomologists about this they spoke to their friends no one's ever mentioned this I", "start_timestamp": "00:28:07", "end_timestamp": "00:28:43", "start_second": 1687, "end_second": 1723, "url": "https://www.youtube.com/watch?v=7agK0nkiZpA&t=1687s", "title": "Mushrooms as Medicine with Paul Stamets at Exponential Medicine", "thumbnail": "https://i.ytimg.com/vi/7agK0nkiZpA/hqdefault.jpg"} {"video_id": "7agK0nkiZpA", "text": "spoke at a national mycological congress I said to the 500 mycologists there has anyone ever heard of this no one has bees go to rotted logs because of the immunological benefit increasing their host defense resistance the complex sugars of nutrition and the antiviral properties I'm the first one to have discovered this really how is that possible we grew up with Winnie the Pooh reading it to our kids they're going after the rotted logs and we don't know the way of the
bee I think this says a lot so", "start_timestamp": "00:28:43", "end_timestamp": "00:29:19", "start_second": 1723, "end_second": 1759, "url": "https://www.youtube.com/watch?v=7agK0nkiZpA&t=1723s", "title": "Mushrooms as Medicine with Paul Stamets at Exponential Medicine", "thumbnail": "https://i.ytimg.com/vi/7agK0nkiZpA/hqdefault.jpg"} {"video_id": "7agK0nkiZpA", "text": "Dr. Steve Sheppard was so impressed he provided this wonderful quote as an entomologist with 39 years of experience I'm unaware of any reports of extending the lives of worker bees this is incredibly important this is a period of high pollen acquisition and so if you increase the workers' lifespans by 20% you have a tremendous effect a tipping point in favor of colony survival so I suggest to you let's be friendly let's be mushroom the scientists across disciplines need to work together biodiversity is our", "start_timestamp": "00:29:19", "end_timestamp": "00:30:00", "start_second": 1759, "end_second": 1800, "url": "https://www.youtube.com/watch?v=7agK0nkiZpA&t=1759s", "title": "Mushrooms as Medicine with Paul Stamets at Exponential Medicine", "thumbnail": "https://i.ytimg.com/vi/7agK0nkiZpA/hqdefault.jpg"} {"video_id": "7agK0nkiZpA", "text": "biosecurity now think of the bigger picture here we were forest people bees evolved in the forest the mycelium is conferring an immunological benefit to animals but it's unprecedented as far as I know that there is an antiviral agent that is dually active in helping bees and also helping humans and these are from polypore mushrooms resident in the forests that our ancestors were dependent upon so I want to conclude that humans trees bears mushrooms are all terrestrial organisms that evolved to be interconnected within the mycelial", "start_timestamp": "00:30:00", "end_timestamp": "00:30:44", "start_second": 1800, "end_second": 1844, "url": "https://www.youtube.com/watch?v=7agK0nkiZpA&t=1800s", "title":
"Mushrooms as Medicine with Paul Stamets at Exponential Medicine", "thumbnail": "https://i.ytimg.com/vi/7agK0nkiZpA/hqdefault.jpg"} {"video_id": "7agK0nkiZpA", "text": "web of life Earth's natural Internet and I think the way of the future is using mycelial scaffolding with a mutualistic organisms in the bacteria using epigenesis and then being able to have the quorum sensing and there's response and being able to up regulate gene expressions that otherwise may not be present or up regulated with one organism but quorum sensing can give up regulation of multiple gene sequences otherwise hidden in nature this is the way of life so as much as many of you are ultra specialized I want you to", "start_timestamp": "00:30:44", "end_timestamp": "00:31:15", "start_second": 1844, "end_second": 1875, "url": "https://www.youtube.com/watch?v=7agK0nkiZpA&t=1844s", "title": "Mushrooms as Medicine with Paul Stamets at Exponential Medicine", "thumbnail": "https://i.ytimg.com/vi/7agK0nkiZpA/hqdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "but steel bath with a temperature of 1,500 degrees sparks fly is a diesel engine is about to be born a veritable Colossus the huge engine will soon power a modern ship it will have to withstand enormous pressure and extremely high temperatures for many years manufacturing such a giant calls for high tech and precision one of the most powerful high speed diesel engines in the world it is part of the MTU concerned series 8000 made in Germany the engines are produced in philosophical on lake constance malte from here the high speed ferry", "start_timestamp": "00:00:00", "end_timestamp": "00:01:25", "start_second": 0, "end_second": 85, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=0s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "Jean de la Villette transports passengers as well as cars and other goods to 
and from Italy the catamaran ferry is the most modern of its kind in the Mediterranean and it is powered by four series 8000 engines from MTU three levels below deck the crew start up the ferry's four mega diesel engines at idling speed the men check that everything is technically in order this takes ten minutes and is a must before every voyage the team then head back up because down here things are about to get very loud and above all extremely", "start_timestamp": "00:01:25", "end_timestamp": "00:02:20", "start_second": 85, "end_second": 140, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=85s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "hot the Jean de la Valette leaves Malta bound for the Italian island of Sicily 120 kilometres away [Music] weather permitting the 52 thousand horsepower ferry takes less than two hours for the crossing on the open sea the 1500 ton catamaran can reach a speed of up to 42 knots that's 75 kilometers an hour this extreme performance is achieved through its 4 MTU series 8000 diesel engines [Music] the MTU series 8000 consists of 20 cylinder common-rail diesel engines each unit is 7 meters long 2 metres wide and three and a half meters high roughly the same", "start_timestamp": "00:02:20", "end_timestamp": "00:03:21", "start_second": 140, "end_second": 201, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=140s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "size as a steam locomotive it also weighs as much around 48 tons the 20 cylinders have a capacity of 350 liters and produce thirteen thousand six hundred horsepower fuel consumption is 2,000 liters an hour the core element of the engine and its biggest component is the huge crankcase [Music] like all the
engine's components the crankcase also comes from Germany this foundry in Fronberg in Bavaria is the birthplace of the mega diesel engine and a location with a long tradition a small smithy existed here way back in the 15th", "start_timestamp": "00:03:21", "end_timestamp": "00:04:09", "start_second": 201, "end_second": 249, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=201s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "century so hot iron has been molded in Fronberg for nearly 600 years today the foundry's three smelting furnaces are operating at full speed to cast a series 8000 crankcase for 10 hours now the steel workers have been feeding the voracious furnaces different materials the materials we need for the series 8000 are steel scrap pig iron and deep drawn sheet metal we also need electrode graphite and silicon carbide the composition is calculated by our superiors in this case by the engineers in our production planning department", "start_timestamp": "00:04:09", "end_timestamp": "00:05:02", "start_second": 249, "end_second": 302, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=249s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "casting a crankcase calls for 16 tons of material hot hard work for the smelters who have to keep a constant eye on the temperature only when the molten metal has reached a temperature of 1500 degrees is it ready to be transported to the mold here we have a measuring sleeve made of pressboard inserted at the front here is a thermal element it's attached to a measuring probe and used to measure the temperature of the melt every steelworks has its own recipe time and again the smelters add this or that material the better the composition of",
"start_timestamp": "00:05:02", "end_timestamp": "00:05:47", "start_second": 302, "end_second": 347, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=302s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "the smelt the harder and more durable the crankcase will be from time to time the smelt is checked in a modern foundry a sample of the molten metal is poured into a crucible containing a small measuring sleeve which in a matter of seconds transmits the temperature profile to a computer in the office the smelt has the perfect composition when the sample cools down in a certain way and then heats up again the enormous heat is no longer generated with fire but with electricity the principle is similar to the way an induction cooker", "start_timestamp": "00:05:47", "end_timestamp": "00:06:28", "start_second": 347, "end_second": 388, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=347s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "functions in a domestic kitchen the big difference is that the three electric induction furnaces in the foundry have the power of five thousand microwave ovens [Music] therefore megawatt have heated the molten metal to a temperature of over 1500 degrees 200 meters away the mold is already waiting speed is now essential the steel workers transfer the molten mass the final ingredient magnesium is added via chutes the bubbling sound you can hear stems partly from the magnesium at temperatures of over 1500 degrees", "start_timestamp": "00:06:28", "end_timestamp": "00:07:13", "start_second": 388, "end_second": 433, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=388s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": 
"https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "magnesium vaporizes much faster and that's what causes the bubbling sound the magnesium vapour flows through the molten metal like carbon dioxide through mineral water this changes the molecular lattice structure of the steel and will make the crankcase slightly elastic [Music] the 19 tons of molten mass now flow into two huge casting ladles there would be no point in having more than two because the crankcase mold has only two intake funnels [Music] this is what the various parts of a crankcase mold look like these two", "start_timestamp": "00:07:13", "end_timestamp": "00:08:10", "start_second": 433, "end_second": 490, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=433s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "discourse consist of resin soaked quartz and which after setting has become rock-hard ten of these cores and sequence comprised the mold of a series 8000 crankcase prior to casting each template is given a coating which has a horrific smell to it the coating prevents the sand from mixing with the steel during casting the molten steel flows into the mold via this channel it passes through the filter chamber with its ceramic filter and enters the mold cavity via this feed channel and then spreads throughout the entire mold but", "start_timestamp": "00:08:10", "end_timestamp": "00:08:55", "start_second": 490, "end_second": 535, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=490s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "before that the ten disk cores are placed in a casting box the cavities are filled with special sand without this sand parts of the mold would fly off during 
casting it's a job which calls for a lot of experience any mistake could render the mold useless all this takes place days before the actual casting which also demands total precision the clock is ticking the molten mass is cooling by the minute so the smelters have to check its temperature time and again on no account must it drop below 1,400 degrees", "start_timestamp": "00:08:55", "end_timestamp": "00:09:49", "start_second": 535, "end_second": 589, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=535s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "then the mass would no longer be suitable for casting a huge overhead gantry transports the cauldrons to the mold the big moment is close the birth of a mega diesel engine the two casting ladles are positioned over the two filling funnels from now on everything will have to take place with absolute synchronicity the head smelter starts the countdown three two one go in a fascinating spectacle the molten metal flows into the two funnels for the moment the two plugs are still in place because the mass first has to settle", "start_timestamp": "00:09:49", "end_timestamp": "00:10:42", "start_second": 589, "end_second": 642, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=589s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "once again the command to remove the two plugs is given by the head smelter now pull the mass can now flow in it takes 70 seconds for the 16 tons of molten metal to flood the mold and fill every cavity completely the head smelter closely monitors the entire process giving instructions to his colleagues controlling the casting ladles a little bit faster faster although speed is of the essence all movements have to take
place simultaneously otherwise swirls could form in the funnels now after a good minute the job is done", "start_timestamp": "00:10:42", "end_timestamp": "00:11:21", "start_second": 642, "end_second": 681, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=642s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "within a few hours the metal cast in the mold will cool by several hundred degrees but it will be two weeks before the men can peel the crankcase from its mold only then will it be hard and elastic enough and extremely durable the Jean de la Valette is surging through the Mediterranean at a rate of over 40 knots the high speed ferry is heading for the Sicilian port of Pozzallo it's not a propeller that the four mega diesel engines in its hull are driving but water jet power units which function on the recoil principle", "start_timestamp": "00:11:21", "end_timestamp": "00:12:08", "start_second": 681, "end_second": 728, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=681s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "water is sucked in under the ship and then expelled again with tremendous force the engine on this vessel pumps out 9.1 megawatts of power at eleven hundred and fifty rpm the aggregate power is 36.4 megawatts so in actual fact losing an engine will not affect the vessel drastically the immense power from the diesel engines is a big help especially in berthing the 1500 ton ferry the ship is highly maneuverable and can be steered easily even in a confined space the captain slowly brings the 107 meter long Jean de la Valette", "start_timestamp": "00:12:08", "end_timestamp": "00:13:07", "start_second": 728, "end_second": 787, "url":
"https://www.youtube.com/watch?v=w4uMX5mWAF4&t=728s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "against the key wall in sicily in just eight hours time the catamaran will be heading back to Malta that's not a long time because today a team of mechanics specialists in marine engines has come on board one of the four mega diesel engines needs two new cylinders and the old ones require maintenance the men have to dismantle the diesel engine and replace the cylinders since a stay in Port is very expensive the specialists need to get to work straight away this means heading down to levels awaiting the men at a temperature of 50 degrees in the", "start_timestamp": "00:13:07", "end_timestamp": "00:13:51", "start_second": 787, "end_second": 831, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=787s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "engine room are a booming auxiliary power unit and the 48 ton diesel engine they all know that this means a lot of work under time pressure brakes are out of the question back at the foundry and phone back two weeks have now passed that is how long the steel block has taken to cool down completely the heavy and in so far as steel permits elastic crankcase for the mega diesel engine is about to see daylight for the first time with the help of the overhead gantry the 16 ton Colossus is lifted out of the casting pit this causes the", "start_timestamp": "00:13:51", "end_timestamp": "00:14:32", "start_second": 831, "end_second": 872, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=831s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": 
"w4uMX5mWAF4", "text": "courts and cocoon to crumble exposing the brand-new series 8000 crankcase what we now see here is the down sprue and the path taken by the steel into the mold the first task now falls to a welder who has to sever the two down sprues the gas jet slowly cuts its way through the tubes which are as thick as tank Armour other superfluous parts are also still attached to the seven meter long cast body they were needed for the casting process now they too are cut off [Music] there's another four to higher up and to", "start_timestamp": "00:14:32", "end_timestamp": "00:15:20", "start_second": 872, "end_second": 920, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=872s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "lower down then we'll tear everything down and it will remain standing in the middle the experienced welder burns his way along the crankcase bit by bit after an hour the block is fully exposed the casting channel can be removed but the material that has been cut off is by no means scrap it will be needed for the next melt the overhead gantry then comes into action again it lifts the block to the next station a huge mechanical vibrator the deafeningly loud vibrations cause any remaining quartz sand to fall off it too", "start_timestamp": "00:15:20", "end_timestamp": "00:16:08", "start_second": 920, "end_second": 968, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=920s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "will be recycled after being cleaned the sand will be used to produce the next casting three minutes later the mega diesel crankcase is roughly clean it's next stop is the shot blasting chamber where the crankcase will be bombarded with a storm of 
tiny steel pellets [Music] these are the steel pellets they're made of ordinary steel and have a maximum diameter of 2.5 millimeters so we can roughly estimate what the surface will look like afterwards and how clean it will be millions of the steel pellets are fired", "start_timestamp": "00:16:08", "end_timestamp": "00:16:59", "start_second": 968, "end_second": 1019, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=968s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "into the chamber at a speed of 200 kilometers an hour this not only cleans the surface of the component the bombardment makes the steel harder and more corrosion resistant the process takes an hour then the doors are opened and with the help of a crane the men remove a perfectly cleaned engine block from the shot blasting chamber the razor-sharp casting noses also have to be cut off the crankcase can then leave its birthplace for transportation to Lake Constance Assembly of the mega engines takes place at the MTU plant in", "start_timestamp": "00:16:59", "end_timestamp": "00:17:41", "start_second": 1019, "end_second": 1061, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=1019s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "Friedrichshafen the firm specializes in large high speed diesel engines ships submarines military vehicles and locomotives worldwide are powered by MTU systems [Music] a series 8000 naval engine has to pass through six assembly stations twenty-five of these mega diesel engines leave the plant every year prior to assembly the raw part is milled the Colossus is machined to the right dimensions in several stages losing up to a ton of material in the process this takes several days engines of this size are
machined with extreme precision", "start_timestamp": "00:17:41", "end_timestamp": "00:18:34", "start_second": 1061, "end_second": 1114, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=1061s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "the tolerances to be adhered to are so minute that they can no longer be perceived with the naked eye now we're talking about an accuracy of three or four hundredths of a millimeter such dimensions pose quite a challenge in other words a component the size of a small bus must not deviate from the ideal by more than one hundredth of the width of a human hair to check this the Colossus is placed on Europe's biggest measuring table Engineers measure every opening every drilled hole indeed every square millimeter to ensure that no", "start_timestamp": "00:18:34", "end_timestamp": "00:19:15", "start_second": 1114, "end_second": 1155, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=1114s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "disaster can occur when the engine is in operation after two weeks the crankcase is ready now main assembly can get underway over the next five weeks these 35 fitters will turn the bare crankcase into an extra extra-large mega diesel engine the overhead gantry transports the 10-ton part to the rotary station where it is positioned with millimeter accuracy during this process the entire area is sealed off little towards you now got it it's in [Music] first of all the crankcase is given a serial number in the past this had to be", "start_timestamp": "00:19:15", "end_timestamp": "00:20:03", "start_second": 1155, "end_second": 1203, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=1155s", "title": "Exceptional Engineering | Mega
Diesel Engine | Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "hammered in with effort now a special tool performs the task mechanically in seconds the green light for assembly is about to be given by the foreman he is in charge of the workshop but first the fitters check the surfaces and the bearings once again large lamps will illuminate the entire crankcase starting with the inlets and then continue with the main oil channels the two fitters' job is to detect any contamination tiny burrs or metal splinters in one of the channels could cause serious problems later the main oil channel here has to", "start_timestamp": "00:20:03", "end_timestamp": "00:20:46", "start_second": 1203, "end_second": 1246, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=1203s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "remain free of dirt dust and splinters which could of course destroy the engine then it is the turn of the camshaft components when it is fully assembled the shaft will be 6 meters long and weigh 400 kilograms to ensure that everything fits together the mechanics work with liquid nitrogen they sink dozens of small guide pins in a nitrogen bath with a temperature of minus 198 degrees Celsius the liquid gas chills the metal pins and makes them contract the super-cold guide pins can be tapped into the flange of the camshaft segment", "start_timestamp": "00:20:46", "end_timestamp": "00:21:27", "start_second": 1246, "end_second": 1287, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=1246s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "with ease as they heat up they expand to give a really firm fit guide
connections have to be greased and running surfaces oiled the men then push the camshaft into the 6 metre long channel section by section making sure nothing gets tilted a bit lower okay the next babies like then you'll see it coming down a bit more the initial phase is completed the guide neck is now in the channel the men slowly push it in further once again this is precision work there is ZERO room for error should something tilt the engine would be blocked up again optimal", "start_timestamp": "00:21:27", "end_timestamp": "00:22:19", "start_second": 1287, "end_second": 1339, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=1287s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "just hold it like that when the three segments have been bolted together the two fitters carefully push the unwieldy component through the engine the camshaft is made of surface hardened steel and will not be replaced throughout the entire life of the engine after two days the work at station 1 is completed one of the 25 mega diesel engines manufactured here each year is transported to the next station by overhead gantry the rotary station is even bigger than the first station here fitters will turn and wheel the Colossus", "start_timestamp": "00:22:19", "end_timestamp": "00:22:59", "start_second": 1339, "end_second": 1379, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=1339s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "several times and even turn it upside down the crankcase is the biggest component of the mega diesel engine now the second biggest is added the gigantic crankshaft six meters long and weighing six thousand kilograms it is the crucial element at the heart of the engine [Music] operating at 
full speed the heavy crankshaft will rotate at a staggering twenty revolutions per second [Music] for installation of the crankshaft the engine has to be turned upside down the fitters wheel the cage suspended from the overhead gantry the", "start_timestamp": "00:22:59", "end_timestamp": "00:23:53", "start_second": 1379, "end_second": 1433, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=1379s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "huge shaft now moves towards the crankcase once again every millimeter counts very slowly six tons of high-grade metal are approaching the interior of the engine if the component were to slip irreparable damage would be caused [Music] after 20 minutes the difficult task is completed the fitters now turn the huge frame a few more degrees to gain better access to the engine's inlets over the next few hours they will install 20 of these cylinder Pistons later the entire combustion process will take place in", "start_timestamp": "00:23:53", "end_timestamp": "00:24:35", "start_second": 1433, "end_second": 1475, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=1433s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "the power units as they are known each power unit has a capacity of 17.5 litres and is basically an independent single cylinder engine the workshop also contains the pre assembly site for the power units before assembly of the cylinders can get underway the huge connecting rods have to be measured this includes measuring the bearing shells because if the bearing shells are too small this could damage the crankshaft all the measurements are recorded exactly to ensure precise positioning then the work can begin the fitters first prepare the",
"start_timestamp": "00:24:35", "end_timestamp": "00:25:28", "start_second": 1475, "end_second": 1528, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=1475s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "Pistons each of which has a diameter of 27 centimeters weighing 100 kilograms each con rod can only be moved with a crane the con rod is carefully lowered into the piston the retaining bolt is then slid into place it takes a quarter of an hour to install the first of 20 Pistons next the focus is on the complex cylinder head on the rotary table the fitter inserts four bolts and secures the cylinder liner with another 24 bolts to guarantee precision they are tightened automatically by a machine the process is fully automated and monitored", "start_timestamp": "00:25:28", "end_timestamp": "00:26:19", "start_second": 1528, "end_second": 1579, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=1528s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "to ensure that I haven't forgotten the bolt and that every bolt has the right torque the computer determines the next operation and records every action by the fitter now the piston together with the con rod is introduced carefully and slowly into the cylinder a single error would cause considerable damage later 20 of these power units will make the 16 crankshaft rotate at a speed of 1,150 revolutions a minute a final components and assembly will be complete this is part of the exhaust the power unit weighs 750 kilograms in future it", "start_timestamp": "00:26:19", "end_timestamp": "00:27:15", "start_second": 1579, "end_second": 1635, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=1579s", "title": "Exceptional Engineering | Mega Diesel Engine | 
Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "will have to withstand a great deal the ignition phase forces the piston downwards turning vertical motion into the circular motion of the crankshaft the crankcase is turned once more and again the overhead gantry comes into action as it brings up the power unit [Music] the men slowly lower the cylinder unit into the engine in its interior the connecting rod grips the crankshaft with millimeter accuracy the team installed 20 power units in one day the con rods are provisionally secured with white dummies because the engine will be", "start_timestamp": "00:27:15", "end_timestamp": "00:28:04", "start_second": 1635, "end_second": 1684, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=1635s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "turned once again and nothing must be allowed to shift its position the fitters reposition the engine block in order to install the 20 bearing shells in its underside the shell weighs 25 kilograms and is pushed on to the bolts of a con rod later it is this area that will have to withstand the greatest forces the up and down motion of the piston causes the crank shaft to rotate 20 times a second the forces involved are extreme and all the components will have to withstand them for many years a special procedure", "start_timestamp": "00:28:04", "end_timestamp": "00:28:44", "start_second": 1684, "end_second": 1724, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=1684s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "is followed to secure the con rod the men equipped the bolt with a hydraulic clamping cylinder they then attached oil 
pressure lines [Music] the hydraulic system forces the bolt apart at a pressure of 1,600 bar now the nut can be screwed on and the pressure reduced again the threads of bolt and nut are pulled together in a solid link installation of the power units is now finished along with the preliminary work on the fuel lines and the exhaust and cooling systems the gantry transports the engine block on to station 3 where a", "start_timestamp": "00:28:44", "end_timestamp": "00:29:31", "start_second": 1724, "end_second": 1771, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=1724s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "two-man team focus on the gear train here it is about to be sealed with a cover but beforehand the men coat the edges with a special sealing compound [Music] weighing 200 kilograms the cover is now attached it is secured with dozens of bolts in the years ahead the gear train will not be opened again depending on its application the mega diesel engine will have a service life of up to 35 years only then will maintenance be required back to the Mediterranean and Pozzallo harbour on Sicily where the high speed ferry Jean de la Valette is", "start_timestamp": "00:29:31", "end_timestamp": "00:30:21", "start_second": 1771, "end_second": 1821, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=1771s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "berthed three levels below deck a team is working against the clock the men have eight hours in which to replace two power units from one of the four diesel engines the maintenance work can only be carried out by specialists while the covering is being removed at the top one of the men crawls under the still warm engine to drain the cooling
liquid he then opens the heavy covers to gain access to the interior of the engine above him two mechanics are loosening the bolts on the cylinder unit so far everything is going according to plan", "start_timestamp": "00:30:21", "end_timestamp": "00:31:03", "start_second": 1821, "end_second": 1863, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=1821s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "a crucial moment the power unit has to be taken out the fitter has to open the eye of the connecting rod which encloses the crankshaft the nuts can only be loosened with a hydraulic bolt tensioning cylinder now pressure of up to 1,600 bar is applied through hoses to ensure that all the bolts are loosened the four main bolts on the upper side of the power unit are also loosened with the help of the hydraulic system finally the power unit is free using nothing more than a pulley and their own strength the men maneuver the power unit", "start_timestamp": "00:31:03", "end_timestamp": "00:31:41", "start_second": 1863, "end_second": 1901, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=1863s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "out of the crankcase centimeter by centimeter 750 kilograms make their way upwards working at a temperature of 50 degrees Celsius the men now haul the unit through the engine room until it is under the service hatch where the hook from the small truck mounted crane on deck is already waiting after three hours the first power unit is taken aloft care is still vital because the component is by no means scrapped it will be given a complete overhaul and at some time or other be installed in another series 8000 mega diesel engine", "start_timestamp": "00:31:41",
"end_timestamp": "00:32:17", "start_second": 1901, "end_second": 1937, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=1901s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "today the part is first placed in a special transport frame but why did it actually need to be replaced it is suspected that because of minut cracks in the cylinder head cooling water was able to get into the combustion chamber but only examination under a microscope will provide exact information in flicks half and on lake constance mt you technicians have made good progress the mega diesel engine is now being fitted with a turbocharger group a complex component the supercharger group gives the engine it's impressive height of 3 meters 50", "start_timestamp": "00:32:17", "end_timestamp": "00:33:03", "start_second": 1937, "end_second": 1983, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=1937s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "comprised of several thousand separate parts the turbochargers are responsible for a huge increase in the performance of the marine diesel engine every year fitter Tobias Haider assembles 25 such turbochargers for series 8000 engines the purpose of the turbocharger is to increase the oxygen content of the combustion chamber this is the intake for the exhaust gases here we have the blade wheel which can rotate here exhaust gases flow out again and on this side drive the blade wheel this is where the air is then drawn in here it is", "start_timestamp": "00:33:03", "end_timestamp": "00:33:44", "start_second": 1983, "end_second": 2024, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=1983s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": 
"https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "compressed and here it enters the intercooler 40,000 revs per minute and the boost pressure the four and a half bar compared with two and a half to 3 bar for a car tire lyta needs about a week to assemble the two-ton turbocharger today he's sealing the housing airtight to prevent any toxic exhaust fumes getting into the machine room later the turbocharger group is then transported by overhead gantry to station five now the turbocharger can be attached to the engine block two men connect apart with the exhaust circuit", "start_timestamp": "00:33:44", "end_timestamp": "00:34:27", "start_second": 2024, "end_second": 2067, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=2024s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "the series 8000 mega diesel engine now weighs around 40 tons the clutches then flange mounted the fitters maneuver the huge part to the end of the crankshaft and once again they need special tools the 15 this is the cylinder with which the clutch flange will later be pushed to the back [Music] here to the components have to expand hoses ensure a high oil pressure of 1500 bar [Music] the flange expands and little by little the clutch moves on to the shaft two minutes later the men release the pressure again the connection is solid", "start_timestamp": "00:34:27", "end_timestamp": "00:35:28", "start_second": 2067, "end_second": 2128, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=2067s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "the second part of the clutch is now bolted on above all the giant torque damper is designed to prevent the clutch breaking off when the mega diesel 
engine enters operation once again in Friedrichshafen it's a case of ready for takeoff then the overhead gantry transports the massive engine through the workshop it will now be lowered on to the mega diesel engine's oil pan the 40-ton Colossus is placed securely on six stilts the oil pan is filled with two-thirds of maximum capacity that's 1600 liters which is the volume", "start_timestamp": "00:35:28", "end_timestamp": "00:36:21", "start_second": 2128, "end_second": 2181, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=2128s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "pumped through the engine twice every minute with total precision a hydraulic hoist raises the sump to be bolted on two more in the middle should be fine at the Friedrichshafen engine plant all the tasks for the day have been completed but on Sicily there is still a lot to be done the high speed ferry Jean de la Valette is still berthed at the quayside the maintenance team on board the catamaran have got a tough job on their hands two power units need to be replaced and the men now have only six hours to install", "start_timestamp": "00:36:21", "end_timestamp": "00:36:57", "start_second": 2181, "end_second": 2217, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=2181s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "them the replacement unit which has been given a complete overhaul is lowered into the machine room through the service hatch the seven-man team are all specialists who have been specifically trained for this engine series slowly they lower the 750 kilogram part into the crankcase tough manual work with the help of a pulley in the confines of the machine room there is no space for an automatic ceiling crane
the con rod cover then makes its descent since it weighs only 25 kilograms it is no problem for the mechanics skillful hands easily slip the", "start_timestamp": "00:36:57", "end_timestamp": "00:37:49", "start_second": 2217, "end_second": 2269, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=2217s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "cover on to the threaded bolts two nuts and the job is done the team don't talk much each man knows what he has to do only in this way will they be able to meet their deadline the next step is to get the injector ready it has to be cleaned greased and have new seals fitted on the cylinder head the tolerance of the rocker arms and the valves also has to be reset each feeler gauge has a specific thickness enabling the mechanic to adjust the valve clearance exactly [Music] the cover is put on and the men have reached the halfway stage the first of", "start_timestamp": "00:37:49", "end_timestamp": "00:38:31", "start_second": 2269, "end_second": 2311, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=2269s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "the two power units has been replaced the team still has four hours for the repeat procedure removing the cylinder unit lifting it on deck with the crane and packing it safely for transport the new power unit is then lowered into the machine room installed bolted greased and adjusted all the men can do now is hope that the engine will spring to life immediately the fitters are still working but the ship's crew are already closing the maintenance hatch the first trucks will soon be driven on board everything is", "start_timestamp": "00:38:31", "end_timestamp": "00:39:10", "start_second": 2311,
"end_second": 2350, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=2311s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "still going to plan but will the overhauled engines start up [Music] at MTU on lake constance the new engine block is again being transported by crane through the workshop in the meantime the mega diesel engine has become even heavier just before its final assembly station it now weighs nearly 48 tons at station six the fitters focus on the electronic system and on pumps and lines of all kinds the series 8,000 marine diesel engine is a complex structure of circulation lines for fuel oil air and exhaust fumes", "start_timestamp": "00:39:10", "end_timestamp": "00:40:00", "start_second": 2350, "end_second": 2400, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=2350s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "several hundred meters of copper pipes tubes cables and hoses have been installed in and on the engine two thousand liters of diesel fuel per hour and hundreds of cubic meters of air in exhaust gases along with three thousand liters of oil every minute will flow through the engine all the lines are special home and custom-made products like the pipe for the sea water cooling system the fitter assembles the separate elements of the coiled pipe in a gauge anything that's slightly out is ground to size so that it fits with millimeter", "start_timestamp": "00:40:00", "end_timestamp": "00:40:38", "start_second": 2400, "end_second": 2438, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=2400s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": 
"w4uMX5mWAF4", "text": "precision [Music] this is I this site needs to fit now it fits first of all a welder joins the various sections loosely together they're made of a very special material it's a cute print achill ferrous alloy the ship's engine is exposed to salt water but that can't affect this pipe welding curved sections while the workpiece is turning is a big challenge for any welder the seam has to be absolutely watertight if any sea water got into the engine it would destroy it the final pipe for the engine block is now ready it has been a long road over", "start_timestamp": "00:40:38", "end_timestamp": "00:41:34", "start_second": 2438, "end_second": 2494, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=2438s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "the last five weeks around 20,000 parts have been attached to the crankcase the result is an extra-large marine diesel engine made in Germany once again the team of 35 fitters has done a great job but one thing is still missing the test run no engine leaves the plant without having been tested there are 46 test benches here and our mega diesel engine from series 8000 is placed on the biggest we start up the engine with an air starter that operates with 40 bar it drives the flywheel which causes the engine to start up before it", "start_timestamp": "00:41:34", "end_timestamp": "00:42:18", "start_second": 2494, "end_second": 2538, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=2494s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "started immortal finally the big moment comes the mega diesel engine springs to life for the first time at the control console hold on Luthor and a colleague increased the speed the engine's 
performance is measured with the help of a water brake attached to its transmission the water brake has a paddle wheel which rotates in the water creating an artificial resistance [Music] after half an hour it is clear that the engine has passed the test it functions flawlessly in the hull of the Jean de la Villette ferry still berthed in Sicily another", "start_timestamp": "00:42:18", "end_timestamp": "00:43:18", "start_second": 2538, "end_second": 2598, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=2538s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "diesel engine of the same type still has to pass its test a team of fitters had just eight hours in which to replace the heart of the engine the two power units with cars and trucks now queued up in front of the ferry's loading ramp the engine just has to start up otherwise well that doesn't bear thinking about no way but in fact the engine won't start so the fitters have to check all the lines and try again [Applause] success the engine is running again and to the ears of the fitters the deafening noise sounds like music the pitstop has", "start_timestamp": "00:43:18", "end_timestamp": "00:44:23", "start_second": 2598, "end_second": 2663, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=2598s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "taken 8 hours and cost thousands of euros but the high speed ferry is now ready once again to make the crossing to Malta the ship's engineer just has to give the green light then the team of fitters can call it a day the diesel engine can now look forward to a long service life these engines run virtually forever and ever under typical conditions we're talking about 70,000 operating hours after
that the engines are taken out and checked for wear and if everything is okay they are reassembled and put back into service", "start_timestamp": "00:44:23", "end_timestamp": "00:45:05", "start_second": 2663, "end_second": 2705, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=2663s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "it's already dark when the Jean de la Villette reaches Malta its engine is working perfectly after performing a final skillful turn in the narrow basin the captain reverses the ship alongside the quay [Music] the high-speed catamaran will again be able to cross the Mediterranean three times a day taking cars and trucks goods and tourists from Malta to Sicily and back in record time and reliably thanks to four mega diesel engines in its hull power made in Germany [Music] by Lake Constance the brand new engine has passed its test with flying colors", "start_timestamp": "00:45:05", "end_timestamp": "00:45:56", "start_second": 2705, "end_second": 2756, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=2705s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "now the 7 meter long Colossus is heading for the wash unit the biggest of its kind far and wide every large engine is washed here on average the procedure takes five hours at a temperature of 70 degrees the engine is washed with detergent and then dried for paint spraying it's a really tough job clad in a heavy protective suit the man with the high-pressure cleaner has to cope with heat noise and damp after five hours the engine is spotless above all it is free from grease essential if it is to be sprayed later", "start_timestamp": "00:45:56", "end_timestamp": "00:46:44", "start_second": 2756,
"end_second": 2804, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=2756s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "with a special type of paint the roughly one thousand cubic meters of water that are used will be treated and recycled [Music] prior to paint spraying parts of the engine are masked the sensitive cables and pipes must be kept free of the paint which could change their thermal and electrical properties finally the engine has given its coat this too is a process that takes time the Motown big slice we apply two coats first a primer then it's left to dry for about four hours then we apply a thick coat which also", "start_timestamp": "00:46:44", "end_timestamp": "00:47:37", "start_second": 2804, "end_second": 2857, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=2804s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "w4uMX5mWAF4", "text": "takes about four hours to dry it's a lot of work for the two paint sprayers who go through 200 leaders of paints including drying time the mega diesel engines spend 16 hours in the cabin now it is ready the colossus took five weeks to manufacture and will power a vessel for several years the series 8000 mega diesel engine has a capacity of 350 liters and an output of thirteen thousand six hundred horsepower total cost the lower end of a scale between 1 and 10 million euros well packed the engine is ready for delivery worldwide", "start_timestamp": "00:47:37", "end_timestamp": "00:48:24", "start_second": 2857, "end_second": 2904, "url": "https://www.youtube.com/watch?v=w4uMX5mWAF4&t=2857s", "title": "Exceptional Engineering | Mega Diesel Engine | Free Documentary", "thumbnail": "https://i.ytimg.com/vi/w4uMX5mWAF4/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", 
"text": "Thank you. It's a great pleasure to see all of you here tonight. The festival has had conversations about science and religion over the years. Perhaps some of you have come to some of those. And oftentimes, there are two sides represented in that conversation and sometimes the two sides, you know, science and religion, sometimes they're contentious, sometimes they're harmonious, but tonight we're doing something differently. We really only have one side here tonight. So the group of people who are going to come out for this discussion, they're all scientists.", "start_timestamp": "00:00:00", "end_timestamp": "00:00:44", "start_second": 0, "end_second": 44, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=0s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "They all come from the background of science, but our goal is to see if by walking this one side, this one trajectory of science, we can gain some illumination into the other side. Into the side of religion side of faith. Before I bring out our esteemed group of panelists, I just want to set some context and to do so, I'm going to begin with something which is presumably familiar to many of you. So this is what a beautiful midnight sky brimming with stars looks like in New York City. Now, I also have a little cabin, Upstate New York in the Catskill Mountains and when I'm", "start_timestamp": "00:00:44", "end_timestamp": "00:01:28", "start_second": 44, "end_second": 88, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=44s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "up there and it's a nice, dark night sky, I can look up and see something that looks just like this. Maybe not just like this. 
This takes a Hubble Space Telescope, you know? But you get the idea and when you see a wondrous sky like this, you can't help but ask yourself, how does it all work? How did it all come to be? And I have spent part of my professional life trying to advance the scientific understanding of some of these questions, and because I work on the more mathematical end of physics, when I look up, I tend to see order and harmony in a peculiar language.", "start_timestamp": "00:01:28", "end_timestamp": "00:02:06", "start_second": 88, "end_second": 126, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=88s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "The language of mathematics, a language of symbols. But, many others, when they look up at a sky like this, it brings to mind other things, right? Ideas of soul, of eternity, of divinity, of God. And for some, that kind of talk, it feels kind of loose or vague. For some, it's even off-putting. But when you look at the data, you see something utterly remarkable, right here in the 21st century, the modern technological age and we have long since cracked the atom, explored the surface of Mars, detected gravitational waves and so much more.", "start_timestamp": "00:02:06", "end_timestamp": "00:02:46", "start_second": 126, "end_second": 166, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=126s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "There are still many of us who are believers. So if we look at some of the numbers, about say 2.2 billion of us identify as Christians. About 1.7 billion are Muslims. Hindus, Buddhists, that gives us another two billion, plus, if we throw in my little tribe, it's about 14 million, right? 
And then if we add in the atheists, this takes us to one and a half billion which is just to say there's a lot of people on this planet who would look to the heavens and think of heaven. So if aliens were able to sweep down toward planet Earth, and let's say they had some", "start_timestamp": "00:02:46", "end_timestamp": "00:03:26", "start_second": 166, "end_second": 206, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=166s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "wondrous equipment that allowed them to detect religious belief, to give us a kind of heat map of faith, this is what our planet would look like. You could probably work out the color scheme for yourself. Blue is Protestant, Red is Catholic and so forth. You get the idea. We are a religious planet. Personally, I am not religious in any conventional sense, but I do consider myself spiritual and I certainly do consider myself curious. One thing that I have certainly gotten ever more curious about is why do we believe?", "start_timestamp": "00:03:26", "end_timestamp": "00:04:00", "start_second": 206, "end_second": 240, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=206s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "Now, the simplest answer is we have religious belief because what religion tells us is true. That raises a whole lot of challenges that we're all familiar with and perhaps the most relevant for tonight's discussion is there are over 4000 distinct religions practiced on Earth and if we just take one of them, say we parse Christianity a little more finely, there are over 33,000 distinct denominations. They can't all be right. 
So the natural supposition is that at most one of them is right, which would mean that", "start_timestamp": "00:04:00", "end_timestamp": "00:04:35", "start_second": 240, "end_second": 275, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=240s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "if Sarah here, happy in her own beliefs, she denies therefore the beliefs of all others, like Terrik over here who again, happy in his own faith, denies the validity of all others and that goes true for Pim and for Ofryim and also for Amalyia and it even holds for, say this guy over here, Richard, who not only denies the validity of all other beliefs, he denies the validity of all beliefs. We may be a believing planet, but most of us deny the validity of most beliefs, which means that even if Sarah holds to her religion because it is true, she still needs to explain", "start_timestamp": "00:04:35", "end_timestamp": "00:05:16", "start_second": 275, "end_second": 316, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=275s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "why everybody else holds to their own misguided faiths. And that holds true for everybody else. So this takes us to a simple but remarkable conclusion. Normally, the discussion of science and religions, you know, it all comes down to what's right, what's wrong, what's true, what's false. But here we see that even if a given religion is true, it hardly changes the question at all. We still need to ask why it is that so many of us have a tendency to believe. 
We have to ask ourselves, what is it about the human species that drives us to find order", "start_timestamp": "00:05:16", "end_timestamp": "00:05:54", "start_second": 316, "end_second": 354, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=316s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "and meaning and, in particular, to find the turn toward the supernatural so utterly natural. In 1936, this guy over here, Albert Einstein wrote a letter to a school girl named Phyllis who had asked Einstein about his own religious beliefs. Everyone who is seriously involved in the pursuit of science becomes convinced that some spirit is manifested in the laws of the universe. One that is vastly superior to that of man. In this way the pursuit of science leads to a religious feeling of a special sort which is surely quite different from the religiosity of someone more na\u00efve.", "start_timestamp": "00:05:54", "end_timestamp": "00:06:39", "start_second": 354, "end_second": 399, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=354s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "Much has been made about Einstein's use of this phrase, religious feeling, but his later writings made very clear that he was speaking of an abstract spirituality, not a conventional religion. The word of God is, for me, nothing more than the expression and product of human weaknesses. The Bible a collection of honorable, but still primitive legends which are nevertheless pretty childish. No interpretation no matter how subtle can, for me, change this. 
Charles Darwin, the Father of Evolution by natural selection, he allowed for the possibility", "start_timestamp": "00:06:39", "end_timestamp": "00:07:18", "start_second": 399, "end_second": 438, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=399s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "of God. I have never denied the existence of God. I think the theory of evolution is fully compatible with faith in God. I think the greatest argument for the existence of God is the impossibility of demonstrating and understanding that the immense universe, sublime above all measure and man, were the result of chance. At the same time, Darwin also noted that a religious belief, a religious sensibility could emerge from the interplay between biological and cultural evolution. Nor must we overlook the probability of the constant inculcation in a belief in God on", "start_timestamp": "00:07:18", "end_timestamp": "00:08:00", "start_second": 438, "end_second": 480, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=438s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "the minds of children, producing so strong and perhaps an inherited effect on their brains, not yet fully developed that it would be as difficult for them to throw off their belief in God as for a monkey to throw off its instinctive fear and hatred of a snake. The Dalai Lama has his own iconic perspective on these issues. 
Both Buddhism and modern science shared a deep suspicion of any notion of absolutes, whether conceptualized as a transcendent being, as an eternal unchanging principle such as soul, or as a fundamental substratum of reality.", "start_timestamp": "00:08:00", "end_timestamp": "00:08:43", "start_second": 480, "end_second": 523, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=480s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "Both Buddhism and science prefer to account for the evolution and emergence of the cosmos and life in terms of the complex interrelations of the natural laws of cause and effect. From the methodological perspective, both the traditions emphasize the role of empiricism. In the Buddhist investigation of reality, at least in principle, empirical evidence should triumph over scriptural authority, no matter how deeply venerated a scripture may be. Years ago, I had the pleasure of sharing the stage with the Dalai Lama in an event that", "start_timestamp": "00:08:43", "end_timestamp": "00:09:20", "start_second": 523, "end_second": 560, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=523s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "took place down in Texas and I had an opportunity to ask him a question. The question I asked was, I said, \"Look, there are all these books out there that make the case that what we're doing in modern physics is somehow a recapitulation or a reflection of ideas that ultimately find their origin in eastern religious thought.\" So I asked him, \"Is this true? 
Is this your perspective?\" And he very forthrightly said, he said, \"Look, when it comes to questions of consciousness, that's where we have something to offer science.\"", "start_timestamp": "00:09:20", "end_timestamp": "00:09:57", "start_second": 560, "end_second": 597, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=560s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "But he said, \"When it comes to understanding the fundamental laws and the particles and all that detail about how the world actually works,\" he said, \"We need to look to science.\" So it was a kind of remarkable moment where this great spiritual leader showed this remarkable and broad embrace of science. At the same time there are great scientists who show a similar embrace of religious thought. Here's Nobel Laureate, William Phillips. The point is that there are plenty of scientists who see no difficulty in being serious about", "start_timestamp": "00:09:57", "end_timestamp": "00:10:34", "start_second": 597, "end_second": 634, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=597s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "their science and serious about their faith. I know plenty of others, and you\u2019ve seen the statistics that support that idea, but nevertheless there is a common misperception in society that this isn\u2019t the case. And here's Francis Collins, head of the National Institutes of Health. I think most people are actually kind of comfortable with the idea that science is a reliable way to learn about nature, but it's not the whole story and there's a place also for religion, for faith, for theology, for philosophy. 
But that harmony perspective doesn't get as much attention.", "start_timestamp": "00:10:34", "end_timestamp": "00:11:11", "start_second": 634, "end_second": 671, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=634s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "Nobody's as interested in harmony as they are in conflict, I'm afraid. 2015, Pew Research Foundation found that the percent of Americans that agreed with the statement that science and religion are often in conflict, they found that agreement with that was almost 60% and that again is often how the conversation is framed. Science versus religion. That is an important question. It may come up here tonight, but it's not the focus of what we're talking about here tonight. And so we're asking ourselves, can we use science to illuminate religion?", "start_timestamp": "00:11:11", "end_timestamp": "00:11:47", "start_second": 671, "end_second": 707, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=671s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "Can we gain some understanding of why people have a need to look to a power beyond themselves, beyond the laws of physics? Is that need written into our DNA? The natural selection for that kind of worldview, right? Why in the world does this world have so many brains that want to believe? That's the question. And to deal with this question, try to gain some insight, we have a great group of thinkers and I'd like to now bring them out to the stage. 
Our first participant is professor emerita from the College of William and Mary, where", "start_timestamp": "00:11:47", "end_timestamp": "00:12:25", "start_second": 707, "end_second": 745, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=707s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "she taught anthropology for 28 years, author of numerous books, including Personalities on the Plate, How Animals Grieve, and Evolving God. Please join me in welcoming our first guest ... Barbara King. Our next guest is a research scientist at NYU Langone Medical Center. He's also professor of cognitive and affective neuroscience at NYU, co-founded the Nonduality Institute where he is the principal science investigator. Please join me in welcoming neuroscientist Zoran Josipovic. Also with us tonight is a university distinguished professor of psychology at Northeastern University", "start_timestamp": "00:12:25", "end_timestamp": "00:13:03", "start_second": 745, "end_second": 783, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=745s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "with appointments at Harvard Medical School and the Mass General Hospital. In addition to the book, How Emotions Are Made, she has published over 100 scholarly papers. Please join me in welcoming Lisa Barrett. All right, finally. 
Our guest is the Johnstone Professor of Psychology at Harvard University, a two-time Pulitzer prize finalist and author of the bestselling books including, How the Mind Works and The Language Instinct, a pioneer and champion of evolutionary psychology, named one of Time Magazine's 100 Most Influential People, please welcome Steven Pinker.", "start_timestamp": "00:13:03", "end_timestamp": "00:13:36", "start_second": 783, "end_second": 816, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=783s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "All right, so we're going to have a pretty free form discussion here, where we're going to try to address some of these questions and we're going to organize the discussion into three parts, roughly speaking. A kind of trinity of parts, befitting for tonight's discussion. We're going to talk about some of the history of religious belief. We're going to talk about the longevity, the fact that this is something that has stuck with us for some time. Then we're going to focus on the benefit, if at all, for this kind of way of interacting", "start_timestamp": "00:13:36", "end_timestamp": "00:14:15", "start_second": 816, "end_second": 855, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=816s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "with the world. What I'd like to do before getting started, if you don't mind, especially since it's a nice small group here, it's good to get a sense of where people are coming from in this kind of discussion, so if we could just sort of go one by one, just sort of give us a sense of where you ... We'll do it ... 
If you don't mind, and you don't have to, but if you're willing to share it, just a couple of words on where you come from in the religious spectrum. Steven, you willing to just say a few words? You mean our own beliefs personally?", "start_timestamp": "00:14:15", "end_timestamp": "00:14:49", "start_second": 855, "end_second": 889, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=855s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "If you don't mind. You don't have to, but if you're willing to. Yeah. Well, I don't believe in the existence of supernatural entities, including God, souls, spirits, genies, devils, and so on. I am a ... I belong to the same tribe as you. I'm Jewish and appreciate many of the iconography, the traditions, the community of my own and other cultural groups, but that doesn't mean you have to sign on to the content, and I don't. Right. Lisa. I would say Steve pretty summed it up pretty well for me too. We practice some rituals in our home as sort of, I don't know, not exactly archeological", "start_timestamp": "00:14:49", "end_timestamp": "00:15:35", "start_second": 889, "end_second": 935, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=889s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "artifacts, but they are kind of artifacts of the past, you know? If we decide to light candles on Friday night, I'm using candlesticks that my great-grandmother schlepped from Russia and that people have been doing this for over 5000 years, and that's meaningful. I also think that Judaism is an interesting moral code that is somewhat ... emphasizes somewhat more behavior over intent, which is appealing to us in some ways. I would say we're ... 
colloquially, we're atheists as a ... definitely in our house, although we do have trappings, as I said, of ritual in the way that I described.

Yep. Zoran?

I was raised as an atheist, but later discovered that really my family believed in scientism: like, science has an answer to everything. It's a form of religion, I think, for some people. Personally, I have practiced meditation for over 35 years, and I'm mostly interested in these mystical unitary states of consciousness, where people experience either unitary consciousness alone or unitary consciousness together with experience. I'm interested in what it does to a person and what it does to the brain.

Right. Barbara?

Growing up in New Jersey, I was raised as a Presbyterian and spent a fair amount of time in church. I now identify also as an atheist. When I travel, I do find myself drawn to churches, to sitting in the stillness of a church and to looking at the art and the architecture. I think that is a beautiful part of our history, but I do that as an atheist. And for my own sense of spirituality, I go to a Springsteen concert.

Right.
So, you know, there are some curious human behaviors that strike us as unusual. Like, I think ... I don't know how much of this is true, but Beethoven is said to have always dunked his head in a bucket of ice water every morning. Ben Franklin is said to have stood naked in front of an open window every morning. Nikola Tesla, you know, a great champion and iconic scientific figure, apparently used to curl his toes a hundred times each night before going to sleep. So you sort of hear those, you raise your eyebrows, it's kind of curious, and so on. But we don't feel the need to explain that kind of behavior. When it comes to a behavior that is pervasive and that lasts for thousands of years, though, it feels like it deserves an explanation, and that's really why we're having this conversation here tonight.

So, maybe we'll start with you, Barbara. When I hear the word faith or religion, my mind automatically goes toward one of the major religions that are practiced in the world today. Is that too limited a view?
Yeah, I think it's a very natural view, but speaking anthropologically, if we were to do that heat-sensing map of the world, we would see people who not only believe in God, or don't believe in God, but also many, many people who believe in gods, plural, in spirits in the forest, who venerate ancestors, or who have an enormous range of beliefs. So, I think broadening our view to understand that there are numbers of ways, not just in the past but now, to believe is a very helpful starting point.

Now, you've also done work where you've gone beyond this species, right?

Absolutely, yes. My work is in animals, and there's a fascinating conversation going on now about whether it is reasonable to suggest that animals other than us do have a sense of either spirituality or religiosity, and there's a very invigorated debate going on. You know, Jane Goodall was the very first person to suggest, as far as I'm aware, that chimpanzees may be spiritual. But this has continued over decades. This is not my view. I am not suggesting that chimpanzees are spiritual or religious.
Where I come in is suggesting that their behavior is an evolutionary platform, so that what we see in our closest living relative gives us an understanding of the building blocks of what later became our religiosity. So, we know that chimpanzees, for example at a waterfall, can show what we might understand as a sense of awe and wonder. We know that chimpanzees can take the perspective of another through theory of mind. We know that they can show empathy and compassion, and that they have their own rituals and their own rules. And so I think that we wouldn't be where we are today without our primate past, which includes, of course, not only living apes but also, as we'll talk about later I imagine, other human ancestors: early Homo sapiens, Neanderthals. So, just as culture evolved, language evolved, and technology evolved, I believe that the human religious imagination evolved.

So, Steve, part of what we're doing here is trying to think about behavior and think about evolution and sort of how they can play off of each other, and I know that the field of evolutionary psychology is dedicated to trying to make those kinds of connections precise.
Can you just give us a sense of what evolutionary psychology actually does and how it can give insight into these kinds of issues?

Well, the brain, like other complex organs, owes its non-random organization to natural selection. If there are circuits in the brain that accomplish improbable feats, then natural selection is the explanation for how they got wired up the way they are. And we're going to ask of various psychological features whether they are adaptations; that is, whether they increased the chances of reproduction in our ancestors. For a lot of psychological features that's pretty straightforward to do. It's no mystery why we see in stereo, because it's ... for many reasons, highly adaptive to get a sense of the third dimension. Why we're repulsed by kinds of substances that are likely to carry disease. Why we find certain partners sexually attractive. For religion, for religious belief, for supernatural belief, it's not so obvious. I don't think there's any accepted theory that religious belief, per se, is an adaptation. Rather, it can be a by-product of other adaptations, in particular the ability to attribute minds to other people. We can't literally get inside people's heads.
A mind is invisible, colorless, odorless, tasteless, but we couldn't survive as social beings unless we assumed that other people have minds as we do. We interpret their behavior in terms of their beliefs and desires. From there it may be a short step to attribute minds to entities that aren't other human beings, such as to trees and rivers and the wind, in which case we call it animism. We attribute minds to inanimate entities, to our own artifacts, in which case we call it idolatry. Or to no hunk of matter in particular, in which case we call it spiritualism: disembodied souls and spirits and father-like entities that don't have any material existence, but have this thing that we naturally attribute to one another. So it would be an extension. One would then have to explain why the adaptation of attributing minds to others, sometimes called theory of mind, or mentalizing, or mind reading, or intuitive psychology, should be so easy to overextend to entities that aren't in fact brains.
And there, part of the answer comes from experience: what kind of input do we have in living our lives that makes this belief congenial? A number of anthropologists have pointed out that before the advent of modern neuroscience, the idea that minds can exist independently of brains was not so farfetched. There's actually some compelling empirical data. Edward Tylor, I think, was the originator of this observation: that when we dream, for example, it's apparent that some part of us is up and about, walking around in the world, while our body's in bed the whole time. A natural hypothesis is that our ... some locus of experience is not wedded to the body, but can part company from it. Or in death: if someone suddenly collapses, they may look identical to the way they were a few minutes ago, but something seems to have left their body that animated it shortly beforehand. And reflections in still water, shadows, seem to capture the essence of a person, including their activity, their expressions, their goal-directed actions, again divorced from the actual hunk of flesh. If you're in a trance from lack of sleep, or a fever, or a drug, again, the experience is that your mind can part company from your body.
So, if you combine those experiences with our natural habit of attributing minds, it's not farfetched to think that minds can exist separately from bodies. Now we know better. We know that the brain is the locus of experience, and that there are many ways in which the brain can be vulnerable to illusions, dreaming being an obvious case. There's brain activity when we're asleep, and that's why we experience things. But before modern neuroscience, it wasn't such a crazy belief. One other ingredient is that we depend for our beliefs on other people, on experts. I believe a lot of things that I have no basis for believing in my own experience.

Like quantum physics.

Yeah, like superstrings. I really believe-

You believe in superstrings?

I do, because very smart people tell me that they exist and I trust them.

I don't say they exist. They may exist.

That they may exist. I give some non-zero probability of that.
That opens up a niche for people to market all kinds of beliefs about unobservable entities, including gods and messiahs and devils and so on, and a whole set of questions, which I won't talk about now, is: what are the incentives for the purveyors of supernatural beliefs? What's in it for them to get other people to believe in gods and souls and spirits? There are plenty of reasons, but that's the other part of the story.

Now, presumably the overactive assigning of agency out to the world is better than an underactive version of it, right? If you're walking around and there's a rock and you happen to think that it has a mind, so be it, but if you're walking around and there's a snake and you don't think it has a mind, you don't think it can attack you, that's probably not a good thing. So, evolutionarily speaking, presumably this overactive assigning of agency has adaptive value.

Possibly. It's not so clear. If it involves making sacrifices that are ultimately irrational, if it involves being manipulated by others, maybe not. But it may just be that the overall benefit of being able to attribute minds outweighs the cost in cases where others can exploit us. In the case of animals, of course, animals actually do have minds, so it's not such a crazy thing.
Indeed, a lot of ... In some hunter-gatherer peoples, they do attribute enormous amounts of intentionality to the animals they hunt, and with good reason, 'cause the animals really are trying to escape them for the same reason that we try to escape from threats. So that degree of extension is not so farfetched. It's when it comes to rocks and rivers and mountains and trees and wind that it becomes more problematic.

Right. So, Lisa, what is your view: are we at some level wired for belief, or is that not an important part of the equation?

I think it is, actually. When we say ... When you ask are we wired for beliefs, I think that that can mean a couple of different things, right? So, in a sense, you could say, well, all brains, actually every brain on this planet, to some extent, is wired to make predictions about what's going to happen next based on what's happened in the past. So, brains are not wired to react to things in the world; they're wired to predict. It's metabolically efficient to predict. Physiologically, most of the biological systems we have in the body are predictive to some extent. And so, if you mean ... A lot of people talk about predictions where ...
When I say prediction, I mean our brains, for example, change the firing of their own neurons in advance of sensory input arriving to the brain. That's how you're understanding the words that I'm speaking to you right now: you've had a lifetime of experience of patterns, encoding patterns of what these sounds refer to and the patterns in their temporal contingencies. All brains work like this, and if you believe that a prediction is like a belief, which scientists do write about predictions this way, as if they are beliefs or explanations that are preemptively offered to anticipate and explain incoming sensory inputs, then yes, we are wired. Another way in which we're wired, you could say, is that-

But that's for belief in things presumably that are demonstrably true.

That's belief in any case, right? So, the idea that the brain is wired for prediction as opposed to reaction is a general explanation; it's a general computational approach to understanding meaning-making of any sort.
So, that means making meaning of fluctuating changes in light, which you experience as sights, as vision. It's making meaning of fluctuating changes in air pressure, which you experience as sounds. And it's also making meaning of longer temporal sensory changes, which we would think of as an episode or an event. Little infant brains, you know, newborn brains ... A newborn brain is not like a miniature adult brain. It's not completely ... its wiring isn't completely finished, and we ... So, what infants are doing, to some extent, is waiting for a set of wiring instructions from the world. The brain expects certain inputs in order for it to wire itself normally, and it wires itself both to the physical circumstances it grows up in and also to the social circumstances it grows up in. We encourage ... So, that's sort of the normal aspect of brain development that's related to being wired for belief, but we also wire our children for belief in other ways. We indulge them.
In our culture, we indulge them in believing in the animacy of their blankets and their little cars and their little toys, and some people in this room might believe that their cars have minds, right? So, we do ... that's another way in which brains can become wired for belief, in the sense that development actually influences the wiring of the brain. Then we could also talk about feelings as the root of belief. To some extent, feeling is believing. When you believe ... When you feel something very strongly, you are more likely to believe it, and feeling is at the core of the wiring of our brains, and really, you could argue, of most mammalian brains. Some people would like to make that argument, to pull that argument even earlier.

I'm going to come back to that in a moment. Zoran, you've spent some time studying the human brain. Do you feel that there's evidence that we're ... that there's an internal physiological predilection for religious belief?

Yeah, I wouldn't so much ... Yes. As much as the brain is, I think, organized to be conscious, it's organized for spiritual experiences, and indirectly for beliefs, just as Steve and Lisa pointed out. I think something happened to us, to our species, right?
We don't know when; maybe 3,000 years ago, 5,000 years, maybe longer. Suddenly, we became conscious. We became conscious in a very unique way. It's not just that we have experience, or that we have conscious experience, but we know that we are conscious. We have implicit knowing that we are conscious. We have expressions of religiosity going as far back as we have records, you know, around 3,000 years ago, maybe longer actually, but we don't have records any more than that, of people really trying to figure out what this thing is. We're conscious: what is it? Who is this person who is conscious? What is it that's conscious inside us? And also, what is this universe? The way it appears when we perceive it with the depth of our consciousness, not just with the surface of our mind, but with the deepest part of ourselves. And so that gives rise to some very kind of deep, core explorations into the nature of the human mind that we have records of. When we look at the ...
What I personally feel is sort of the innermost core of religious practices, pretty much in every religious tradition, is that we find these unitary experiences, experiences of consciousness itself. They can be either very deep mental silence, in which all mental processes quiet down, and then there is just complete blackness, and within it, there's just awareness. Consciousness itself. It doesn't think, doesn't feel, doesn't need to do anything, but it's aware and knows that it's conscious innately, directly. It doesn't have to think. It doesn't have to take itself as an object. Just consciousness itself. Then, if that deepest part of ourselves wakes up suddenly within our experience, the quality of our experience changes dramatically, from this ordinary experience where I'm over here, I am limited to my body, limited to the surface of my skin, to whatever my mind has constructed and learned over the course of my life: who I am, what the world is, how to relate to each other, how I relate to others. So, we have this elaborate self-world model inside our head that filters everything you experience.
That takes a break temporarily, however briefly, and suddenly we experience that everything is one reality. One interdependent reality, but also, at the same time, one consciousness that seems to extend, that's the experience, and encompass everything. I think that religiosity tried to capture what this is. When theistic religion says that God is simultaneously transcendent and immanent in all things: so, in all things, in this experience here that we're having, right now, sitting in this wonderful place, this is the experience of God being transcendent and immanent at the same time.

That is one way of saying it, right?

I didn't look at it that way, but that's very true. So, another way to say it is that we have two sides to our consciousness. One side is the mind that creates experience. The other side is awareness, which is just like a mirror. It simply registers what is happening without doing anything to it. The two are different. In this view, they're separated by the substrate, which is kind of like an unconscious film, a matrix. It actually exists in the universe, they say. I don't.
What's interesting, what happens is, when the mind wants to find what consciousness is, it just finds itself. It finds attention, it finds intelligence, and it finds vigilance, but it can't ... If it doesn't know how, it can't penetrate through this unconscious substrate. And then it basically concludes there is no consciousness; it's just mental processes, right? From the side of awareness, what the substrate does is that awareness can't recognize itself. It can't recognize what it is directly, so it experiences itself as a subject who is having an experience. From that perspective, spirituality and spiritual beliefs are consciousness trying to find itself. It's trying to figure out what it is.

So, Barbara, can you take us back to the earliest evidence that we have for that kind of internal self-reflection that ultimately, we think, may have been the seeds of religiosity?

The first thing I'd like to start by saying is that, yes, it's certainly true that we attribute intentionality to a lot of animals, but the fact also is that they are intentional.
So, we certainly don't have a corner on the market of intentionality or consciousness or sentience or any of these other things. But if we're going to talk about the human evolutionary trajectory, we know that our species is about 200,000 years old. Our genus is around 2.4 million years old. So the question becomes: when do we start seeing any of these symptoms, if you will? It's very interesting that there's a cave in South Africa, Rising Star cave, that is the home of this human, perhaps ancestor, but hominid in any case, called Homo naledi, and apparently there were numbers of individuals who were literally dragged by others into a very deep, subterranean chamber in this cave. So, Rising Star is a very famous project in paleoanthropology, and one can often watch live feeds of the scientists trying to study these chambers, and they have to crawl through incredibly small passageways. And yet, we know that approximately 250,000 years ago, people were disposing of their dead in very intentional, ritual ways, going through a lot of effort and a lot of energy to do this. The problem becomes-

Is that controversial, or is that why?
The fact that they're bringing the people to the chamber is not particularly", "start_timestamp": "00:38:58", "end_timestamp": "00:39:36", "start_second": 2338, "end_second": 2376, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=2338s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "controversial. The next step\u2026 But whether it was a ritual burial. Exactly. The next step is controversial because of course we have this small problem, which is that belief doesn't fossilize, so we don't know, and we have people, we have a chamber, and we have our minds and, as we're talking about, we're searching and yearning always to figure this out. But isn't it the case though that it's ... the people of ... anthropologists have already discovered, let's say Neanderthal skeletons that are ... You know, they've been buried", "start_timestamp": "00:39:36", "end_timestamp": "00:40:05", "start_second": 2376, "end_second": 2405, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=2376s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "and posed in a particular way with things around them and so, it's- Like in Sungir, right? Right, but we're going in a kind of order so I'm starting a little earlier than Neanderthals. We have the roots of Neanderthal populations this time, but what's so fascinating is that the Neanderthal burials don't come until a hundred thousand years ago or 60,000 years ago, and Sungir in Russia, which is the Homo sapiens site just mentioned, is like 27,000.
So, my idea is that, again, we have some glimmers and some intriguing hints, 250,000 years ago.", "start_timestamp": "00:40:05", "end_timestamp": "00:40:38", "start_second": 2405, "end_second": 2438, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=2405s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "Now, let's just fast forward, let me leap over many thousands of years, we've come to Neanderthals. They are not our ancestors. They are our cousins. We used to say that they lived from 200 something thousand to 40,000 and then they went extinct. We no longer say that because here in the audience there's tons of Neanderthal genetic material in many populations, except some populations in Africa because we did not have Neanderthals in Africa. We find, just as you were saying, Lisa, that there are very intentional", "start_timestamp": "00:40:38", "end_timestamp": "00:41:11", "start_second": 2438, "end_second": 2471, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=2438s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "burials with all kinds of grave goods. So people didn't just stick people in the earth. They marked the graves as something special. To give you one example, there's a 40,000 year old burial of a toddler in what's today Spain, with a hearth all around, 60 sets of oryx and bison horns, a rhino skull. This was a place that mattered. In some sense we can think of it as a sacred place. The question is, is there belief in an afterlife? Is there belief in supernatural beings? How would we know?
We're imposing a great deal of our framework onto the past.", "start_timestamp": "00:41:11", "end_timestamp": "00:41:49", "start_second": 2471, "end_second": 2509, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=2471s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "Keep going in time, we come to cave art. And, of course, we're familiar with the cave paintings. These are not only early Homo sapiens, but also in some cases Neanderthals. We do know that now. This is a relatively recent discovery, that we are not the only cave painters, but what's fascinating for me about this is you have these glorious depictions of animals that these people hunted, but in addition to that, some very mystical and fantastic figures. A bird-headed man in Lascaux cave in France. A human that is part bison.", "start_timestamp": "00:41:49", "end_timestamp": "00:42:24", "start_second": 2509, "end_second": 2544, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=2509s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "Some other just wild figures. So this is not just people representing the reality they saw before them, but rather there's an interest in what is not in front of you, what is not just here and now. We fast forward one more time. We go to Turkey, to this particular, perhaps, temple, Gobekli Tepe, which is dated to that period, on a hill in Turkey. Massive 50 ton blocks that people moved onto a hillside and carved with, again, elaborate, largely animal, images. We think this is a ritual space. Not everyone agrees.
This is contentious.", "start_timestamp": "00:42:24", "end_timestamp": "00:43:03", "start_second": 2544, "end_second": 2583, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=2544s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "But in every single case there is a good argument to be made for the possibility of the human brain uncoupling itself from the here and now, to think about these questions of the supernatural. And we have hints. We have to go forward in time again before we come to a really institutionalized religious system. But again, I think the human religious imagination evolved because of all these earlier cues. Right, right. I think it's important ... I think that Barbara's bringing up something really important and", "start_timestamp": "00:43:03", "end_timestamp": "00:43:37", "start_second": 2583, "end_second": 2617, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=2583s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "that is, we're all talking ... We're sort of fluidly talking back and forth as if spirituality and religiosity are identical forms of meaning making and they're really not. There are many, many ways to be spiritual. Some involve belief in a supernatural deity with agency, but not all of them do, right? Some of them ... sometimes spirituality means just being full of awe and wonder at something larger than you, something that transcends the self, like in connecting with nature, for example.
As Einstein was saying in his book.", "start_timestamp": "00:43:37", "end_timestamp": "00:44:15", "start_second": 2617, "end_second": 2655, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=2617s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "Exactly, and I think ... So, one way to think about this is that when we're talking about the evolution of religious thought or spiritual thought or we're talking about the biology of spiritual thought, we have to be thinking about the fact that we're talking about different psychological features here. One has to do with connecting to something in the moment that's bigger than you and that might transcend you. One element or feature is about explanation, right? Another is about agency. And so those may not have all evolved at the same time, or perhaps they're not all meaningful", "start_timestamp": "00:44:15", "end_timestamp": "00:44:57", "start_second": 2655, "end_second": 2697, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=2655s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "for all people, so maybe everyone in this room has had a spiritual experience. They might not call it that, but they've had an experience where they've connected to something that's bigger than themselves that leaves them feeling awestruck, but not everyone would take the additional steps of trying to find an explanation in that or trying to find agency in that and so forth. 
But if we do go and focus on beliefs that do transcend just a sense that there's a larger reality that you are a part of and go toward a supernatural belief in things that science", "start_timestamp": "00:44:57", "end_timestamp": "00:45:32", "start_second": 2697, "end_second": 2732, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=2697s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "typically would not confirm, do you see the potential for an adaptive value, for a progression that would lead to a brain that would have a tendency to do that? It's hard to give an adaptive explanation for belief in entities that don't exist. There can be an adaptive explanation for the search for explanations, which obviously are not infallible and they can be misled by absence of evidence, by people who have an interest in promulgating certain explanations. I think you have to direct the question not at the content of beliefs that we associate", "start_timestamp": "00:45:32", "end_timestamp": "00:46:20", "start_second": 2732, "end_second": 2780, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=2732s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "with particular religions, but at particular ways of thinking, ways of interpreting the world, ways in which people influence the beliefs of one another. What are the kinds of things that we can hypothesize, and then what does that leave us vulnerable to hypothesizing which, from the perspective of science, we know may be incorrect, but can nonetheless be very seductive to a mind that is apt to think in certain directions.
So what's your view of those who've made the case that the adaptive value is not so", "start_timestamp": "00:46:20", "end_timestamp": "00:46:56", "start_second": 2780, "end_second": 2816, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=2780s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "much in the actual belief in things that perhaps don't exist, but it is from the cohesion, the group cohesion that that can yield. If there are many people for whom that belief is shared, then all of a sudden you've got stronger group bonding? Does that hold any weight for you at all? There is a folk theory of evolution that adaptations are all for group cohesion because whenever there is some mysterious aspect of human psychology for which it's not clear what the adaptive value is, people will say, \"Well, it fosters group cohesion.\"", "start_timestamp": "00:46:56", "end_timestamp": "00:47:37", "start_second": 2816, "end_second": 2857, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=2816s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "Why do we enjoy music? Group cohesion. Why do we dance? Group cohesion. But there are a couple of things wrong with that style of explanation. I'm very deeply suspicious of the explanation that always says group cohesion. One of them is, group cohesion is not, in fact, what natural selection selects for. It selects for propagation of genes. Sometimes groups, cohesive groups can help the individuals that compose those groups, but if a group is too cohesive, you could be exploited by the group.
You could be cannon fodder.", "start_timestamp": "00:47:37", "end_timestamp": "00:48:05", "start_second": 2857, "end_second": 2885, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=2857s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "You could be a sacrificial victim for the benefit, for the cohesion of the group. But any gene that would allow you to be exploited by the group would be selected out because genes are selected much more quickly than groups. Also, I think it's too easy to use our own intuition that we like to bond over music, over religion and so on, but that is itself a part of our psychology that needs an explanation. Why would beliefs in invisible entities make a group more cohesive? You can't take that for granted. That's as much of a puzzle to a psychologist as\u2026", "start_timestamp": "00:48:05", "end_timestamp": "00:48:41", "start_second": 2885, "end_second": 2921, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=2885s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "But we do see, we do see evidence of that even though we may need to explain it. We do, although ... supernatural beliefs can also divide a group, needless to say. There are wars of religion, precisely because they ... The content of those beliefs isn't derived from shared experience. They're not things that everyone can just open their eyes and see. They're things you have to be told.
And that means that if you're told by different shamans or different priests or imams then you can go to war over those beliefs.", "start_timestamp": "00:48:41", "end_timestamp": "00:49:12", "start_second": 2921, "end_second": 2952, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=2921s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "That's why group cohesion doesn't strike me as a satisfying explanation for belief. Lisa, do you have a different view of that? I have ... yes, I think I have a different view or maybe I want to add some information. You can be contentious. You could just like- Believe me, I have no problem with being contentious, at all. Anyone who knows me knows this is true. Here's what I want to say, that I think that there is an immediate advantage potentially, which is there are two that I can think of that relate to the functioning of a nervous", "start_timestamp": "00:49:12", "end_timestamp": "00:49:51", "start_second": 2952, "end_second": 2991, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=2952s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "system in the following way. First of all, uncertainty is tremendously stressful for a human nervous system. And I don't mean stress in a euphemistic way. I mean it adds a metabolic burden to a nervous system which, if it persists, can actually make someone sick, and I think religious beliefs can reduce uncertainty. They sometimes explain the unexplainable. Things that we now might explain through science used to be thought of as magic or as caused by a deity.
So, I think in some ways it is not just psychologically comforting, it's actually physiologically", "start_timestamp": "00:49:51", "end_timestamp": "00:50:41", "start_second": 2991, "end_second": 3041, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=2991s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "potentially less ... it reduces people's stress. It reduces their, what scientists would call allostatic burden. Very simply, just step back one minute and say, partly our brains evolved not to think and see and feel, but in order to regulate the systems of our body. As our bodies got more complex, brains got bigger. A brain's main job is to keep the systems of your body alive and well so that you can propagate your genes to the next ... let me just finish. I know you're going to disagree, but ... Your brain is constantly running a budget", "start_timestamp": "00:50:41", "end_timestamp": "00:51:28", "start_second": 3041, "end_second": 3088, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=3041s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "for the resources in your body and it's not budgeting money, it's budgeting glucose and salt and so on and so forth. And so, if you think about your brain running a budget for your body, uncertainty just drains that budget. Drains that budget much faster and makes it really harder for people. There's also, I think, a social aspect to this too, in the sense that we are social animals, we evolved to be social animals. It's one of our major adaptive advantages, to be social animals. 
But what that means is that we regulate each other's nervous systems.", "start_timestamp": "00:51:28", "end_timestamp": "00:52:02", "start_second": 3088, "end_second": 3122, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=3088s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "We don't bear that body budget on our own. We have other people to help us do it. There are other social species, right? So, insects are social, and they regulate each other's nervous systems through chemicals, through scent. Rats, and some mammals, add touch and they might add hearing and primates add vision. We, as primates, have all of those ways to regulate each other. Plus, we have ideas that we share. And so, there are many ways in which religious belief can actually reduce the metabolic burden on the nervous system.", "start_timestamp": "00:52:02", "end_timestamp": "00:52:42", "start_second": 3122, "end_second": 3162, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=3122s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "Is there data for that? I mean, is there data that really makes a convincing argument that religious belief does reduce ... There actually is. I'm not advocating this, I'm just saying as a scientist, there is data. There are data to show that people who ... I want to say this, you know, I'm not negating any of the challenges or problems that religious belief introduces to a fitness argument. 
I'm just saying that there is this other side where there are data to show that people who are religious actually are somewhat happier and healthier and have greater well-being.", "start_timestamp": "00:52:42", "end_timestamp": "00:53:24", "start_second": 3162, "end_second": 3204, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=3162s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "But that's because, of course, they're living amongst other people who believe what they believe. Steve, you had a response again. Barbara ... yeah. I just really wanted to make the point that we're actually operating in a framework of human exceptionalism when we keep asking things like, was social cohesion part of the reason that we were religious, or did being religious drive social cohesion, because, you know ... Why are we not asking about orcas, for example? Orcas are exquisitely cohesive and they do things as a group and they regulate each other", "start_timestamp": "00:53:24", "end_timestamp": "00:53:58", "start_second": 3204, "end_second": 3238, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=3204s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "as individuals and they solve their problems as a group and they manage to do that without God. And chimpanzees manage to do this without God. So, I think that in addition to the problems that Steven pointed out with social cohesion arguments, which are thrown out constantly, we can just look at the natural world and see that there are so many different pathways for this. If we only look at our species and we don't take this comparative approach.
We're not going to get answers to these questions.", "start_timestamp": "00:53:58", "end_timestamp": "00:54:27", "start_second": 3238, "end_second": 3267, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=3238s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "Yeah. Steve. Lisa, I agree that religious belief can reduce stress, but I don't think that that can be an explanation as to why it's adaptive. Because the fact that uncertainty leads to stress is itself an adaptation, namely there ... we're missing some information that's critical to our well-being and we're ... when we get stressed and nervous that motivates us to seek out that information or to act in a way that keeps us safe even in the state of ignorance. But there can't be an adaptation to reduce stress by false certainty.", "start_timestamp": "00:54:27", "end_timestamp": "00:55:03", "start_second": 3267, "end_second": 3303, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=3267s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "That is, by being certain about something, some claim about the world that in fact is not true. Because if I'm really nervous, say because I think there might be a predator, and someone convinces me, no, it's actually a rabbit appearing in the guise of a predatory cat, that might reduce my stress, but it's not an adaptation. Fair enough, fair enough. So Barbara, you gave us some history of where you think this may have begun all the way back in human history.
What's your sense of why it has persisted for so long?", "start_timestamp": "00:55:03", "end_timestamp": "00:55:35", "start_second": 3303, "end_second": 3335, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=3303s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "I tend to come back to this issue of community and practice. Because I think if we shift the perspective from looking so much at belief and sacred texts, which we tend to do in today's world. You know, you put up a slide that talked about the percentage of the pie in terms of Christianity and Islam and that's one important aspect of this, but I think that for many of the world's people, there is just something that's irreplaceable about that sense of community, that sense of ritual practice and that sense of familiarity.", "start_timestamp": "00:55:35", "end_timestamp": "00:56:07", "start_second": 3335, "end_second": 3367, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=3335s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "And it is ... Sure, it's possible to try to replace that with some other ways to find those same things, but there is something about the connectivity that comes through the transcendence that I think is important. When you bring those two elements together, the community and the transcendence and sharing that emotional meaning making. 
And one of the things that I like very much about the people who are discussing whether there's faith in other animals is the idea of breaking the link that makes religion", "start_timestamp": "00:56:07", "end_timestamp": "00:56:37", "start_second": 3367, "end_second": 3397, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=3367s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "always be about text and belief. So I think that that helps us understand this question a little bit. The idea ... I think about what Martin Buber wrote, coming from the tradition of Judaism, when he wrote that all of real life is encounter. There's something that's particularly transporting about sharing encounters of transcendence and I really feel it has something to do with the persistence that we see. But clearly, when we talk about the spirituality instinct, that's a very fraught term because of the secularization that's happening in the world.", "start_timestamp": "00:56:37", "end_timestamp": "00:57:12", "start_second": 3397, "end_second": 3432, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=3397s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "If that really were to be considered an instinct, how do we explain the tremendous transformation that we're undergoing? So people are finding humanist communities, other communities with a different type of transcendent connection. I think there's a balance between what continues as very, very strong tradition that carries communities forward together with new ways of imagining some of these very same things that are coming about. The ways that people can experience religion now.
I mean, they extend into communities with AI.", "start_timestamp": "00:57:12", "end_timestamp": "00:57:45", "start_second": 3432, "end_second": 3465, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=3432s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "They extend to virtual realities, virtual churches, virtual connection, virtual mosques, but also the idea that we're beginning to think just differently about animals and nature. We know Emily Dickinson's church, right? Well, we also know that one of the beauties of the evolutionary perspective, among many others, is not only understanding our own place in the world, but our really deep sharing with other animals. And so, I think there's the possibility that we are going to continue this shift of finding different ways of sharing transcendence, as I feel with nature, with animals.", "start_timestamp": "00:57:45", "end_timestamp": "00:58:19", "start_second": 3465, "end_second": 3499, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=3465s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "So does that transcendence relate to ... I mean, Stephen Jay Gould, I believe, once said that all religions begin with an awareness of death. So, is that transcendence profoundly connected with death or is it somehow independent of it? I think you've hit on an important thing. Part of my last six years of work has been very profoundly taken up with the question of animal grief and animal mourning.
And I'm not suggesting, again, to be very clear, that animals have some kind of sacred sense of death, but they have a deep awareness of loss, so that we find over and over again-", "start_timestamp": "00:58:19", "end_timestamp": "00:58:57", "start_second": 3499, "end_second": 3537, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=3499s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "Can you give an example? I mean, that's\u2026 Yeah, I can give loads of examples. For example, with elephants we know that the entire community responds if a matriarch dies. There was one particular example in Africa, where a community of scientists followed, for seven days, a parade of mourners who came to this particular matriarch who had died. Her name was Eleanor. Not only her family, but matriarchs of other families. Some stood vigil over the body, some rocked over the body. Others showed distress. So my definition of animal grief involves some kind of symptom of distress.", "start_timestamp": "00:58:57", "end_timestamp": "00:59:36", "start_second": 3537, "end_second": 3576, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=3537s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "Social withdrawal, failure to eat, failure to sleep, some vocalizations. But it's not only in what I call the usual suspects, the big-brained mammals like chimpanzees, cetaceans and elephants, that we see this. My research is showing that we find it in animals as different as collared peccaries in Arizona, chickens, all sorts of domestic animals, the animals that we live with.
And again, what I think is so important about this is not necessarily that the animals have the same awareness of death that we have, but that they feel this profound sense of", "start_timestamp": "00:59:36", "end_timestamp": "01:00:14", "start_second": 3576, "end_second": 3614, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=3576s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "loss. That is emotional meaning making and that's where they enter into this community of sort of a transcendent experience in an animal sort of way that I think is the foundation for this discussion. So, Steve, let me ask you. Transcendent experience, community, is one powerful way of thinking about what religion provides. On the other side of the discussion, you've got people like Dan Dennett. You got people like Pascal Boyer, and various others, whose explanation tends more toward a mechanism. The spreading of ideas.", "start_timestamp": "01:00:14", "end_timestamp": "01:00:52", "start_second": 3614, "end_second": 3652, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=3614s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "The spreading of memes, you know? An idea jumps from brain to brain, to brain and it naturally tickles certain receptors that we are naturally attuned to and therefore certain ideas have a tendency to stick and spread, among them being the very ideas that constitute religious belief. Is that an approach that you think gives us insight or is that not a useful way of thinking about it? 
Yeah, because what puzzles us when we try to explain the prevalence of religious belief, is not so much why people mourn the dead, feel a sense of loss, feel it profoundly affects", "start_timestamp": "01:00:52", "end_timestamp": "01:01:28", "start_second": 3652, "end_second": 3688, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=3652s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "their lives because it does profoundly affect their lives, it ought to. If you didn't mourn someone when they die, how could you have loved them when they were alive? That is, in a sense, an easier set of reactions to explain. What puzzles us about religion is belief in the Trinity and in hell and in 72 virgins and all of the other contentful beliefs that go well beyond a sense of awe at the immensity of the cosmos or loss in the sense of death. That's where Pascal Boyer and Dennis Barbour and others are going to step in, to explain why we're vulnerable", "start_timestamp": "01:01:28", "end_timestamp": "01:02:09", "start_second": 3688, "end_second": 3729, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=3688s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "to such specific beliefs as opposed to emotional reactions to major events that affect us. There, I should actually credit Pascal Boyer for linking the idea that we are mentalizing, that we're apt to attribute minds to others, as one of the core explanations for why we are subject to religious beliefs that lead to spiritualist beliefs. Right. 
So, Lisa, what's your view on these two sort of poles, the need for community and transcendent experience, and perhaps something that just speaks to the way in which certain ideas naturally", "start_timestamp": "01:02:09", "end_timestamp": "01:02:50", "start_second": 3729, "end_second": 3770, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=3729s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "stick inside a brain that evolved to perform certain tasks and survive? I think that both of those, to some extent (I'm not really sure if you're referring to them as explanations or just phenomena), are actually rooted in our sociality as a species, so I think it's not a metaphor to say that we regulate each other. We do, in very substantial ways and in ways that we're completely unaware of, and part of how we do this is we create meaning that is shared and realities that emerge only by", "start_timestamp": "01:02:50", "end_timestamp": "01:03:51", "start_second": 3770, "end_second": 3831, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=3770s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "virtue of collective agreement. What I mean by that is ... We're talking here, for example, about grief and that animals, non-human animals feel grief and so on. Non-human animals feel loss, for sure. I think there's no question that that's the case, and they suffer. I think there's no question that's the case, but research on emotion suggests pretty clearly that there is no inherent emotional meaning in any set of physical signals that occur from your body. What we do is ... 
humans, is we learn to impose meanings on those signals, right?", "start_timestamp": "01:03:51", "end_timestamp": "01:04:37", "start_second": 3831, "end_second": 3877, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=3831s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "So, a scowling face for example, is not a universal display of anger. People only scowl about 25% of the time when they're angry and they scowl at many other times when they're not and there are many cultures around the world, including hunter-gatherers who don't recognize a scowl as anger, for example. In many cultures, and it's an interesting question about why this is the case, but we'll just hold that aside for a moment, what we do is we impose meaning on a scowling face, we impose meaning on a scowl and by virtue of that meaning that we've imposed, the scowl", "start_timestamp": "01:04:37", "end_timestamp": "01:05:16", "start_second": 3877, "end_second": 3916, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=3877s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "actually literally takes on that meaning and we can easily predict what's going to happen next. What I mean by this is it sort of works in the same way as money works, right? There's no inherent ... Nothing that's ever served as currency in human cultures does so by virtue of its physical nature alone. 
What happens is a group of humans impose a meaning on pieces of paper, or little rocks, or salt, or barley, or big rocks in the ocean that can't be moved, or mortgages, or any number of things and all of a sudden, those things literally take on value.", "start_timestamp": "01:05:16", "end_timestamp": "01:05:55", "start_second": 3916, "end_second": 3955, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=3916s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "They can be traded for material goods only because we all agree that they can, and when people move their agreement, when some number of people withdraw their agreement, those things no longer have value. Well, emotions are kind of built in the same way. Heart rates change, faces move, distress can occur out of loss. When you lose someone who helps to regulate your body budget, you feel like you've lost a part of yourself because, in a sense, you have actually lost someone who's helped you regulate", "start_timestamp": "01:05:55", "end_timestamp": "01:06:32", "start_second": 3955, "end_second": 3992, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=3955s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "your nervous system. We impose meaning on those physical events and they take on that meaning. I mean the physical events take on those meanings by virtue of the fact that we, as a culture, agree that that's the case. I think that in my view, this is partly why memes occur, because ideas are contagious in a sense because we often as part of our ... one of our superpowers as a species is the ability to create meaning. 
The ability to create something real where there used to be nothing real, only by virtue of collective agreement.", "start_timestamp": "01:06:32", "end_timestamp": "01:07:32", "start_second": 3992, "end_second": 4052, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=3992s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "We impose meaning on something physical and then that physical thing takes on a bigger meaning. To some extent, I think we also do this with what we think of as transcendent experiences. So, when a group of people are all together having a similar experience at being awestruck or wonderstruck at something in nature, there's an opportunity for creating social reality, for creating a meaning that wasn't there before, that supersedes just the shared wonder of the moment and so, I don't see these ends as really different.", "start_timestamp": "01:07:32", "end_timestamp": "01:08:13", "start_second": 4052, "end_second": 4093, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=4052s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "I see them as kind of emerging out of the same capacities- So, in the remaining time, maybe we can just focus on humans and on religious belief and maybe we could start with you, Barbara. Mm-hmm (affirmative). There's been a view that's been around for a long time that as science progresses, it kind of pushes out the need for religion, in terms of its explanatory capacities and so forth. Now it's suggested over time that the role of religion would decrease. 
Do you imagine that that is the pattern that will play out or is that a completely wrong", "start_timestamp": "01:08:13", "end_timestamp": "01:08:56", "start_second": 4093, "end_second": 4136, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=4093s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "and oblique way of thinking about the role of religion and therefore what its future will be? Yeah, it's interesting. I feel two things at the same time. I do think that the increasing tendency towards humanism and secularization is a very welcome thing. I mean, I did say that I am here speaking as an atheist, as a person who is a non-believer. We certainly want to be able to think clearly about science and about the forces that act in this world and we know, all of us know that religion is not always helpful in that", "start_timestamp": "01:08:56", "end_timestamp": "01:09:27", "start_second": 4136, "end_second": 4167, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=4136s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "particular way. At the same time, I think it's really important to think again about the cross-cultural patterns and the number of people in the world who don't fall into believing in big sky gods, who don't even, in some cases, have a word for religion. I'm not suggesting that that makes them different in any kind of scale of intelligence, not at all. We know that all human populations have the same capacities. But sometimes, just being religious is just the way life is. 
It's so much a part of the era, of the way that you live that it's not something that", "start_timestamp": "01:09:27", "end_timestamp": "01:10:03", "start_second": 4167, "end_second": 4203, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=4167s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "is going to change. I think these are two different ways of looking at it and I'm not sure how to weigh them. I'd be interested to hear what other people would say about that. Why don't we go right down the line. Zoran, do you have any ... I think that at its best, it gives us a framework to experience the spirituality, to be able to connection to something that's larger than ourselves. And so, if it fulfills that role for people, I think then that's, you know, that it's its purpose. I hope that as the ages go, the science and spirituality basically flow through each other", "start_timestamp": "01:10:03", "end_timestamp": "01:10:44", "start_second": 4203, "end_second": 4244, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=4203s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "smoothly. I think that for that, really we just need more research in these topics. Lisa, thoughts on the future? I think I would stand by my descriptions that I think that there are some advantages to religious belief, but I think there are also some major, major disadvantages, some of which Steve has talked about and I think it ... From my perspective it's probably about time to wonder whether or not the disadvantages outweigh the advantages, frankly. 
Because there are other meaning-making systems that are available to humans to help them", "start_timestamp": "01:10:44", "end_timestamp": "01:11:23", "start_second": 4244, "end_second": 4283, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=4244s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "make sense of the world, some of which may not have the disadvantages. They may have the advantages of religious belief, but they might not also have the disadvantages. So, I probably lean more in the direction of wondering how it would be possible to test that, to investigate that. Steve, thoughts on that? There are several trends in the overall historical arc of belief. One is that a lot of religions have become more humanistic. They don't take their literal beliefs as seriously as they used to. If you're a real, believing Christian and believe that if you don't accept Jesus then you're", "start_timestamp": "01:11:23", "end_timestamp": "01:12:07", "start_second": 4283, "end_second": 4327, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=4283s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "going to go to hell, then you really ought to try to convert people at sword point. And you really ought to slay heretics. You'd be doing ... It's like a great public health measure. You're saving an eternity of suffering in hell for billions of people. But most Christians, no matter how seriously they take their belief, don't try to convert people at sword point anymore. 
They don't have inquisitions and they're not completely consistent, and that is a kind of benign hypocrisy among many believers, that they fortunately don't act on the totality", "start_timestamp": "01:12:07", "end_timestamp": "01:12:35", "start_second": 4327, "end_second": 4355, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=4327s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "of their religious beliefs, and that's been a very beneficial trend. The institutions persist while the all-encompassing nature of the beliefs gets diluted. Another is that when people switch their religious affiliations, the overwhelming tendency is toward no religion at all, so the world is becoming less religious. There are two reasons why that may seem hard to believe. One of them is that religious people have more babies, and so the number of religious people is actually increasing, and projected to increase, even as the number of people who", "start_timestamp": "01:12:35", "end_timestamp": "01:13:08", "start_second": 4355, "end_second": 4388, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=4355s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "switch, switch in the direction of no religion. The other is that religious groups tend to be more politically organized. So, the problem with secularists and humanists, and so-called nones, N-O-N-E, not N-U-N, that is, people with no religion, is they don't vote. Evangelicals all vote. I shouldn't say all. Something like 80% of evangelicals vote; 25% of the unaffiliated vote. And so there's an outsized influence of religion in politics because of this organization. 
Our perception of the growing influence of religion is, not exactly an illusion, but", "start_timestamp": "01:13:08", "end_timestamp": "01:13:51", "start_second": 4388, "end_second": 4431, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=4388s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "it is pushed along by the greater fecundity and greater political organization of the religious, even as the overall direction is away from religious belief with secularization, including the United States, which for a long time was an outlier, in that every other western democracy had become less religious than the United States. The United States is now moving in that direction as well. One final question which is sort of the inverse of the topic that took up some of our time in thinking about animals and their reactions and beliefs.", "start_timestamp": "01:13:51", "end_timestamp": "01:14:25", "start_second": 4431, "end_second": 4465, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=4431s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "What if we flip it the other way? So, a hundred years from now, or 500 years from now, we get visited by an alien civilization and we show them what we've learned in math and physics and they nod their tentacles and they ... you know, we're all sort of good. But then we show them our religious beliefs. Do you think that they'll look at that and say, \"Yeah, yeah, we get it. You know, we've got our Jesus too.\" Or will they be completely baffled as to what this thing called religion is? 
Barbara, thoughts on that?", "start_timestamp": "01:14:25", "end_timestamp": "01:15:03", "start_second": 4465, "end_second": 4503, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=4465s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "Wild speculation. I have no idea of the answer to that question. Fair enough. If I had to guess, I would guess baffled. I think that if you think about how 500 years ago, we didn't know much about electromagnetism, right? And now, we can do all kinds of things with it and we can actually entertain ourselves with it. Well, think if the aliens come and they understand the function of consciousness in the universe, right? And they can use consciousness that we use for all kinds of things. It's not any kind of mysterious thing for them.", "start_timestamp": "01:15:03", "end_timestamp": "01:15:41", "start_second": 4503, "end_second": 4541, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=4503s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "That's I think where it's going. Lisa? I don't think they'll be baffled and I don't think that they'll necessarily share ... I mean, I'm just ... I'm not even speculating, I'm imagining. I think that they will see it as part of the evolutionary trajectory of ... Or evolutionary development of a species and maybe something that was a necessary step along the way, but became unnecessary at a certain point. Final thoughts on that one, Steve? I tend to agree there. 
It may be similar to our attitudes towards the animistic beliefs of people that we've", "start_timestamp": "01:15:41", "end_timestamp": "01:16:27", "start_second": 4541, "end_second": 4587, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=4541s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "p0_-7FmrDq8", "text": "come across. We can find them intelligible, but we might consider them obsolete. Do you think that they will have had a similar evolutionary trajectory? I know there ... Is this an intrinsic part of the way in which a living system that can survive would evolve, that it will necessarily ascribe agency in the world and tell stories about what those agents do and the role that they play, or is this some peculiar thing that happened to the human species? A great question, a profound one, but I suspect that ... I guess the question is, does sociality", "start_timestamp": "01:16:27", "end_timestamp": "01:17:01", "start_second": 4587, "end_second": 4621, "url": "https://www.youtube.com/watch?v=p0_-7FmrDq8&t=4587s", "title": "The Believing Brain: Evolution, Neuroscience, and the Spiritual Instinct", "thumbnail": "https://i.ytimg.com/vi/p0_-7FmrDq8/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "all right so there are two parts to my talk maybe three parts so one is about a little bit of kind of the state of the art of supervised learning and reinforcement learning and then the second part is about self-supervised learning and that's the title really of the talk with an introduction to something I call energy based learning which is sort of a general framework or paradigm if you want to approach learning in general should I use this all right much better oh this is just for recording I guess okay so we all", "start_timestamp": "00:00:00", "end_timestamp": "00:00:37", "start_second": 0, "end_second": 37, "url": 
"https://www.youtube.com/watch?v=SaJL4SLfrcY&t=0s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "know what supervised learning is about I'm told you all know what supervised learning is about and this is the situation where you train a machine by telling it what the correct answer is for a bunch of training samples and this works really well if you have lots of data it works for image recognition translation natural language processing speech recognition for all kinds of applications but those are applications where the economics are such that it's worth actually labeling a lot of data by hand and of course you know in", "start_timestamp": "00:00:37", "end_timestamp": "00:01:11", "start_second": 37, "end_second": 71, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=37s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "that context machine learning basically comes down to finding a good form for a parameterized function preferably differentiable at least almost everywhere in such a way that by using a gradient descent type algorithm you can tune the parameters to optimize the performance of the system right so everything is differentiable or almost differentiable you can optimize using gradient or sub gradient and it all works we all know about that and there are you know guarantees of generalization if the capacity of the machine is", "start_timestamp": "00:01:11", "end_timestamp": "00:01:44", "start_second": 71, "end_second": 104, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=71s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "limited there is of course another form of learning which I'm not going to talk about very much called reinforcement learning
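The supervised learning recipe described here, a parameterized function that is differentiable almost everywhere, tuned with a gradient descent type algorithm to optimize an objective, can be sketched minimally. The toy data, linear model, and learning rate below are hypothetical illustrations, not anything from the talk:

```python
import numpy as np

# Hypothetical toy problem: fit a linear parameterized function to
# labeled samples (the "correct answers") by plain gradient descent.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                  # training inputs
true_w = np.array([2.0, -1.0, 0.5])            # unknown target parameters
y = X @ true_w + 0.01 * rng.normal(size=100)   # labels with a little noise

def loss(w):
    # Differentiable objective: mean squared error of the predictions.
    return np.mean((X @ w - y) ** 2)

w = np.zeros(3)      # parameters to tune
lr = 0.1             # learning rate (hypothetical choice)
for _ in range(200):
    grad = 2 * X.T @ (X @ w - y) / len(y)      # gradient of the objective
    w -= lr * grad                             # gradient descent step
```

In practice the parameterized function is a deep network and the gradient comes from backpropagation, but the tuning loop has exactly this shape.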
reinforcement learning has seen a lot of success over the last several years and these successes are almost all restricted to games or virtual environments there are also applications of reinforcement learning in situations where you can collect lots of data really quickly to have like a really fast adaptation so if you you know want to show people content and you want to kind of figure", "start_timestamp": "00:01:44", "end_timestamp": "00:02:20", "start_second": 104, "end_second": 140, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=104s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "out you know where you want to rank the content there is no differentiable objective function because you don't know what people are gonna do so you can use whether they click on a piece of content or not as kind of a reinforcement and then optimize the policy as to what you show to people to maximize that but these are you know situations where we get lots and lots of feedback and otherwise it works for games because you can get machines to play games really quickly and so they can", "start_timestamp": "00:02:20", "end_timestamp": "00:02:54", "start_second": 140, "end_second": 174, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=140s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "play millions of games because you can run it in parallel on lots of computers and so you can train machines to play Atari games to play Go to play StarCraft Dota you know whatever kind of games and it's only due to the fact that you can run those games faster than real time on many machines if the machine had to run at the same speed as we do which means play the games in real time it basically wouldn't be
practical it would take about 80 hours for the best current algorithms to learn to play a single Atari game to a level of performance", "start_timestamp": "00:02:54", "end_timestamp": "00:03:28", "start_second": 174, "end_second": 208, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=174s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "that a human can reach in about 15 minutes so the efficiency the sample efficiency of this type of reinforcement learning is horrible compared to humans at least for Go so you've probably heard of AlphaGo AlphaGo Zero from DeepMind where the details are not fully released and there is no open source code there is a system called ELF OpenGo which was released by Facebook and this one you can just download and run or train it yourself it's used by a lot of different people who are interested in this this one required about 20", "start_timestamp": "00:03:28", "end_timestamp": "00:04:01", "start_second": 208, "end_second": 241, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=208s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "million self-play games running on 2000 GPUs for 2 weeks to reach superhuman performance so this is not cheap in terms of computation as you can tell if you were to buy this on a you know cloud computing server it would cost you a couple million bucks and it's you know more games than a single person can play in a lifetime probably more than all of humanity has played in a number of years there's a very interesting paper recently by DeepMind by Oriol Vinyals' group AlphaStar which plays a single", "start_timestamp": "00:04:01", "end_timestamp": "00:04:45", "start_second": 241, "end_second": 285, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=241s", "title": "Self-Supervised 
Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "map on StarCraft with a single type of player and the training for this took the equivalent of two hundred years of real-time play which is definitely more than any single StarCraft player has been able to do there's no paper on this as far as I can tell yet so they all use you know deep architectures they all use convolutional nets actually with a combination of other things transformers in particular but as you can tell in terms of sample efficiency it's very bad and it's a huge problem", "start_timestamp": "00:04:45", "end_timestamp": "00:05:19", "start_second": 285, "end_second": 319, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=285s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "because what that means is that we can't really use reinforcement learning other than in simulation to train real-world systems like a car to drive itself or a robot that grabs objects unless you have a room full of robots you know training all day so if you were to use reinforcement learning at the moment to train a car to drive itself it would have to you know drive itself for millions of hours and cause tons and tons of accidents and it's just not practical right so people do it in simulation it kind of", "start_timestamp": "00:05:19", "end_timestamp": "00:05:57", "start_second": 319, "end_second": 357, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=319s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "works but simulators are not very accurate so there is a problem of sort of transferring from simulation to the real world there's a lot of work on this but there's a big mystery there which I'll come back to in the second half of the 
talk and the mystery is how is it that humans can learn to drive a car in about 20 hours of training without causing any accident and the sort of preview of the answer to this is that we have internal predictive models of the world that allow us to predict that if we drive near a cliff", "start_timestamp": "00:05:57", "end_timestamp": "00:06:29", "start_second": 357, "end_second": 389, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=357s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "and we turn the wheel to the right the car is going to run off the cliff because of gravity it's going to fall and nothing good is going to come out of it we don't need to actually try it to predict this and so perhaps the answer is for machines to eventually learn to have those predictive models of the world that will allow them to predict the consequences of their actions before they occur and plan ahead and to some extent we can say that the essence of intelligence is the ability to predict but for now", "start_timestamp": "00:06:29", "end_timestamp": "00:07:05", "start_second": 389, "end_second": 425, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=389s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "let's stick with supervised learning so who doesn't know what a convolutional net is don't be shy okay that's great I can skip a lot of stuff okay so a convolutional net of course is an architecture that is designed to you know recognize images but in fact it's designed to recognize array data where the property is that there are strong local correlations in the features and some sort of translation invariance of the statistics of the signal so it's true for images it's true for audio signals it's true for", 
"start_timestamp": "00:07:05", "end_timestamp": "00:07:44", "start_second": 425, "end_second": 464, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=425s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "basically anything that comes to you in the form of an array where the locality in the array has some meaning and of course you know the first applications of this were on character recognition but we quickly realized that we could recognize multiple objects with those things not just single objects by kind of scanning if you want or doing the equivalent of scanning a convolutional net over the whole big image which of course you don't have to do stupidly because all the layers are convolutional you don't actually need to explicitly", "start_timestamp": "00:07:44", "end_timestamp": "00:08:11", "start_second": 464, "end_second": 491, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=464s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "recompute the convolutional net at every location you just make each layer bigger and then you know make every layer convolutional people renamed these fully convolutional nets afterwards but it's just convolutional nets and when you apply this to natural images you can you know train systems like this to detect objects in natural images you can apply them locally you can apply a convolutional net locally to an image to have it label every pixel in the image with for example the category of the object it belongs to and the convolutional net has", "start_timestamp": "00:08:11", "end_timestamp": "00:08:46", "start_second": 491, "end_second": 526, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=491s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", 
"text": "some sort of you know each output of the network has some sort of window of influence on the input which in this case is actually quite large so to decide of the category of a single pixel the the network here looks at a wide contextual window around this pixel and and then gives you an output for that particular pixel and and this is done sort of conditionally so it's very cheap so the system that was built about ten years ago and it could run at about thirty frames per second on a on the specialized Hardware actually an FPGA", "start_timestamp": "00:08:46", "end_timestamp": "00:09:20", "start_second": 526, "end_second": 560, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=526s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "and of course with what you probably all know is that around 2012 2013 those networks started you know beating other methods for object recognition for by a large margin largely due to the fact that the data sets we can became bigger those systems are pretty hungry in terms of data more than whatever methods people were using before and so the appearance of things like data sets like image net of you know sort of made it possible to sort of really exploit the capacity in those in those networks and then the second thing was the", "start_timestamp": "00:09:20", "end_timestamp": "00:09:51", "start_second": 560, "end_second": 591, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=560s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "availability of GPUs which allowed to run those systems really quickly but you all know this and what you all know also is that there's been an inflation in a number of layers using in those networks over the over the years where you know some of the workhorse of image recognition nowadays is you know some 
sort of backbone convolutional net similar to ResNet for example so ResNet is a convolutional net where every pair of layers forms a block a residual block I'm sure again many of you have heard of this but you basically", "start_timestamp": "00:09:51", "end_timestamp": "00:10:28", "start_second": 591, "end_second": 628, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=591s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "have pairs of layers convolution non-linearity convolution sometimes you have subsampling pooling as well but this one doesn't and then you have some sort of connection that skips pairs of layers and so essentially you can think of the function of one of those blocks as basically computing the identity function and those layers compute the deviation of the function of that layer from the identity so that sounds kind of a waste to just have a layer that computes the identity function and it is in fact many of the", "start_timestamp": "00:10:28", "end_timestamp": "00:11:05", "start_second": 628, "end_second": 665, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=628s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "layers in those systems don't do much you can kind of get rid of them actually after training but what it does is that it makes the system fault tolerant if you want so if the learning algorithm somehow gets into a situation where some layers die which can happen it's not catastrophic because you always have the information going through the bypass connection and so a pair of layers just kind of checks itself out of the network so it's not used but it doesn't kill the entire effort so", "start_timestamp": "00:11:05", "end_timestamp": "00:11:42",
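The residual block described above computes identity plus a deviation, y = x + F(x). A minimal numpy sketch (the two-layer form of F and all names are illustrative assumptions, not taken from the talk) showing the fault-tolerance argument: if the block's weights collapse, the skip connection still passes the signal through:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """y = x + F(x), where F is a small two-layer transformation."""
    return x + w2 @ relu(w1 @ x)

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
w1 = rng.standard_normal((8, 8))
w2 = rng.standard_normal((8, 8))

y = residual_block(x, w1, w2)          # normal case: identity plus deviation

# If the block "dies" (its weights go to zero), the bypass connection
# still carries the information unchanged -- the block checks itself out.
y_dead = residual_block(x, w1, np.zeros((8, 8)))
assert np.allclose(y_dead, x)
```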
"start_second": 665, "end_second": 702, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=665s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "that's one of the advantages of ResNet you can you can think of the the long succession of layers as sort of progressively we finding the the answer and cleaning up the the the output or the representation between variations of this where you have you know skipping connections that skip multiple layers etc that's called dense dense net so I'm sure a lot of people will talk you know talk to you about progressing purpura vision over the next few years and there's been a huge amount of progress over the last few years with things like", "start_timestamp": "00:11:42", "end_timestamp": "00:12:15", "start_second": 702, "end_second": 735, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=702s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "Maersk our CN n which is sort of a two pass image recognition system that can pick out every instance of every object in an image and with you know really good performance so there's sort of a first first few layers that kind of identify regions of interest and then you kind of apply a second no neural net to the conditional net to the the regions of interest that you've identified by the first one there's also kind of one pass systems that my colleague at in Menlo Park at Facebook use the you know call rich internet or", "start_timestamp": "00:12:15", "end_timestamp": "00:12:48", "start_second": 735, "end_second": 768, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=735s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "future pyramid network and you can think of this as let's say you want to 
produce a dense map of everything that's in the image so for every pixel in the input you want to give a category of an instance or a category whether it's an object or kind of a background region if you want so you have you know a bunch of layers of a convolutional net where the spatial resolution goes down as you go up because of subsampling and then you have sort of a similarly architected network that goes the other way from low", "start_timestamp": "00:12:48", "end_timestamp": "00:13:23", "start_second": 768, "end_second": 803, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=768s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "resolution to high resolution you have kind of skipping connections that go from one map in the sort of abstraction pyramid if you want to the corresponding one in the part of the network that produces the output and you can train this end to end with sort of weakly supervised architectures you can plug classifiers taking inputs from various levels in the network and this works amazingly well so this is a result from Mask R-CNN actually not from the RetinaNet or feature pyramid", "start_timestamp": "00:13:23", "end_timestamp": "00:14:05", "start_second": 803, "end_second": 845, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=803s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "network but the results are quite similar you can get every instance of every object outlined together with a box so the colors are actually produced by the network and correspond to categories and you know it's pretty amazing how well this works you need data these are results from this sort of single-pass feature pyramid network again the colors indicate the individual objects but
this system actually labels not just the objects but also the background regions so it's sort of they call this panoptic segmentation so", "start_timestamp": "00:14:05", "end_timestamp": "00:14:47", "start_second": 845, "end_second": 887, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=845s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "in this type of architecture we have sort of a convolutional net with decreasing resolution followed by another one with increasing resolution which some people call a deconvolutional net this is a paper from I guess 2012 or so or 11 by my colleague Rob Fergus on this idea of deconvolutional nets and this architecture is used quite a lot in image segmentation particularly for applications in medical image analysis and some people call this kind of architecture a U-Net because of the shape when you represent it this way", "start_timestamp": "00:14:47", "end_timestamp": "00:15:22", "start_second": 887, "end_second": 922, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=887s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "so it's very much the same idea I showed before except now the layers of the kind of feed-forward part of the network are drawn on this side and then the sort of resolution increasing half is drawn on that side with skipping connections going directly across so it looks like a U and there are variations of this so this is work from my colleagues at NYU who are working on medical image analysis these are 3D MRI scans and so the convnet here is three-dimensional the convolutions take", "start_timestamp": "00:15:22", "end_timestamp": "00:15:59", "start_second": 922, "end_second": 959, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=922s", "title": "Self-Supervised
Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "place in three dimensions over the three spatial dimensions and and every voxel now is labeled as you know one of a number of categories so you can do things like segment hip bones and things like this for you know preparing for hip replacement surgery and stuff like that and and it works really it works much better if you use 3d rather than 2d because you get the consistency of all the slices so as you see at the top here is there are artifacts of the recognition if you use kind of 2d segmentation perhaps with a little bit", "start_timestamp": "00:15:59", "end_timestamp": "00:16:38", "start_second": 959, "end_second": 998, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=959s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "of cleanup and if you really you you sort of get with your of those artifacts the same team with a different subset of people has applied this to things like mammograms so this is 2d data but you have multiple images from sort of multiple with use angles of view and here's a surprising thing so this is the kind of application that you some of you may not have heard of which is the application of comp nets in physics this is an example in astrophysics so this is a paper from I think was in PNAS published in PNAS a", "start_timestamp": "00:16:38", "end_timestamp": "00:17:17", "start_second": 998, "end_second": 1037, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=998s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "few months ago from the Flatiron Institute which is a private research institute in in New York and what they did here was use a accomplished on a to accelerate the solution of partial differential equations solvers so 
what they were interested in these are cosmologists and they're interested in you know what are the initial conditions of the baby universe that will produce the kind of universe we observe today what you have to do for that is basically simulate the entire universe at its birth you know the expansion the", "start_timestamp": "00:17:17", "end_timestamp": "00:17:45", "start_second": 1037, "end_second": 1065, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=1037s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "first expansion phase of the universe you can do this in principle because you know you have the density of matter ordinary matter dark matter photons whatever at every location and you can solve a partial differential equation which basically is just you know physics at every location and compute the evolution of the universe this way the problem with this is that if you do this at the scale of the universe given the size of the grid that you have to", "start_timestamp": "00:17:45", "end_timestamp": "00:18:14", "start_second": 1065, "end_second": 1094, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=1065s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "use to solve this equation it will take too long and so what they did here was they used one of those known PDE solvers to solve those equations on small domains small four-dimensional domains right because it's three dimensions of space and one dimension of time and they trained a convolutional net to produce the same result but that convolutional net has a bigger grid right so a PDE solver basically takes one value per voxel okay a four-dimensional voxel if you want or a three-dimensional voxel and then looks",
"start_timestamp": "00:18:14", "end_timestamp": "00:18:52", "start_second": 1094, "end_second": 1132, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=1094s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "at the neighbors and then passes it to some function that computes the new value for the next time step let's say of the the center or the central grid cell so it's a completion like operation except that you know it's maybe nonlinear so so what they did was train accomplish on that with a few layers and they it you use this it uses this unit architecture so we can take a fairly large context into account not just the neighboring cells but sort of a bigger neighborhood rather big grid cells and it's trained to produce the result that", "start_timestamp": "00:18:52", "end_timestamp": "00:19:32", "start_second": 1132, "end_second": 1172, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=1132s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "the PD solver would would produce and they can easily generate data by running the PD solver but they run it on kind of small 3d domains and then once they have this commercial net they can run it on the the big scale of you know universai universe size scale if you want and and what they get is those kind of match here which are the displacement maps of densities and there's also different methods and the colors indicate errors blue is low error and you know red is high error and those are also various ways of doing this and this is kind of", "start_timestamp": "00:19:32", "end_timestamp": "00:20:07", "start_second": 1172, "end_second": 1207, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=1172s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": 
"SaJL4SLfrcY", "text": "their proposed method and compared with what the PDE solver we do for a relatively small domain so that's kind of an interesting thing which is to use neural nets or D planning in general as a phenomenological model of something that we might possibly know the underlying physics but it's computationally too expensive people are doing this also for predicting the properties of materials for solving problems in molecular dynamics so for example confirmation of protein where the two proteins are going to stick to", "start_timestamp": "00:20:07", "end_timestamp": "00:20:43", "start_second": 1207, "end_second": 1243, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=1207s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "each other you know things like that which is of course super important for ticks like drug design I was at Harvard a couple weeks ago and I talked to people who are trying to use neural nets to predict the property of certain solids so if you take graphene which is a two-dimensional mesh of carbon atoms and you take two and it's cool you know just a single atom thick layer you take two layers of graphene and the one on top you twist it just a little bit compared to the the one at the bottom there's a particular", "start_timestamp": "00:20:43", "end_timestamp": "00:21:14", "start_second": 1243, "end_second": 1274, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=1243s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "angle at which this material becomes superconductor and nobody has any idea why and so there's some idea of using neural Nets to kind of build phenomenological models of all those properties so that perhaps we could predict other properties there's interesting work along those lines also by Pascale flora who is actually 
originally a vision guy at EPFL and what he's been doing is to predict the aerodynamic or hydrodynamic properties of a solid by training a convolutional net", "start_timestamp": "00:21:14", "end_timestamp": "00:21:47", "start_second": 1274, "end_second": 1307, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=1274s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "a convolutional net basically so you feed the shape of the solid to the system and again using fluid dynamics computation to generate data you train it to produce the properties of that shape for example its drag or lift if you are interested in designing airfoils for you know blades of propellers or airplanes or hydrofoils or whatever and then what you have now is a neural net that predicts those properties and because it's a neural net it is differentiable so now you can optimize the shape by doing gradient", "start_timestamp": "00:21:47", "end_timestamp": "00:22:23", "start_second": 1307, "end_second": 1343, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=1307s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "descent in input space you can optimize the shape so as to get the properties you want at the output which you can't really do with a regular computational fluid dynamics piece of code so it's really interesting he actually has a startup that works on this yes huh well I guess you have to really know the underlying physics to be able to make that generalization so I mean you can test on a relatively small spatial domain because you can run the PDE solver so you know how accurate your new convnet is the", "start_timestamp": "00:22:23", "end_timestamp": "00:23:59", "start_second": 1343, "end_second": 1439, "url":
"https://www.youtube.com/watch?v=SaJL4SLfrcY&t=1343s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "question is when you extend the size is it still accurate okay and there is a leap of faith there's no question now to your comment that this has nothing to do with Commerce on that no it does it has very very much to do with Congress on it because all of those piggies are local operations that basically look like convolutions that are essentially the same you know press some nonlinear thing because you know if you do navier-stokes equation for free dynamics you have to do some projection afterwards that's nonlinear but you know but it's a local", "start_timestamp": "00:23:59", "end_timestamp": "00:24:29", "start_second": 1439, "end_second": 1469, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=1439s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "operation and it's the same operation you do everywhere in the image so or if we're in the in the volume so it is accomplish on that it's it's directly it's probably one of the most appropriate use of kinetically imagine so of course it's been quite a bit of progress in in things like you know start driving cars it's working progress these are actually videos that are quite a few years old I think about five years five years old from this one is from mobile I which is now Intel and NVIDIA and there's a huge amount of work on", "start_timestamp": "00:24:29", "end_timestamp": "00:25:05", "start_second": 1469, "end_second": 1505, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=1469s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "surviving cars as you know a lot of engineering goes into this but all the perception system use you 
know some sort of convolutional net to process either images from cameras or from various other types of sensors like lidar and other things like this okay so all this is great it's all supervised and reinforcement learning and one big question that we can ask ourselves is is this going to take us to the possibility of building you know truly intelligent machines machines that you know I'm not talking about human level intelligence but maybe intelligence of", "start_timestamp": "00:25:05", "end_timestamp": "00:25:37", "start_second": 1505, "end_second": 1537, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=1505s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "the house cat or something like that so a house cat has more common sense than any AI system that we can build today and the answer is no we need significant conceptual progress if you really want to make machines that are more intelligent than what we have today so we can do all the stuff we have on the left assuming that we put enough engineering efforts into them like you know self-driving cars semi-autonomous cars better medical image analysis systems you know", "start_timestamp": "00:25:37", "end_timestamp": "00:26:11", "start_second": 1537, "end_second": 1571, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=1537s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "all kinds of stuff stupid chat bots you know that are entertaining but the technology we have is not enough to get machines that have common sense to build things like intelligent personal assistants that really help us in our daily lives answer any question we have and you know be a bit more like human assistants we can't have
really smart chat bots we can't have household robots that you know take care of all the chores in the house we don't really have agile", "start_timestamp": "00:26:11", "end_timestamp": "00:26:44", "start_second": 1571, "end_second": 1604, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=1571s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "and dexterous robots they are agile and dexterous in very kind of specific situations but it's sort of very brittle and we can't have artificial intelligence so in general we can't have artificial general intelligence because that concept does not exist there is no such thing as general intelligence and I hate this term AGI there are a lot of people who claim that you know they are going to get to AGI by scaling up reinforcement learning just having more computation this is completely false okay those people are after investment so", "start_timestamp": "00:26:44", "end_timestamp": "00:27:16", "start_second": 1604, "end_second": 1636, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=1604s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "they're ready to either you know be self-deluded or kind of stretch the truth a little bit but in my opinion we're not gonna get there with the current type of learning that we're using and so why is there no such thing as artificial general intelligence that's because there is no general intelligence human intelligence is incredibly specialized I'm sorry to say that okay that applies to everyone in this room but our intelligence is super specialized you know we're built by evolution", "start_timestamp": "00:27:16", "end_timestamp": "00:27:57", "start_second": 1636, "end_second": 1677, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=1636s", "title": "Self-Supervised
Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "to kind of survive in our environment and we have this sort of impression that our intelligence in general but we just suck at a lot of tasks okay and in fact a lot of the tasks that computers can do quite well we totally suck at it so there was this idea that you know before alphago if I go 0 etc that humans were the best go player in the world were very very close to the ideal player okay good god alright that you could you could get just a few stones of handicap with idea player and basically beat the idea player two or three stones and you", "start_timestamp": "00:27:57", "end_timestamp": "00:28:39", "start_second": 1677, "end_second": 1719, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=1677s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "care for something like this turns that no turns out the best human players are horrible you know current machines are much much better than they are like by a huge margin so we just suck at it we're really bad which means you know that's not part of the stuff that evolution kind of built into our our brain to be able to do well now the thing is the reason why people were thinking that you know they were very close to the ideal player was because they could not imagine you know smarter considerably smaller entities and so we cannot", "start_timestamp": "00:28:39", "end_timestamp": "00:29:16", "start_second": 1719, "end_second": 1756, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=1719s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "imagine all the stuff that we're not able to do and therefore we think of ourselves as having general intelligence it's just that or imagination for what you know what 
functions we need to be able to do is very limited let me give you another more specific example I wouldn't say mathematical but a more quantitative one your optic nerve has 1 million fibers so imagine we just take the 1 million fibers coming out of your optic nerve that go to your brain and imagine that they're just binary so what you see is just a binary", "start_timestamp": "00:29:16", "end_timestamp": "00:29:55", "start_second": 1756, "end_second": 1795, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=1756s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "image okay 1 million bits so a particular recognition function if you want okay recognizing your grandmother or whatever is a boolean function 1 million bits in the input and 1 bit in the output and the question is how many such functions are there anyone has any idea how many boolean functions of 1 million bits are there any suggestion 2 to the 1 million you are off by a huge factor but it's a good start 2 to the 2 to the 1 million yes that's the correct answer okay so you have 2", "start_timestamp": "00:29:55", "end_timestamp": "00:30:43", "start_second": 1795, "end_second": 1843, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=1795s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "to the 1 million input configurations of 1 million bits right and for each of those 2 to the 1 million configurations you have one output bit that's the truth table of a particular boolean function okay so the number of boolean functions of 1 million bits is 2 to the 2 to the 1 million it's an unimaginably large number I mean it's just a ridiculously large number now among all of those functions what proportion do you think your brain can
actually compute your visual cortex has you know on the order of between", "start_timestamp": "00:30:43", "end_timestamp": "00:31:19", "start_second": 1843, "end_second": 1879, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=1843s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "10 and 100 billion neurons on the order of 10 to the 14 synapses okay so it has 10 to the 14 synapses let's say to be generous each synapse can store 10 bits okay so there are 10 to the 15 bits in your entire visual cortex that's what determines the function of your visual cortex that means the number of functions your visual cortex can possibly implement is 2 to the 10 to the 15 that's a lot less than 2 to the 2 to the 1 million not just a lot less there's just no comparison right so the number of", "start_timestamp": "00:31:19", "end_timestamp": "00:32:00", "start_second": 1879, "end_second": 1920, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=1879s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "functions that your visual cortex can implement compared to all possible functions is just this tiny tiny tiny sliver we're super specialized in particular if I play a trick on you I cut your optic nerve I'm gonna do it okay and I put a device between your retina and your brain that permutes all the pixels in your optic nerve with a random permutation but a fixed one okay so now there is no spatial consistency in the signal that gets to your visual cortex I don't think you can see because your cortex has local", "start_timestamp": "00:32:00", "end_timestamp": "00:32:38", "start_second": 1920, "end_second": 1958, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=1920s", "title": "Self-Supervised Learning", "thumbnail":
"https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "connections and those local connections are there to exploit local correlation and now you break the local correlation by doing this permutation you can only see, if anything, at very low resolution because the higher layers have big context but what is true yes okay so it is retinotopic so the connection between the optic nerve and the visual cortex is retinotopic which means the topology is preserved the connections are largely local there are long-range connections but this is only", "start_timestamp": "00:32:38", "end_timestamp": "00:33:27", "start_second": 1958, "end_second": 2007, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=1958s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "a small number of them and so you don't have a huge amount of communication bandwidth for the long range you have big bundles of connections from the low layers to the high layers if you want from V1 to V2 and V2 to V4 and once you get to the higher layers the spatial distribution is not represented anymore it's like a ConvNet where you are pooling and so in the high layers you don't need that organization but by the time you get there the spatial resolution is lost so we can do the experiment", "start_timestamp": "00:33:27", "end_timestamp": "00:34:00", "start_second": 2007, "end_second": 2040, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=2007s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "actually it would be fun okay so the next question is how do humans and animals learn so I don't know how many of you were here yesterday at the inauguration probably not many but there is this idea that humans learn in a very different way from either reinforcement or supervised learning and I'll call this later self-supervised learning but this is just a hypothesis but you know babies learn concepts they learn basic facts basic knowledge about the world basically just by observation in the first few weeks", "start_timestamp": "00:34:00", "end_timestamp": "00:34:36", "start_second": 2040, "end_second": 2076, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=2040s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "and months of life and Emmanuel Dupoux, who is a colleague of mine at ENS and at Facebook, put together this chart that shows at what age babies learn different concepts so things like being able to make the difference between animate and inanimate objects that pops up around three months and the notion of object permanence the fact that an object that is hidden behind another one is still there still exists the notions of solidity rigidity stability and then intuitive physics like gravity inertia are things like", "start_timestamp": "00:34:36", "end_timestamp": "00:35:16", "start_second": 2076, "end_second": 2116, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=2076s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "this pop up around eight months so if you show a six-month-old baby the scenario on the top left where you put a little car on a platform and you push the car off the platform and it doesn't fall it's held in the back but the baby can't see that it's a trick at six months they're not surprised that's just how the world works it's one more thing I need to learn after nine months they've learned that objects are not supposed to float in the air that they're supposed to fall and they go like", 
"start_timestamp": "00:35:16", "end_timestamp": "00:35:48", "start_second": 2116, "end_second": 2148, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=2116s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "this okay and you can measure how long they stare at it and with how much attention and so that's how you know that a concept has been acquired we know if a concept is violated by a particular scene that you show the baby the baby is going to be really surprised and you can measure the degree of surprise if you want so how is it that babies learn it's just basically by observation you know young babies before a few months old are completely helpless they just observe and don't really have any way of affecting", "start_timestamp": "00:35:48", "end_timestamp": "00:36:22", "start_second": 2148, "end_second": 2182, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=2148s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "the physical world around them so how does that happen so it's a different type of learning from either reinforcement or supervised learning and it's not just babies almost all animals learn this kind of stuff this is a baby orangutan being shown a magic trick there's an object in the cup and the object is removed but he doesn't see that and now the cup is empty and he's rolling on the floor laughing so obviously his model of the world includes object permanence and objects are not supposed to", "start_timestamp": "00:36:22", "end_timestamp": "00:36:53", "start_second": 2182, "end_second": 2213, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=2182s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "disappear like this and you know when we see something that surprises us we laugh or we get scared because here is something we didn't predict and it could kill us so it's all kinds of concepts like this the reason for this animation here at the top is that these very basic concepts like the fact that the world is three-dimensional are things that perhaps we can learn by training ourselves to predict very simple things so if I train myself train my brain or if I train a learning machine to predict what the", "start_timestamp": "00:36:53", "end_timestamp": "00:37:27", "start_second": 2213, "end_second": 2247, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=2213s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "world is going to look like when I move my head or the camera a few centimeters to the left the view of the world changes objects move with parallax depending on the depth the distance to my eyes and so if I train myself to predict what the world is going to look like when I move the camera perhaps I can automatically infer that every object in the world has a depth because that's the simplest explanation for how things change okay so the notion of depth the", "start_timestamp": "00:37:27", "end_timestamp": "00:37:59", "start_second": 2247, "end_second": 2279, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=2247s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "fact that the world is three-dimensional might simply emerge from training ourselves to predict what the world looks like when we move our head once you have that you have occlusion edges objects that are nearby don't move the same way as objects that are far away and so you see them as objects okay 
there's a bunch of you know weakly supervised vision systems that exploit this kind of property once you have objects they can move independently of each other and of the background", "start_timestamp": "00:37:59", "end_timestamp": "00:38:34", "start_second": 2279, "end_second": 2314, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=2279s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "objects you have you know the notion of obstacle things like localization so you could think of concepts like this being built hierarchically by just training yourself to predict and then coming up with good representations that allow you to do a good job at predicting okay so this is not supervised it would be an unsupervised form of learning and that led some of us to this is the joke from earlier it's a play on a poem from the 1960s or 70s", "start_timestamp": "00:38:34", "end_timestamp": "00:39:12", "start_second": 2314, "end_second": 2352, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=2314s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "the revolution will not be supervised so the future is in a new form of learning that will allow a machine to accumulate all this background knowledge about how the world works mostly by observation a little bit by interaction but mostly without supervision mostly without reinforcement so maybe that's the salvation self-supervised learning what is it the basic concept is I'll give you a piece of data let's say a piece of video for the sake of being concrete and I'm going to mask a piece of that video", "start_timestamp": "00:39:12", "end_timestamp": "00:39:52", "start_second": 2352, "end_second": 2392, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=2352s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "perhaps the second half of the video which is in blue at the top I'm going to train a machine to predict the future of the video from the past and the present okay but the general concept of self-supervised learning is you have a piece of data you mask a piece of it and you ask the machine to predict the piece that is masked from the piece that is not masked if the piece that is masked is always the same say the future for example you can use some sort of prediction architecture for that but more often than not you don't actually", "start_timestamp": "00:39:52", "end_timestamp": "00:40:28", "start_second": 2392, "end_second": 2428, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=2392s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "know which piece is gonna be masked for example in this scene here right now you don't see my back but you might have some good idea of what it looks like and maybe your brain sort of unconsciously tries to predict what I look like from the back and once I turn around your belief about this is updated you can train yourself you can train your model same for all kinds of parts of the scene here which are currently occluded from your", "start_timestamp": "00:40:28", "end_timestamp": "00:40:57", "start_second": 2428, "end_second": 2457, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=2428s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "view so this principle of learning to predict things that you will eventually see I think is a 
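The masked-prediction recipe the speaker describes (hide part of the data, train a model to predict the hidden part from the visible part) can be sketched with a toy numpy example. This is an editor's illustration, not code from the talk: the noisy sine-wave "videos" and the linear least-squares predictor are stand-ins for real video and a deep network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for "mask a piece of the data and predict it from the rest":
# each sequence is a noisy sine wave of length 16; we observe the first 8
# steps and fit a linear predictor for the masked last 8 steps.
T, T_vis, N = 16, 8, 512
phases = rng.uniform(0, 2 * np.pi, size=(N, 1))
t = np.arange(T)
data = np.sin(t + phases) + 0.05 * rng.standard_normal((N, T))

visible, masked = data[:, :T_vis], data[:, T_vis:]

# Least-squares fit: predict the masked half from the visible half.
W, *_ = np.linalg.lstsq(visible, masked, rcond=None)
pred = visible @ W

mse = float(np.mean((pred - masked) ** 2))
baseline = float(np.mean(masked ** 2))  # error of predicting all zeros
print(mse < baseline)  # True: the visible part carries a lot of information
```

Note how many target values each sample supplies (eight per sequence here, millions of pixels for video), which is the "enormous feedback" point made just below.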
good one now again you could train yourself to predict the past from the present to predict the top of the image from the bottom whatever it doesn't matter exactly what it is the advantage of this is something that you know Geoff Hinton has claimed for a long time which is that the amount of information you're giving to the machine at every time step at every trial at every sample is enormous you're", "start_timestamp": "00:40:57", "end_timestamp": "00:41:33", "start_second": 2457, "end_second": 2493, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=2457s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "asking it to predict every pixel in a bunch of frames in a video which is a lot of information much more than the label of an image for example which means you're putting a lot more constraints on the parameters of the machine which means you can train the machine to learn a lot of knowledge with a relatively small number of samples and furthermore those samples are free because we have more video data than we can deal with so essentially if you think about sort of a hierarchy of the types of learning paradigms that we've", "start_timestamp": "00:41:33", "end_timestamp": "00:42:13", "start_second": 2493, "end_second": 2533, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=2493s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "been talking about here in self-supervised learning there's a huge amount of feedback you're giving to the machine you're giving it a piece of video and then you're telling it predict all those pixels it's an enormous amount of information there is a technical issue with it which I'll come to in a minute in supervised learning you give a relatively small amount of feedback you tell the machine this is class number three out of a thousand it's not a huge amount of information", "start_timestamp": "00:42:13", "end_timestamp": "00:42:40", "start_second": 2533, "end_second": 2560, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=2533s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "and I should say right now the reason why neural nets work so well on ImageNet is not just that it has 1 million training samples it's that it has 1,000 categories having a problem with lots of categories helps a lot to construct good representations and then reinforcement learning is a very very weak feedback you're only telling the machine once in a while you got it right or you got it wrong you're giving just a scalar value there's absolutely no way that a machine can learn anything complex without lots", "start_timestamp": "00:42:40", "end_timestamp": "00:43:09", "start_second": 2560, "end_second": 2589, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=2560s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "and lots of interactions using basic reinforcement learning there's just no way you're just not giving it a lot of information in learning theory this is called sample complexity and it's just completely obvious that there is no way you can learn complex stuff without tons and tons of interactions when you're giving just one scalar value once in a while as feedback so the path to human intelligence may go through reinforcement learning but it's not gonna be necessary", "start_timestamp": "00:43:09", "end_timestamp": "00:43:42", "start_second": 2589, "end_second": 2622, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=2589s", "title": "Self-Supervised 
Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "it's not gonna be sufficient that's for sure that led me to this obnoxious analogy of intelligence as a cake where self-supervised learning is the bulk of the cake machine learning is in the same embarrassing situation as physics in the sense that physicists have no idea what 95% of the mass in the universe is it's dark matter and dark energy they have no idea what it is we only know the 5% that is actually real matter but the rest we don't know what it is so here it's the same thing we can", "start_timestamp": "00:43:42", "end_timestamp": "00:44:17", "start_second": 2622, "end_second": 2657, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=2622s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "make the cherry we can make the icing on the cake but we can't actually bake the cake okay so I gave you a preview earlier of what's missing and what's missing is the ability to learn models predictive models of the world in the example of the car if you want to train your system to drive a car it has to have some sort of predictive model of what's gonna happen so as not to try stupid things like running into a tree or off a cliff so a few years ago some of my colleagues at Facebook ran this experiment which they did", "start_timestamp": "00:44:17", "end_timestamp": "00:44:56", "start_second": 2657, "end_second": 2696, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=2657s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "so they cooked up a bunch of simple physical situations where you stack a bunch of cubes and then you either run a game engine to simulate the cubes falling or not falling or you actually have 
real data where you take videos of stacks of cubes and you train a video version of a ConvNet basically to predict what's going to happen to the cubes so what you see here is what actually happens those are the segmentation maps of the various cubes and what the ConvNet is producing and the predictions are kind", "start_timestamp": "00:44:56", "end_timestamp": "00:45:25", "start_second": 2696, "end_second": 2725, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=2696s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "of blurry the reason why they're blurry is because there's no way of exactly predicting what's going to happen there is a little bit of uncertainty about where the cubes actually are and everything and so what the system produces is a prediction which is sort of an average of the multiple futures that can happen and that's a blurry prediction so how to deal with uncertainty is going to become the main problem here right we're going to predict what's going to happen in the world by doing video prediction or we're", "start_timestamp": "00:45:25", "end_timestamp": "00:46:04", "start_second": 2725, "end_second": 2764, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=2725s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "going to take an image or a piece of text mask some parts of it and then ask a system to reconstruct and there is no way to make those predictions exactly and so what we have to have are systems that can deal with the uncertainty in the prediction that can represent the uncertainty of the prediction and it's for that reason that I introduce the notion of energy-based learning you can think of it as kind of a weaker form of learning than what people are used to which is learning probabilistic models or learning densities you know", "start_timestamp": "00:46:04", "end_timestamp": "00:46:37", "start_second": 2764, "end_second": 2797, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=2764s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "distributions and the reason that we have to weaken it is because in high-dimensional continuous spaces like images we don't have good ways of representing distributions that mean anything useful so let's say our entire world consists of two scalar variables y1 and y2 we're not going to do prediction here I mean we could consider that one variable is observed and the other one is not but we don't know in advance which one so let's say you observe y2 okay and this is our training set each point here is a", "start_timestamp": "00:46:37", "end_timestamp": "00:47:12", "start_second": 2797, "end_second": 2832, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=2797s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "training sample so there is obviously some structure in our world here all the values of y1 y2 seem to lie on some sort of line I mean a curve and if I give you a value of y2 you can predict that the value of y1 will be sort of around here or around there is this thirty okay I couldn't tell if it was like a comma or not okay thirty minutes that's perfect so right so there are multiple possible predictions so if you train a neural net or whatever parametrized function to make one prediction of y1 as a", "start_timestamp": "00:47:12", "end_timestamp": "00:48:01", "start_second": 2832, "end_second": 2881, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=2832s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": 
"SaJL4SLfrcY", "text": "function of y2 it's not gonna work because you can only predict one output if you train the system with least squares when it sees half of the samples on this side and the other half on that side what it's going to produce is the mean of the two okay that's the best way to minimize the squared error but that's not a good prediction for y1 for this value of y2 right that's those blurry predictions I was telling you about that's the blurry prediction here right in the middle so how do we turn a prediction with multiple possible outputs into", "start_timestamp": "00:48:01", "end_timestamp": "00:48:35", "start_second": 2881, "end_second": 2915, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=2881s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "kind of an architecture if you want and the proposal is energy-based models of course if you're a probabilist you say well this is just a joint distribution I'm just going to learn the density of the joint distribution between those two variables and I'm done yeah you can do this in two dimensions you can't do this in 1 million dimensions when those things represent natural images for example so what I'm proposing is we're going to learn an energy function so think of it as the", "start_timestamp": "00:48:35", "end_timestamp": "00:49:08", "start_second": 2915, "end_second": 2948, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=2915s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "negative log of a probability but it's not going to be normalized we're not gonna care about normalization okay so in that way it's a little more general than probabilistic approaches so if our data are those blue beads here an appropriate energy 
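The mean-collapse just described, where least squares averages two valid answers into one bad one, is easy to reproduce. A toy numpy sketch, not from the talk: the two-mode data (y1 is either +1 or -1 regardless of y2) and the linear regressor are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# For each observed y2 there are two valid answers: y1 = +1 or y1 = -1.
y2 = rng.uniform(-1, 1, size=1000)
y1 = np.where(rng.random(1000) < 0.5, 1.0, -1.0)

# Least-squares fit of y1 as a function of y2 (slope + intercept).
X = np.stack([y2, np.ones_like(y2)], axis=1)
coef, *_ = np.linalg.lstsq(X, y1, rcond=None)
pred = X @ coef

# The regressor collapses to the mean of the two modes (about 0),
# far from both valid answers: the "blurry" prediction in the middle.
print(np.abs(pred).max() < 0.2)  # True, even though every target is +/-1
```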
function that captures the dependency between the two variables would look something like this it takes low energies on the samples and higher energies outside okay and if we have a system like this a function that takes two inputs in this", "start_timestamp": "00:49:08", "end_timestamp": "00:49:39", "start_second": 2948, "end_second": 2979, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=2948s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "case and that gives us basically the compatibility between the two inputs the two values that we give it we can use it to predict I'll give you a value of y2 and then by gradient descent or by some search method you can find the values that produce a low energy on the output and they correspond to the two values of y1 that are compatible with y2 okay so that's how inference works in those systems the system doesn't produce an output it only has", "start_timestamp": "00:49:39", "end_timestamp": "00:50:11", "start_second": 2979, "end_second": 3011, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=2979s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "inputs if I constrain the value of some of the inputs I can compute the value of the other inputs that will minimize the energy using some scheme so now the second question is how do you train this box that produces the energy function and the training will do things like shaping the energy function so that it takes low energy on the blue beads and higher energies outside so if you have a parametrized function say in the form of a neural net that produces a scalar output it's very easy to show it a sample and then tune", "start_timestamp": "00:50:11", "end_timestamp": "00:50:48", 
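Inference by clamping the observed variable and searching for low-energy values of the unobserved one can be sketched as follows. This is an editor's toy example: the hand-built energy, low on the unit circle, is a stand-in for a learned energy function, and grid search stands in for gradient descent.

```python
import numpy as np

# A hand-built energy that is low on the unit circle y1^2 + y2^2 = 1
# (a stand-in for a learned energy function over two scalar variables).
def energy(y1, y2):
    return (y1 ** 2 + y2 ** 2 - 1.0) ** 2

# Inference: clamp the observed y2 and search over y1 for low energy.
y2 = 0.6
grid = np.linspace(-2, 2, 4001)
E = energy(grid, y2)

# Keep every grid point whose energy is near the minimum; there are
# two compatible answers, y1 = +0.8 and y1 = -0.8.
candidates = [float(c) for c in grid[E < 1e-4]]
print(sorted({round(c, 1) for c in candidates}))  # [-0.8, 0.8]
```

Note that the system never "produces an output" in the feed-forward sense; both compatible values of y1 come out of the search, which is the point being made about multimodal prediction.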
"start_second": 3011, "end_second": 3048, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=3011s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "the parameters so the output goes down right so show it a sample one of the blue beads and then tweak the parameters of your net so that the output goes down so you get low energy for data points and the second question is how do you make sure the energy is higher outside because if the energy function is flat it just gives you a low value for everything and it doesn't play any interesting role so you have to make sure the energy is higher outside of the region of data and this is something that probabilistic models you know", "start_timestamp": "00:50:48", "end_timestamp": "00:51:21", "start_second": 3048, "end_second": 3081, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=3048s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "like maximum likelihood and normalized probabilistic models do automatically and the way you transform an energy-based model into a probabilistic one is the Gibbs distribution I mean you have several ways but that's kind of a very natural one to do so take your energy function take the exponential of minus the energy multiplied by some arbitrary positive constant and then normalize so you get a bunch of numbers between zero and one that sum to one okay in the discrete case that's called softmax in the continuous case", "start_timestamp": "00:51:21", "end_timestamp": "00:51:54", "start_second": 3081, "end_second": 3114, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=3081s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "it's called a Gibbs distribution in the general case where the 
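The Gibbs construction just described, exponentiate minus the energy times a positive constant and normalize, is exactly a softmax over negative energies in the discrete case. A minimal numpy sketch (an editor's illustration, feasible only because the domain here is small and discrete, which is the speaker's point about intractability in the general case):

```python
import numpy as np

def gibbs(energies, beta=1.0):
    """Turn a finite list of energies into probabilities:
    p_i = exp(-beta * E_i) / sum_j exp(-beta * E_j)."""
    w = np.exp(-beta * np.asarray(energies, dtype=float))
    return w / w.sum()

p = gibbs([0.0, 1.0, 4.0])
print(round(float(p.sum()), 6))  # 1.0 -- normalized
print(int(p.argmax()))           # 0  -- lowest energy, highest probability
```

Raising `beta` (the arbitrary positive constant) sharpens the distribution around the lowest-energy point without changing the energy function itself.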
energy function is complicated this normalization term is intractable you can't compute this integral so you can't actually turn the energy into a probability and that's why I'm arguing for just manipulating the energy function rather than going to a density because you can't normalize it you can only normalize when the energy has a trivial form and that's not that interesting so right so that's the deal with energy-based", "start_timestamp": "00:51:54", "end_timestamp": "00:52:28", "start_second": 3114, "end_second": 3148, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=3114s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "learning it's easy to make the energy low on the samples it's harder to make sure it's higher outside it's possible to interpret classic learning algorithms in terms of energy-based models you know classic unsupervised algorithms like PCA or k-means or Gaussian mixture models or sparse coding ICA things like this here are two examples for PCA and k-means so this is in two dimensions the variable is just a vector with two dimensions and the energy function for k-means for PCA I'm sorry is just the squared", "start_timestamp": "00:52:28", "end_timestamp": "00:53:12", "start_second": 3148, "end_second": 3192, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=3148s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "reconstruction error so you take a vector Y and you project it on the principal subspace which is done by this matrix W and then multiply by W transpose and assuming things are appropriately normalized you get a reconstruction of the original point which is just the location in the original space of the projection of the point onto the linear subspace the principal subspace so this is the principal subspace here in dark and you take any point you project it here and the energy is now the reconstruction error", "start_timestamp": "00:53:12", "end_timestamp": "00:53:48", "start_second": 3192, "end_second": 3228, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=3192s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "which is the distance between the original point and its projection on the principal subspace and so the grayscale here represents that energy okay zero energy on the principal subspace and energy that grows quadratically as you move away obviously this is not a good representation of that data manifold this is the data okay so the data points are sampled from this spiral and PCA doesn't do a good job at this right k-means has a funny kind of energy function which doesn't have a sort of direct form but", "start_timestamp": "00:53:48", "end_timestamp": "00:54:28", "start_second": 3228, "end_second": 3268, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=3228s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "it's the minimum of some more elementary energy function over some latent variable z and we'll come back to models of this type so we have an energy function which is a reconstruction error it's the squared distance between a data point y and its reconstruction and its reconstruction is the product of a prototype matrix whose columns are a bunch of prototypes multiplied by a z vector which is a latent variable and that z vector is constrained to be a one-hot vector so it's a vector with all zeros except it has a one at one", "start_timestamp": "00:54:28", "end_timestamp": "00:54:59", "start_second": 3268, "end_second": 3299, "url": 
"https://www.youtube.com/watch?v=SaJL4SLfrcY&t=3268s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "location and to measure the reconstruction error of a particular point you figure out which prototype is closest to it and then the reconstruction error is the distance between the data point and the closest prototype okay so minimizing with respect to the z vector will figure out which column of W which prototype is closest to y and that's the one that produces the lowest reconstruction error and so now the energy function is defined as the minimum over z where z is a one-of-", "start_timestamp": "00:54:59", "end_timestamp": "00:55:34", "start_second": 3299, "end_second": 3334, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=3299s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "K code of this energy function now when you plot this energy when you train k-means on this data set where the data is sampled from this pink spiral with I think 20 prototypes here you get a whole bunch of potential wells if you want quadratic bowls energy minima and the overall energy is the minimum of all of those twenty quadratic bowls it looks really beautiful in two dimensions it doesn't scale very well in high dimension because how do you populate a high-dimensional space with prototypes", "start_timestamp": "00:55:34", "end_timestamp": "00:56:18", "start_second": 3334, "end_second": 3378, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=3334s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "okay so those are merely two examples of classical unsupervised learning methods but here what I've done is list 
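The k-means energy described above, E(y) = min over one-hot z of ||y - W z||^2, reduces to the squared distance to the nearest prototype, since multiplying the prototype matrix by a one-of-K code just selects one column. A small numpy sketch, not from the talk; the prototype matrix here is made up.

```python
import numpy as np

def kmeans_energy(y, prototypes):
    """E(y) = min_z ||y - W z||^2 with z a one-of-K (one-hot) code.
    Selecting a column of W per code, this is the squared distance
    from y to the nearest prototype (column of W)."""
    diffs = prototypes - y[:, None]              # (dim, K) differences
    return float(np.min(np.sum(diffs ** 2, axis=0)))

W = np.array([[0.0, 1.0, 4.0],
              [0.0, 1.0, 0.0]])                  # K = 3 prototypes as columns
print(kmeans_energy(np.array([1.1, 1.0]), W))    # ~0.01, nearest is (1, 1)
print(kmeans_energy(np.array([10.0, 0.0]), W))   # 36.0, nearest is (4, 0)
```

Each prototype contributes one quadratic bowl, and the overall energy is the pointwise minimum of the bowls, exactly the landscape described for the spiral data.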
seven different classes of methods that ensure that the energy outside the region of data is higher than on the region of data so I mentioned the one at the top you build a machine so that the volume of low-energy stuff is constant and that's the case for PCA for k-means Gaussian mixture models sparse coding etc another one is you make your energy function very flexible but you think of it as the log of some probability and you do maximum", "start_timestamp": "00:56:18", "end_timestamp": "00:56:58", "start_second": 3378, "end_second": 3418, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=3378s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "likelihood so automatically because your probability distribution needs to be normalized it's going to have the effect of pushing up the energy of stuff you don't observe but it's very difficult because you get the log of this normalization term the partition function which is generally intractable and so it's very hard to do this so that's maximum likelihood and when it's intractable you have to use approximations like variational approximations or Monte Carlo methods or Markov chain Monte Carlo methods", "start_timestamp": "00:56:58", "end_timestamp": "00:57:31", "start_second": 3418, "end_second": 3451, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=3418s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "another technique which is quite popular is you push down on the energy of data points and then you push up on chosen locations outside and if you are familiar with GANs generative adversarial networks adversarial networks are a way of doing this so think of the object you're training in a GAN as the discriminator not the generator the discriminator is the thing you're actually training and if we think about the discriminator it is 
an energy function okay remove the exponential at the end of your discriminator it just", "start_timestamp": "00:57:31", "end_timestamp": "00:58:00", "start_second": 3451, "end_second": 3480, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=3451s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "produces a score think of this score as an energy that you're going to try to make large for bad samples and low for good samples okay low for examples you observe and high for samples you don't observe and the question is how do you generate bad samples whose energy you're going to push up and the idea of a GAN is that you train a neural net which from a bunch of random numbers is going to produce bad samples whose energy you're gonna push up that's your generator okay and in fact there's a paper on this called", "start_timestamp": "00:58:00", "end_timestamp": "00:58:33", "start_second": 3480, "end_second": 3513, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=3480s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "energy-based GANs so it's an interpretation of GANs in the context of energy-based models and we talked about the other ones except those two so this one denoising auto-encoder says I'm not going to train the system to actually compute an energy function explicitly but I'm gonna train a dynamical system to start from a point outside the region of data and then I'm going to train a neural net to map it back to the region of data okay so I take a training sample I corrupt it so now I have a noisy training sample and I", "start_timestamp": "00:58:33", "end_timestamp": "00:59:16", "start_second": 3513, "end_second": 3556, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=3513s", "title": "Self-Supervised Learning", "thumbnail":
"https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "train a neural net to map it back so my reconstruction error now is the distance between the reconstructed sample and the original one and so it's going to make the energy kind of be large outside the region of data and I'll come back to this a little more explicitly but my favorite one is the last one and that last one says we're going to regularize some parameter inside the network so that the volume of stuff that is properly reconstructed is low okay so we're going to make the system pay for reconstructing too many things and how", "start_timestamp": "00:59:16", "end_timestamp": "00:59:51", "start_second": 3556, "end_second": 3591, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=3556s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "to do this and I'll come to this in a minute so if you are a probabilist and you go through this you do maximum likelihood you have your energy function as the unnormalized negative log of some probability distribution so you go through the Gibbs distribution to turn it into a normalized distribution and now you have a bunch of data points what you want to do is maximize the probability that your model gives to your data points so the product of the probabilities that your model gives", "start_timestamp": "00:59:51", "end_timestamp": "01:00:26", "start_second": 3591, "end_second": 3626, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=3591s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "to the data points or you want to minimize the negative log of the probability that your model gives to all the data points and that's the objective function you see here so this is the negative log-likelihood if you want of a
data point it's the negative log of the numerator divided by beta because it's easier minus the negative log of the denominator okay but there's two minuses so that gives a plus so that's the objective function you need to minimize if you are a probabilist and this will have the effect of pushing", "start_timestamp": "01:00:26", "end_timestamp": "01:00:57", "start_second": 3626, "end_second": 3657, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=3626s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "down on the energy of the data points and this term will have the effect if you minimize this that you'll have to push up on the energy of every single point in your space okay because to make this small you have to make those energies high because it's a negative exponential if you compute the gradient of that loss function with respect to the parameters of your energy function you get the expression at the top here which tells you that a step of gradient is going to push down with a unit force on the data point and then the second term says I'm", "start_timestamp": "01:00:57", "end_timestamp": "01:01:32", "start_second": 3657, "end_second": 3692, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=3657s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "going to push up on every point in the space with a force that's proportional to the probability that my model gives to that point okay so points that have high probability are gonna get pushed up really hard points that have low probability which means high energy I'm gonna push not as hard the integral of the force is one and so when the only data point that has high probability which means low energy is the correct one then those two terms balance and the thing converges if you want but you can't actually compute this
integral so", "start_timestamp": "01:01:32", "end_timestamp": "01:02:06", "start_second": 3692, "end_second": 3726, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=3692s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "very often you have to approximate it using Monte Carlo methods or variational approximations and this is what a lot of people in probabilistic modeling do there's a lot of papers on this but you don't need to do all this okay so now let's talk about latent variable models so I talked about the k-means method where there is a Z variable that you have to minimize over to get the energy of your system and it's a specific example of a kind of more general approach ignore the stuff on the right for", "start_timestamp": "01:02:06", "end_timestamp": "01:02:43", "start_second": 3726, "end_second": 3763, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=3726s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "now just look at the left side so you can either prefer the equations or the block diagram depending on whether you are a mathematician a computer scientist or an engineer let's take the block diagram now so you have a data point Y that you want to reconstruct and your energy function is going to be the reconstruction error you're going to reconstruct this data point by running a latent variable through a decoder function think of it as a neural net in the simple sketch it's just a simple matrix W called a dictionary matrix and then", "start_timestamp": "01:02:43", "end_timestamp": "01:03:20", "start_second": 3763, "end_second": 3800, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=3763s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY",
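The maximum-likelihood gradient just described (push down with unit force at the data points, push up everywhere in proportion to the model's probability) can be sketched numerically. This is my own toy illustration, not from the lecture: a quadratic energy E(y) = (y - theta)^2 on a discretized 1-D space, with the partition function computed by brute force on the grid.

```python
import numpy as np

# Toy energy-based model: E_theta(y) = (y - theta)^2, p(y) = exp(-E)/Z on a grid.
ys = np.linspace(-3, 3, 601)           # discretized 1-D space
theta = 0.0                            # model parameter
data = np.array([1.0, 1.2, 0.8])       # observed samples (illustrative)

def energy(y, theta):
    return (y - theta) ** 2

def dE_dtheta(y, theta):
    return -2.0 * (y - theta)

# Gradient of the negative log-likelihood:
#   mean over data of dE/dtheta  minus  model expectation of dE/dtheta.
p = np.exp(-energy(ys, theta))
p /= p.sum()                           # normalize: the Gibbs distribution on the grid
grad = dE_dtheta(data, theta).mean() - (p * dE_dtheta(ys, theta)).sum()

# One gradient step pushes the energy down at the data points while the
# normalization term pushes it up elsewhere.
theta_new = theta - 0.1 * grad
```

After the step, the energy at the observed points is lower than before, which is exactly the push-down/push-up balance the lecture describes.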
"text": "you compute the square error so in this case the cost function here is just a square error between the data point and the multiplication of a matrix by a latent vector okay this is like k-means except we don't constrain this vector to be one of K and there's a problem with this if you just use this without anything else if Z has the same dimension as Y or bigger or even slightly smaller there is always going to be a Z that is going to perfectly", "start_timestamp": "01:03:20", "end_timestamp": "01:03:52", "start_second": 3800, "end_second": 3832, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=3800s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "reconstruct any Y you throw at it okay for a non-degenerate version of the decoder a non-degenerate matrix here there is always going to be a Z that exactly reconstructs the Y that's not good because that means every point in your Y space is going to be exactly reconstructed your energy function is going to be flat equal to zero everywhere so the question now is how do you make sure the energy is high on points that you don't train on and this R of Z here is a regularizer that is going to make you pay for choosing a Z that's outside", "start_timestamp": "01:03:52", "end_timestamp": "01:04:26", "start_second": 3832, "end_second": 3866, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=3832s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "of kind of a small volume if you want okay and the usual trick something that has been worked on a lot is sparse coding in sparse coding the regularizer is the L1 norm of Z so it's the sum of the absolute values of the components of Z and the effect of this is basically to make the machine want to make
many components of Z zero which is why it's called sparse coding you're trying to reconstruct a data point Y as the product of a sparse vector with lots of zeros in it multiplied by a matrix okay so assuming", "start_timestamp": "01:04:26", "end_timestamp": "01:05:10", "start_second": 3866, "end_second": 3910, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=3866s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "W is known I give you a Y you find the Z that minimizes the sum of those two terms it's going to give you a Z vector that is sparse has got lots of zeros okay and there's a region of space that has low energy basically that energy once you minimize with respect to Z is the energy of every data point and because Z is constrained to be sparse it's going to be a small region of space that has low energy and outside is going to be high energy okay so by adjusting the alpha coefficient here you can make the region", "start_timestamp": "01:05:10", "end_timestamp": "01:05:48", "start_second": 3910, "end_second": 3948, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=3910s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "of space that is properly reconstructed as small as you want or larger if you want the general form is something like this where the energy is some cost function that measures the discrepancy between the data point and the decoder function applied to Z the decoder function is trainable and then you have a regularizer that limits the information content essentially of Z k-means does this implicitly by restricting Z to be a discrete variable okay PCA or things similar to this implicitly do this by limiting the dimension of Z", "start_timestamp": "01:05:48", "end_timestamp": "01:06:25", "start_second": 3948, "end_second":
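The sparse-coding inference just described (minimize the squared reconstruction error plus alpha times the L1 norm of Z, with the dictionary W fixed) can be sketched with ISTA-style soft-thresholding. All sizes, seeds, and names below are illustrative assumptions, not from the talk.

```python
import numpy as np

# Hedged sketch of sparse-coding inference: minimize over z
#   0.5 * ||y - W z||^2 + alpha * ||z||_1
# with W fixed, using ISTA (gradient step on the quadratic term + soft-threshold).
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))
W /= np.linalg.norm(W, axis=0)            # keep dictionary columns bounded
y = rng.standard_normal(8)
alpha = 0.5

def soft_threshold(v, t):
    # proximal operator of the L1 norm: shrinks each component toward zero
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

step = 1.0 / np.linalg.norm(W, 2) ** 2    # 1 / Lipschitz constant of the gradient
z = np.zeros(16)
for _ in range(200):
    grad = W.T @ (W @ z - y)              # gradient of the quadratic term
    z = soft_threshold(z - step * grad, step * alpha)
```

The L1 term drives many components of z to exactly zero, which is what makes the region of low energy small; raising alpha makes it smaller still.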
3985, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=3948s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "ignore the bottom for now so if you apply sparse coding to this little spiral data set this is the energy function you get so every line that you can sort of see here is a different linear subspace which is actually the selection of a different column or pair of columns of the W matrix and the whole thing kind of fits the data right okay the system is trained to just minimize the reconstruction error plus the regularizer on the data points and because it's got this regularizer as a consequence it gives high energy to stuff", "start_timestamp": "01:06:25", "end_timestamp": "01:07:12", "start_second": 3985, "end_second": 4032, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=3985s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "that is outside you apply this to MNIST you get things like this so these are the columns of the W matrix and what sparse coding does in this case is that it reconstructs every digit in MNIST as a linear combination of a small number of those guys okay because only a small number of components of Z can be nonzero so a small number of columns will be selected so every digit is gonna be reconstructed as a linear combination of a small number of those you train the W matrix here in sparse coding just with gradient descent but you have to play a", "start_timestamp": "01:07:12", "end_timestamp": "01:07:44", "start_second": 4032, "end_second": 4064, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=4032s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "trick which is that you have to constrain the norm of those vectors to
be within a sphere like less than 1 for example otherwise they blow up and the Z variable shrinks but that's not really interesting it's gonna be a degenerate solution so you have to constrain those vectors the columns of W to be small to be bounded and as a result when you train the system the system identifies those pieces that need to be combined to form characters as kind of small pieces of strokes which is kind of a logical way of decomposing a character", "start_timestamp": "01:07:44", "end_timestamp": "01:08:17", "start_second": 4064, "end_second": 4097, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=4064s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "into elementary components some of our colleagues at Facebook recently used one of those latent variable systems but they kind of limited the information content of the latent representation by making it low dimensional they call that GLO generation through latent optimization this is by Bojanowski Joulin Lopez-Paz and Szlam and if you train on faces for example you get reconstructed faces like this with relatively low dimensional latent vectors and you can sort of interpolate in that latent", "start_timestamp": "01:08:17", "end_timestamp": "01:08:56", "start_second": 4097, "end_second": 4136, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=4097s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "space and sort of interpolate from one face to another this is a few years old but it's kind of similar to the GANs of the time there's been a lot of progress in GANs not so much in this particular approach but it's kind of an interesting way of approaching the problem now there is an issue with sparse coding which is
that if I give you a Y you have to run an optimization algorithm to find the optimal Z that minimizes the energy and that", "start_timestamp": "01:08:56", "end_timestamp": "01:09:28", "start_second": 4136, "end_second": 4168, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=4136s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "might be a little expensive even if you use Julien's code called SPAMS it's still a little expensive so here's a trick and the trick is you're not going to make the Z variable a latent variable that you have to optimize over you're just going to make it the output of an encoder so you're going to train a neural net here to predict what the optimal code is for sparse coding as close as possible to the optimal code so you have a piece of data here you run it into an encoder it predicts the value of a variable this variable is regularized", "start_timestamp": "01:09:28", "end_timestamp": "01:10:00", "start_second": 4168, "end_second": 4200, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=4168s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "to be sparse through this R of Z which could be an L1 norm and then you reconstruct the data this is the general form of a regularized auto-encoder and in this particular form with an L1 regularizer it's called a sparse auto-encoder now there are two forms of it a form where it's just a neural net with an additional term in the objective function for training and another form where you still have Z as a latent variable but now you have a third term in the energy that makes you pay for making Z different from the output of", "start_timestamp": "01:10:00", "end_timestamp": "01:10:38", "start_second": 4200, "end_second": 4238, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=4200s", "title": "Self-Supervised Learning",
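The three-term energy just described, with a trainable encoder predicting the sparse code, can be written down directly. This is a minimal sketch under my own assumptions: a linear decoder dictionary `D` and a linear stand-in `We` for the encoder, both hypothetical names.

```python
import numpy as np

# Hedged sketch of a predictive-sparse-decomposition-style energy:
#   E(y, z) = ||y - D z||^2 + alpha * ||z||_1 + ||z - enc(y)||^2
# D is the decoder dictionary, We a linear stand-in for the encoder.
rng = np.random.default_rng(1)
D = rng.standard_normal((8, 16)) * 0.3    # decoder (dictionary) weights
We = rng.standard_normal((16, 8)) * 0.3   # encoder weights (illustrative)
alpha = 0.1

def psd_energy(y, z):
    recon = np.sum((y - D @ z) ** 2)       # reconstruction error
    sparsity = alpha * np.sum(np.abs(z))   # L1 regularizer on the code
    pred = np.sum((z - We @ y) ** 2)       # encoder's prediction error
    return recon + sparsity + pred
```

During training one would minimize this over z for each y, then take one gradient step on D and We, matching the procedure the lecture describes next.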
"thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "the encoder and so the procedure is you give it a Y and by inference you find a Z that minimizes the sum of three terms the reconstruction error the regularizer and the prediction error here which is the distance from Z to the output of the encoder which is a prediction of Z if you want then once you have that Z you do one step of gradient descent on the parameters of the encoder and of the decoder so the decoder tries to get its output to get closer to Y and the encoder tries to get its output to", "start_timestamp": "01:10:38", "end_timestamp": "01:11:05", "start_second": 4238, "end_second": 4265, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=4238s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "get closer to Z by minimizing this okay and there's a couple of papers on this that are about ten years old we used to call this predictive sparse decomposition and there's a form of it called LISTA so if you run this algorithm on natural image patches this is the learning algorithm actually running and you start with random weights so what you see here each square is one column of the W matrix in the encoder I believe here but there is a similar one in the decoder which looks very similar and as learning proceeds as you train on more and more", "start_timestamp": "01:11:05", "end_timestamp": "01:11:44", "start_second": 4265, "end_second": 4304, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=4265s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "natural image patches what you see is a pattern of features appearing they end up being oriented edge oriented contour detectors if you want there is a convolutional form of this where the reconstruction now
consists in taking a bunch of feature maps those are your latent variables convolving them with a bunch of kernels summing up the results and that's your reconstruction okay so instead of having scalars here you have feature maps and instead of having columns of a matrix you have convolutional kernels and you get", "start_timestamp": "01:11:44", "end_timestamp": "01:12:22", "start_second": 4304, "end_second": 4342, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=4304s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "really cool filters so those are the decoder filters these are the encoder filters for various dimensions of the code here so you get natural features emerging completely unsupervised from training on natural image patches let me skip ahead a little bit okay so there is a formulation you've probably heard of the variational auto-encoder and when you look at those papers there is a whole bunch of variational bounds and all that stuff and it's very hard to understand intuitively", "start_timestamp": "01:12:22", "end_timestamp": "01:13:01", "start_second": 4342, "end_second": 4381, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=4342s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "what's going on so I'm going to kind of formulate in the context of this energy-based regularized auto-encoder model what a variational auto-encoder is and I'm sure for you it is going to be enlightening if you haven't completely understood already what a variational auto-encoder is so a variational auto-encoder is an auto-encoder you feed it the data a piece of data an image patch or whatever you run it into an encoder you predict the code and then you add noise to that code in a particular way you add Gaussian noise to that code and then you run through a",
"start_timestamp": "01:13:01", "end_timestamp": "01:13:32", "start_second": 4381, "end_second": 4412, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=4381s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "decoder and you have a reconstruction error and you minimize the reconstruction error by training this entire system now okay so in the space of codes in the Z space every training sample is going to be a point okay and there is going to be some sort of structure to those points now if you add noise to each of those points which is what the variational auto-encoder does you turn every single one of those points into a fuzzy ball right so you have a fuzzy sphere around every point now here's a problem if you give this", "start_timestamp": "01:13:32", "end_timestamp": "01:14:14", "start_second": 4412, "end_second": 4454, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=4412s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "data point and then you add noise and the noise puts it here when you reconstruct the system is going to think it was that point and so the reconstruction error is not going to be very good because it's going to confuse one data point with another one so when you train the system the consequence of this noise is that all those fuzzy balls are going to fly away from each other okay they're going to try to get as far from each other as possible to minimize the confusion and that doesn't help you it just makes the", "start_timestamp": "01:14:14", "end_timestamp": "01:14:50", "start_second": 4454, "end_second": 4490, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=4454s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY",
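The noisy-code step just described can be sketched as follows. The reparameterized sampling and the diagonal-Gaussian KL term (the "spring" that, as explained next, keeps the fuzzy balls near the origin and their size near one) use illustrative numbers of my own, not anything from the lecture's slides.

```python
import numpy as np

# Sketch of the VAE latent step: add Gaussian noise to the predicted code,
# with a KL penalty toward a standard normal. Shapes are illustrative.
rng = np.random.default_rng(0)
mu = np.array([0.5, -1.0])          # encoder's predicted code (mean)
log_var = np.array([0.0, 0.2])      # predicted log-variance per dimension

eps = rng.standard_normal(mu.shape)
z = mu + np.exp(0.5 * log_var) * eps     # noisy code fed to the decoder

# KL(q(z|y) || N(0, I)) for a diagonal Gaussian: the "spring" term that
# penalizes codes far from the origin and variances far from one.
kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)
```

Training minimizes the reconstruction error of the decoder applied to z plus this KL term, which is one more way of limiting the information content of the code.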
"text": "weights of the encoder larger but doesn't really help you so to prevent this from happening you're going to attach every single one of those spheres with a spring to the center okay so you tell the spheres you can't go too far you pay a price for going too far and so the system is going to try to find a trade-off between pushing the balls far away from each other and not being able to do this it's going to merge or let some of the spheres interpenetrate as long as the reconstruction error that this causes is", "start_timestamp": "01:14:50", "end_timestamp": "01:15:27", "start_second": 4490, "end_second": 4527, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=4490s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "not too high and so in the end it's going to try to find some sort of structure in the latent space that will sort of organize those points so that they basically capture the structure of the data and you can think of this as just another way of limiting the information content of the code so our regularizer R of Z with sparse coding was limiting the information content of the code this is another way of limiting the information content of the code you can do it by imposing low dimension sparsity various other ways of this type or you", "start_timestamp": "01:15:27", "end_timestamp": "01:15:58", "start_second": 4527, "end_second": 4558, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=4527s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "can impose it by adding noise as long as you limit the norm of the codes now it's a bit more formal in a variational auto-encoder where the size of those balls is not fixed it can actually vary in all dimensions but there is a cost function that makes you pay for
making it significantly smaller than one and there's another term that makes sure the mean of all those points is actually centered on zero which the spring analogy doesn't quite capture so the denoising auto", "start_timestamp": "01:15:58", "end_timestamp": "01:16:30", "start_second": 4558, "end_second": 4590, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=4558s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "encoder is this idea where you take input data you corrupt it you run it through the encoder and you reconstruct the original point without corruption and so visually it looks like this where this would be the data point every one of those points is a corrupted training sample if you want so it's still the spiral example and then you take a data point you corrupt it and you train a neural net to map from this to here this is the input this is the desired output very simple", "start_timestamp": "01:16:30", "end_timestamp": "01:17:11", "start_second": 4590, "end_second": 4631, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=4590s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "you train the system on that and then what you can plot is the blue points here are the outputs of that auto-encoder neural net for every one of those golden points right so this one maps to here and this one to there this one to here etc so you see that the system has kind of learned a sensible mapping here and then you take every grid point every point on the grid in the space and the blue points are the image of every single one of those points and they primarily cluster around the region of", "start_timestamp": "01:17:11", "end_timestamp": "01:17:51",
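The denoising setup on the spiral can be sketched like this; the spiral parameterization, the noise level, and the helper names are my own illustrative choices, not the lecture's code.

```python
import numpy as np

# Sketch of the denoising-auto-encoder training data: corrupt each training
# point with Gaussian noise; the net's input is the noisy point and the
# target is the clean one. Toy 2-D spiral, illustrative only.
rng = np.random.default_rng(0)
t = rng.uniform(0, 3 * np.pi, size=200)
clean = np.stack([t * np.cos(t), t * np.sin(t)], axis=1)  # points on the spiral
noisy = clean + rng.standard_normal(clean.shape) * 0.5    # corrupted inputs

# Training pairs map noisy -> clean; the reconstruction error is the
# squared distance between the net's output and the clean sample.
def recon_error(pred, target):
    return np.mean(np.sum((pred - target) ** 2, axis=1))
```

A network trained on these pairs learns a vector field pointing back toward the data region, which is exactly the picture described above.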
"start_second": 4631, "end_second": 4671, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=4631s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "high data density which is what you want the energy function is just the square reconstruction error between every point in the space and the point that it maps to and those little vectors here indicate the displacement if you want and the color indicates the energy function there is an issue which is that the energy is actually zero along this line this is not a ridge it's actually a valley and that's terrible okay so it's a flaw of denoising auto-encoders they can create valleys in places where you shouldn't have", "start_timestamp": "01:17:51", "end_timestamp": "01:18:20", "start_second": 4671, "end_second": 4700, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=4671s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "one but it works really well for NLP so some of you have probably heard of BERT okay it's taken the world by storm in NLP and BERT is a special case of denoising auto-encoder where you give it a piece of text a window of a few hundred words and I know I'm out of time a window of a few hundred words you mask so the corruption consists in masking some of the words in that sentence or in that piece of text typically 15 percent of the words and then you train a giant transformer neural net which I'm not", "start_timestamp": "01:18:20", "end_timestamp": "01:18:53", "start_second": 4700, "end_second": 4733, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=4700s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "going to go into the explanation of to basically predict the words that are
missing and there it's easy to handle the uncertainty in the prediction because at the output you have a big softmax that gives you a probability distribution over all the words in your dictionary so here I don't have the issue that you have with predicting video where it's a high-dimensional continuous space it's easy to handle uncertainty in a discrete space which is why it works well in this case so you train the system to do reconstruction on tons of", "start_timestamp": "01:18:53", "end_timestamp": "01:19:21", "start_second": 4733, "end_second": 4761, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=4733s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "text billions of segments of text and in the end you use the internal representation learned by the network as input to a downstream supervised task like natural language inference parsing Winograd schemas all kinds of stuff and this beats the record on just about all the benchmarks in GLUE so GLUE and SuperGLUE are sets of benchmarks in natural language understanding and systems based on BERT the latest one is RoBERTa from Facebook actually hold the record I think a few days ago Microsoft came up with an", "start_timestamp": "01:19:21", "end_timestamp": "01:19:57", "start_second": 4761, "end_second": 4797, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=4761s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "improvement on RoBERTa that actually brought the performance a little higher it works really well but it doesn't work on images so if you use the same trick on images you block out a piece of the image and you ask the system to reconstruct it it doesn't quite work it doesn't give you features that are very useful in the end for vision let's see I'm gonna stop here okay I just want to
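The BERT-style corruption described a moment ago (mask roughly 15 percent of the words, keep the originals as prediction targets) can be sketched in a few lines. The sentence, the mask symbol, and the helper names are illustrative assumptions, not BERT's actual preprocessing code.

```python
import random

# Toy sketch of masked-word corruption: hide ~15% of the tokens and
# remember the originals as the prediction targets for the model.
random.seed(0)
tokens = "the cat sat on the mat because it was tired".split()
n_mask = max(1, round(0.15 * len(tokens)))            # ~15% of positions
mask_idx = set(random.sample(range(len(tokens)), k=n_mask))

masked = [("[MASK]" if i in mask_idx else tok) for i, tok in enumerate(tokens)]
targets = {i: tokens[i] for i in mask_idx}            # words the model must predict
```

The model sees `masked` as input and is trained, via a softmax over the whole vocabulary at each masked position, to recover `targets`, which is why uncertainty is easy to handle in this discrete setting.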
show you a simple example of prediction under uncertainty that's interesting and it's an example of training one of those predictive models", "start_timestamp": "01:19:57", "end_timestamp": "01:20:43", "start_second": 4797, "end_second": 4843, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=4797s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "to be a forward model for a control problem of driving a car for example so if you drive a car you might want to be able to predict what the cars around you are going to do and it's not deterministic so if you are this guy you have this car these are all the cars around you this is a little rectangle extracted around yourself and it might be useful to predict what the cars around you are going to do if you want to plan a trajectory that will reduce the probability of future accidents so", "start_timestamp": "01:20:43", "end_timestamp": "01:21:21", "start_second": 4843, "end_second": 4881, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=4843s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "one thing you can do is something like this where you take a few frames so you have the blue car the green cars are cars around you you're observing the world doing some stuff around you and what you're going to train is one of those predictive models a big ConvNet with some latent variable to predict the next frame and I won't go into the details of the precise architecture it's basically very similar to a VAE in this case where you get the state of the world which is the environment of the car you run it to", "start_timestamp": "01:21:21", "end_timestamp": "01:21:50", "start_second": 4881, "end_second": 4910, "url":
"https://www.youtube.com/watch?v=SaJL4SLfrcY&t=4881s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "a couple you know few layers of a ConvNet you add a latent variable here which goes through another couple layers of a neural net the latent variable is relatively low dimensional and then you run through a decoder to predict the next frame and to predict the value of the latent variable there is actually an encoder and of course the system could cheat because now it has the answer the target and so you could just use the value of the target to predict the latent variable that wouldn't be very useful so you restrict the information", "start_timestamp": "01:21:50", "end_timestamp": "01:22:18", "start_second": 4910, "end_second": 4938, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=4910s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "content in the latent variable because this looks like an autoencoder and you do this by adding noise VAE style essentially skipping details so what you get here is this is the kind of prediction you get if you don't use the latent variable you set it to zero all the time you get deterministic predictions but they become blurry pretty quickly after a while and those are multiple predictions that you get by making different samples of the latent variables right so you get different futures by getting different samples of", "start_timestamp": "01:22:18", "end_timestamp": "01:22:52", "start_second": 4938, "end_second": 4972, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=4938s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "the latent variable you can use this type of model as a forward model in a control system where you have the state of the 
world the action you take whether you turn the wheel whether you hit the accelerator or brake you run this through your forward model it gives you the next state it takes a random sample from the latent variable to make that prediction you can run this for multiple time steps there's a cost function that measures if the car is in lane if it's too close to other cars it's differentiable so by back propagation you can propagate gradients", "start_timestamp": "01:22:52", "end_timestamp": "01:23:21", "start_second": 4972, "end_second": 5001, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=4972s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "through this entire thing and train another neural net called a policy network to figure out what is the best action it should take so as to minimize the expected value of the cost and it's not reinforcement learning it's all differentiable so it's just backprop okay no reinforcement there if you do this it doesn't work very well you have to regularize the system by making sure it stays in regions of the space where the forward model makes good predictions which I'm not going to go into the details of and then after you do this", "start_timestamp": "01:23:21", "end_timestamp": "01:23:49", "start_second": 5001, "end_second": 5029, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=5001s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "SaJL4SLfrcY", "text": "you run the policy network so there's no planning necessary in the end the policy network has already kind of thought in its mind when it was training about all the bad things that could happen and this is the blue car here driving itself in traffic and you have to realize that the traffic doesn't realize the blue car is here it doesn't see it here is maybe a better example so the yellow car is a real car the 
blue car started at the same location but decided to do something else so this is the one that we drive and here it's", "start_timestamp": "01:23:49", "end_timestamp": "01:24:23", "start_second": 5029, "end_second": 5063, "url": "https://www.youtube.com/watch?v=SaJL4SLfrcY&t=5029s", "title": "Self-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/SaJL4SLfrcY/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "I just feel Ludwig has young legs that's all set anyway great to see everyone right so where to begin so this is joint work with a bunch of awesome people really pushing on a lot of really interesting empirical work and I'm not gonna say that the talk will be uncontroversial so if I say a thing that makes you angry that's from me and if you say anything that's like well that's awesome that's from one of these four alright so I know this is a workshop on deep learning I'd like to start with", "start_timestamp": "00:00:00", "end_timestamp": "00:00:49", "start_second": 0, "end_second": 49, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=0s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "machine learning that's why I talk about machine learning and I want to talk about kind of the conventional wisdom that we give to our undergraduates Jonathan here teaches hundreds of them and I don't know how many of these things you tell them one that like somehow to have a good predictor you have to balance bias and variance alright and that you really should not perfectly fit your training data that's a no-no something we definitely teach them right yeah you have to oh he already knows the punch", "start_timestamp": "00:00:49", "end_timestamp": "00:01:20", "start_second": 49, "end_second": 80, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=49s", "title": "Training on 
the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "line this is the sad part and we know that high capacity models right when you have really big ones they shouldn't generalize that you've got to keep them small because if you you know sat down on the first day trying to do one-dimensional polynomial regression right and optimizing high precision non convex optimization that's hard and I think everybody in this room at this point knows that none of these are true right it's weird right so we've all kind of come to terms with the fact that none of these things are true and yet we", "start_timestamp": "00:01:20", "end_timestamp": "00:01:45", "start_second": 80, "end_second": 105, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=80s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "teach in our undergraduate classes I'm trying it's really hard to teach them anything else to be honest nuance is really difficult at the undergraduate level especially when there are 700 of them so I think that's a kind of complicated thing but it is this weird thing that we're in this weird regime where everybody here knows that none of these are true so I want to push a little bit on some other things that we tell people that also maybe aren't true kind of in this talk just in case you don't", "start_timestamp": "00:01:45", "end_timestamp": "00:02:13", "start_second": 105, "end_second": 133, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=105s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "believe me what are we okay first of all what do I mean by machine learning I mean something very narrow right for me when I say machine learning yes I would upset 
some of my colleagues notably Mike Jordan but as my colleagues might say this is all machine learning is it's just making predictions from examples I have a set of something I have a set of something else and I want to find prediction functions that map X to Y I know there's more than that but this is the backbone this is why all of you are here this is why they're", "start_timestamp": "00:02:13", "end_timestamp": "00:02:38", "start_second": 133, "end_second": 158, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=133s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "building our semester than this and so the whole idea is that how do we actually estimate these things we estimate them from data we collect as many training data points as we can we estimate them from data and then we just hope that this works on other data a big hope big hope is that we have a lot of data if we collect enough of it we get all the edge cases we can drive our cars whatever those cases are right and so that's the notion of what machine learning people mean by generalization I actually think generalization is one of", "start_timestamp": "00:02:38", "end_timestamp": "00:03:10", "start_second": 158, "end_second": 190, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=158s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "the machine learning is full of words that confuse the listener they're just designed to confuse you so generalization when I say that to you that means that you know you learn how to throw a baseball and then you can throw a softball that is not what we mean in machine learning at all what we mean in machine learning is if you throw a baseball you know how to throw a baseball as long as it's a regulation baseball made to regulation 
weight you're gonna be able to throw that baseball and that's not at all what normal", "start_timestamp": "00:03:10", "end_timestamp": "00:03:38", "start_second": 190, "end_second": 218, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=190s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "people think right so the idea is that you give me examples from a distribution I would like to find a good prediction function on these examples I have some loss function that I believe is a reasonable thing that I would like to minimize in expected value meaning that if I have some new thing that comes from this distribution I'd like to be able to do well that's not known and so what you do instead is you replace it with the sample average and the empirical risk and you minimize this and like the only thing we're allowed to do", "start_timestamp": "00:03:38", "end_timestamp": "00:04:05", "start_second": 218, "end_second": 245, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=218s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "the only thing we're allowed to do now is minimize this using one of two algorithms and they're like three things we can do we minimize it that way and then of course our generalization error is just the difference between these two things and so the question is you know if we can compute this one when is that a good or bad proxy for the other one right that's what we mean by generalization it's just a badly named term but it's a very interesting problem and it's kind of like the core problem in machine", "start_timestamp": "00:04:05", "end_timestamp": "00:04:33", "start_second": 245, "end_second": 273, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=245s", "title": 
"Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "learning right and the core theorem of machine learning is that the population error that we really care about is just equal to the training error plus the generalization error right sorry that's like the foundation of ten thousand papers right we've all done that at one point in our lives right so we can measure this one and then we do a lot of thinking about why this should be small I mean this requires like the associative property there's stuff to be done here right I think we can quibble a", "start_timestamp": "00:04:33", "end_timestamp": "00:05:08", "start_second": 273, "end_second": 308, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=273s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "little bit about derived versus axiomatic right so I think right so what can we take away from this we know that if you have a small training error that means that the risk itself is really just the generalization error right so if your training error for example is zero you just hope you know somehow that the generalization error is small and sorry and zero training error we know does not imply overfitting even though that's another thing that sometimes gets lost in the weeds like", "start_timestamp": "00:05:08", "end_timestamp": "00:05:38", "start_second": 308, "end_second": 338, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=308s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "for example the paper I really like which won a test of time award at NeurIPS last year by um Bousquet and Bottou right has this thing 
saying that you shouldn't run SGD too long because at some point all of these terms should be one over square root of n but we know that's not really true right I mean just because this is one over square root of n doesn't mean this one is one over square root of n and just cuz this is zero doesn't mean this one's small so you know it's a useful way of", "start_timestamp": "00:05:38", "end_timestamp": "00:06:04", "start_second": 338, "end_second": 364, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=338s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "thinking about it but sometimes we overfit to what these kinds of papers say all right so I don't want to say it's not always true but it's a useful way of thinking about it another way that this one sometimes ends up presented this is how I learned it not everybody presents it this way is that you decompose the error into three parts it's the same proof though right so there you compare the error of your estimator versus the error of the best thing in the class of stuff", "start_timestamp": "00:06:04", "end_timestamp": "00:06:30", "start_second": 364, "end_second": 390, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=364s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "you've been looking at right and then you compare the error of the best in the class to like what the true prediction function the best population risk minimizer is and then you are stuck with some bias at the end which is just the irreducible error that you can never get away from and do whichever one you like more I don't care I like this one just because it allows me to attack my least favorite figure I'm kind 
of gonna be like the core of the talk today right so this is my least favorite", "start_timestamp": "00:06:30", "end_timestamp": "00:06:58", "start_second": 390, "end_second": 418, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=390s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "figure it's from Hastie and Tibshirani I like those guys but in this figure we read way too much into this right so somehow the idea here is that we have to balance bias and variance in order to get good model complexity again you can do that you can do that but this is by no means the only way to generalize and we know this okay I learned this because of deep learning and to be fair I learned this before and had forgotten it and I'm gonna talk to you about that in a second but I learned it again recently because of deep", "start_timestamp": "00:06:58", "end_timestamp": "00:07:26", "start_second": 418, "end_second": 446, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=418s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "learning that you can just make these models gigantic and they generalize so in a very unpopular paper that I know lots of people who have my back here right now don't like sorry maybe just one anyway we ran a bunch of experiments on the CIFAR-10 dataset everybody's favorite dataset where we have chickens frogs deer and trucks and so on so this is a 10 class classification problem it's relatively high dimensional it has three thousand dimensions because of the pixels 50,000 data points and if you take you
"Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "know what we found is that you can either get the loss to be nonzero here this loss is the log loss which is not the classification accuracy you can work really hard to get your generalization error down and in this case the test error or you could just run it to zero essentially just taking this configuration turning off all the regularization and running to zero and while you see a drop in accuracy you don't see a gigantic drop so the test error increases but only by about 5% and moreover if you just pick a bigger model", "start_timestamp": "00:08:04", "end_timestamp": "00:08:34", "start_second": 484, "end_second": 514, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=484s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "now you're considerably better than the original AlexNet if you pick an even bigger model you keep going down she wants here I'm sure he tried something even larger we just ran out of time so right so somehow here the regularization parameters turned out to be just knobs that you can tune in terms of you could also tune architectures you could do lots of things and keep pushing this error down and indeed we saw the same thing on ImageNet where here we have chicken frog deer and truck much more clear right I", "start_timestamp": "00:08:34", "end_timestamp": "00:09:04", "start_second": 514, "end_second": 544, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=514s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "think at that point anyway and so we looked at an inception model inside Google I really don't like this experiment I just 
want to show it just to kind of give evidence that this happens on larger datasets the reason I don't like this experiment is because of the way that these things were trained inside Google at the time we were able to run about six experiments six runs I think we used I think one of the more valuable things that's happened to the community is this DAWNBench benchmark and we just", "start_timestamp": "00:09:04", "end_timestamp": "00:09:31", "start_second": 544, "end_second": 571, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=544s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "used the DAWNBench benchmark we've been able to run hundreds of experiments much better here but all we were able to do here was toggle flags inside some google3 models and so in particular you know in this case what we could do is we could turn off the L2 regularization and we could turn off the dropout sorry the data augmentation and you could still get perfect data interpolation and even note that the top five accuracy sorry top five error here is only 19 point three you try to get to something", "start_timestamp": "00:09:31", "end_timestamp": "00:10:00", "start_second": 571, "end_second": 600, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=571s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "that has a 19 point three percent error it's really hard so it's significantly worse than what the inception model is getting but again it's not catastrophic this would have beat AlexNet the first time around so it's still good accuracy yeah let's ever talk like that regularization before my control variance so what yeah so okay there is talk of implicit 
regularization I guess what I would say is that we started to just look at more models what we started to do is stop looking at just these individual models", "start_timestamp": "00:10:00", "end_timestamp": "00:10:46", "start_second": 600, "end_second": 646, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=600s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "we just started to download more this actually became a big trend in the research group um there's something that Becca Roelofs started kind of pushed us in this direction and the mood became sitting here kind of really pushed us even further in this context of GitHub as an experimental resource so you just go pull some models and then see what happens and a bunch of things so this is the scatter plot I don't really buy that the number of parameters in the neural network is actually meaningful but you can see that essentially as you", "start_timestamp": "00:10:46", "end_timestamp": "00:11:12", "start_second": 646, "end_second": 672, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=646s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "make the number of parameters bigger the models continue to get bigger the blue line is just the minimum error and the red is just a scatter plot of models at a particular model size I'm just taking the minimum error of the reds yeah I mean like I'm not saying that they're all true I'm just saying you could do a pull request over here I go fine I say I can call Fernando get him on this one this is even better what you do here is just remove models and so now you see the trend there's no AlexNet or VGG on here they kind of", "start_timestamp": "00:11:12", "end_timestamp": "00:12:29", "start_second": 672, "end_second": 749, "url": 
"https://www.youtube.com/watch?v=NTz4rJS9BAI&t=672s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "ruin that but they're just up here somewhere the thing is like the curve you really care about is this lower envelope and the same thing kind of happens on ImageNet you just keep making them bigger well you know we have a log axis on the X and linear on the Y so semi-log this I pulled from a paper by some Google folks because only Google folks would think it would be fun to train something with 600 billion parameters million million million ok that's reasonable 600 billion was stupid but six hundred million perfectly", "start_timestamp": "00:12:29", "end_timestamp": "00:12:55", "start_second": 749, "end_second": 775, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=749s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "reasonable anyway so that's that one it keeps getting bigger here's your visa this is from Misha's talk yesterday well I made it but it's fine it's the same plot right we see the double descents these are now random feature models where we just keep adding random features and see how the accuracy goes and even though it does go up it kind of comes back down as you make the models bigger and bigger am I just interpolating so this one is just interpolated and this case is minimum Euclidean norm", "start_timestamp": "00:12:55", "end_timestamp": "00:13:38", "start_second": 775, "end_second": 818, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=775s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "although what's interesting is that I like this plot better 
is that if you don't do minimal Euclidean norm but you just go use ridge regression so you allow yourself a tunable parameter now that dip goes away no it just gets better oh so I don't know all right so this is not minimum norm least squares this is ridge regression and then that bump goes away yeah you've picked the best ridge you pick the best ridge it's not a constant I picked the best one for each and you're gonna see why in a second this is the best one for each it's tuned", "start_timestamp": "00:13:38", "end_timestamp": "00:14:20", "start_second": 818, "end_second": 860, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=818s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "with the number of features like for each the best value goes down at every number so yes so sorry in this case I'm not taking the best one from here I'm not taking the best of these two it is just if I tuned this what's the best I can do just on this one so here is a paper something I stole from Peter this is from a NeurIPS actually it was NIPS at the time tutorial from 20 years ago here he was doing boosting so what do you see here well first of all note the semi-log axis there we go semi log X", "start_timestamp": "00:14:20", "end_timestamp": "00:14:55", "start_second": 860, "end_second": 895, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=860s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "on the you know that's pretty neat and you add parameters I mean boosting right you increase your model size with every step and so now the models get bigger and bigger and bigger and the test error keeps going down I think that was interesting right so that was small data machine learning but I still 
see the same thing this is another really interesting one which I pulled from a CACM article by Bell and Koren describing how they won the Netflix prize and they saw exactly the same thing that they have different kinds of", "start_timestamp": "00:14:55", "end_timestamp": "00:15:25", "start_second": 895, "end_second": 925, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=895s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "models but they kept making their models larger again we have semi log X and linear in error and you see this thing kind of continue to go down as you make your models bigger and bigger so I think there is something interesting there so there you're seeing two things one making the model really huge doesn't you know it has a ton of capacity maybe you're controlling it with various kinds of regularization of some form or another but it just seems make it bigger worry about that later", "start_timestamp": "00:15:25", "end_timestamp": "00:15:53", "start_second": 925, "end_second": 953, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=925s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "does seem to be a good take home and the other thing is that you see significant diminishing returns right I mean if you have a log x axis I mean this doesn't mean that eventually you have to give up even Google eventually has to give up so I'm not sure that this is necessarily how to get to that irreducible error we'd like to get to no no no no no it's not a typo not a typo a hundred thousand million that's like Carl Sagan it's a hundred thousand million parameters that's a big deep net I didn't make this plot they don't get", "start_timestamp": "00:15:53", "end_timestamp": "00:16:55", 
"start_second": 953, "end_second": 1015, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=953s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "their ice log these mean different things you have to read the caption that's not here sorry I maybe should've edited these things do not mean don't they're in the caption whatever those mean I don't know what those numbers are these number factors are you sure paper [Laughter] anyway so we could look it up and think CACM it's cool they did a lot of good work let's go back to this one so look here's what I would like to say like there are crazy diminishing returns it does seem making the model bigger for", "start_timestamp": "00:16:55", "end_timestamp": "00:17:46", "start_second": 1015, "end_second": 1066, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1015s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "a fixed holdout set that we fix in time for the history of the universe does make that test error go down but that's also not generalization error alright if you do a holdout split or you take a train set and you take a holdout set and then you just fix that holdout set forever maybe what's happening here is you have these giant models with enough fluctuations in them that you could actually leak a lot to the test set and overfit to this one holdout set that you fixed forever so this leads to a question this", "start_timestamp": "00:17:46", "end_timestamp": "00:18:14", "start_second": 1066, "end_second": 1094, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1066s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "leads to a question 
where we'll spend most of the rest of the time of the talk it's perfect I mean I know we have planned it I'm good so yeah that's the rest of the talk so maybe what's happening here is you make these models really big and that allows you to overfit on this one holdout set and so there's only one way to check right which is make a new holdout set okay is there a better way I don't know but that's what we did that's what we did we made a new holdout set let me explain how so here is progress on CIFAR-10", "start_timestamp": "00:18:14", "end_timestamp": "00:18:45", "start_second": 1094, "end_second": 1125, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1094s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "over time so if you just use raw pixels and do linear classification you get thirty seven percent accuracy I just realized we've switched from error to accuracy hopefully it will be clear from context it's the same accuracy now we have 97.1% in 2017 and 2019 what is it Ludwig knows these numbers ninety-nine point what nine point zero so we got you know there's still time to write more ICLR papers everybody we've got ten more ticks so this is like our deep revolution here all right this is the deep revolution happened in", "start_timestamp": "00:18:45", "end_timestamp": "00:19:20", "start_second": 1125, "end_second": 1160, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1125s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "2012 and you know we just keep making progress make them bigger get them more capacity make these models really large the shake-shake models or you know wide ResNets also or just ones right I try to get them to fit brushes is this overfitting right is this overfitting right because this early 
part we can match with shallow methods yeah actually you can even get to like eighty I can't remember the number but there's some work by Sham Kakade, Alekh Agarwal, Greg Valiant and Le Song from like 2013 so if you just do random", "start_timestamp": "00:19:20", "end_timestamp": "00:19:49", "start_second": 1160, "end_second": 1189, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1160s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "features you can get to about 85% accuracy using just dumb random features shallow stuff can get there if it's big so a large shallow model can get to about 85 and so the question is is all we're doing here just overfitting to the test set by graduate student descent so we're gonna check and we're gonna check by building a new test set now what does that mean so it turns out that the CIFAR-10 creation process is super well documented and it was documented by the folks at Toronto who made it in the first place", "start_timestamp": "00:19:49", "end_timestamp": "00:20:24", "start_second": 1189, "end_second": 1224, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1189s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "and so there there is a lot of detail about how they did it and in particular where they got their images to begin with and that comes from one of my favorite data sets ever it's called the 80 million tiny images the tiny images data set and it was curated by Antonio Torralba, Rob Fergus, Bill Freeman and they made beautiful like scatter plots like all the images on the internet and these like mosaics and just seeing like how everything varies it's very cool kind of looking at what kind of things are out there that you can get", "start_timestamp": "00:20:24", "end_timestamp": "00:20:51", "start_second": 1224, "end_second": 1251, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1224s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "to get off the internet in 2008 so the reason why there were thumbnails is because they wanted to make something that you could store and then do all sorts of visualization and studies with so from these thumbnails they subsampled 60,000 using a process that was very well detailed superbly well detailed with human not experts but human laborers and they tried to get down to these ten classes so can we get iid resampling well you know out of 80 million images we only took 60,000 out so the hope is maybe we could sample", "start_timestamp": "00:20:51", "end_timestamp": "00:21:28", "start_second": 1251, "end_second": 1288, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1251s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "some more maybe not even that many more how many do we need to actually get something that we'd believe so I think Ludwig did his error bar calculation that said two thousand that's what he wanted 2,000 new ones and so in work that I did not want to get involved with Ludwig, Becca, and Vaishaal labeled I say mostly just Becca right for this one Ludwig and Becca labeled tens of thousands of images in in the tiny images data set and we got a new test set of size 2000 and again there are fine details about like what counts as a boat what counts as a", "start_timestamp": "00:21:28", "end_timestamp": "00:22:02", "start_second": 1288, "end_second": 1322, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1288s", "title": "Training on the Test Set and Other Heresies", "thumbnail": 
"https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "car and this kind of thing we drive to match as closely as possible so what did we see okay so the first thing we see is we take vgg 16 everyone's favorite network all right I guess anyway this is a big one and what we saw is a huge drop in accuracy right there's an 8 percent drop in accuracy from the first test set to the second test set which is much bigger than you would expect by a 1 over root N and some some reasonable capacity that's big mean this is we put a little confidence interval about where we should be this dashed line is the", "start_timestamp": "00:22:02", "end_timestamp": "00:22:31", "start_second": 1322, "end_second": 1351, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1322s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "reproducibility you'd hope that the confidence interval would hit the dash line that would mean okay there's you know these we'd say hey we're not having been overfitting at all well that's great that's great so it's clear what's gonna happen cuz right 85.3 was right in the ballpark of what people had seen you could do with just showering clearly what's gonna happen let me just I don't know if anybody's read this paper yeah but clearly what I thought would happen what I put my money on was that yeah now the shallow learning stuff will just be", "start_timestamp": "00:22:31", "end_timestamp": "00:23:00", "start_second": 1351, "end_second": 1380, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1351s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "here and we'll just see that we'll just get a saturation so we've just been adapting and we'll just see a saturation up here and everybody will be equal 
and right and that was like this big drop and so here here here's our random features that's not what we saw at all not what we saw at all I lost money I didn't lose money because all everybody else got the same it's good there was no house in this case right bigger drop 12% drop so that went from eighty five point six to seventy three point one bigger job present and the", "start_timestamp": "00:23:00", "end_timestamp": "00:23:35", "start_second": 1380, "end_second": 1415, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1380s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "shake-shake model only had a 4% drop this one that had super-high Agra so we saw exactly the opposite of what our hypothesis said right our hypothesis suggested that maybe the big models were able to just fluctuate themselves to overfit to there's one particular holdout set and if we draw a new holdout set we'll see some sign that they had adapted to it but we saw the opposite in fact the ones that have the better test error on the original test set have a better accuracy on the new testament oh did you have the original weights for", "start_timestamp": "00:23:35", "end_timestamp": "00:24:09", "start_second": 1415, "end_second": 1449, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1415s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "these models or did you have to retrain them we did well there was that we did retraining in the paper but in this case this was just original weights this is some summary training but again this is why github as an experimental resource is so powerful people put post the weights in the repos saves you tons of time so it actually is the contrary it's very you don't even need a GPU right thank you you can just download stuff 
that people have already done change a parameter here do some kind of experiments you already picked out like got a big", "start_timestamp": "00:24:09", "end_timestamp": "00:25:00", "start_second": 1449, "end_second": 1500, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1449s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "problem here right so there there was clearly this is not an iid resampling it's not iid resampling because Ludwig or not we don't know who the Toronto folks were probably some paid undergraduates and they're not that kind of like they have never been to Toronto right yes so there is a there is a yeah so it's not perfectly iid right this is a great question but we'll come back to it let's come back on the empirical training possibly yeah but no you can't tell a difference if you train a classifier do you sample", "start_timestamp": "00:25:00", "end_timestamp": "00:25:44", "start_second": 1500, "end_second": 1544, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1500s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "test what why features oh just individual features to see if you see significant things we didn't do we did not do any of this yeah but let me keep going well then they're not from the same distribution right Marshall over it yeah we would hope so right I mean that you could probably check because there were some sub-selected to begin with right so if too many were sub-selected then they're not going to be iid and that's something I think we could probably check that part didn't involve humans so that part", "start_timestamp": "00:25:44", "end_timestamp": "00:26:26", "start_second": 1544, "end_second": 1586, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1544s", "title": "Training on 
the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "did not involve humans but what he's asking is just that probably the only thing that changed probably again being generous to us the only thing that's changing here is the labeling function not that of the human the thing that's producing the Y not the thing that's producing the X you just want to know if you could interleave the two test sets or sorry if you can interleave some of the stuff into the training data and actually the fact that better models", "start_timestamp": "00:26:26", "end_timestamp": "00:27:43", "start_second": 1586, "end_second": 1663, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1586s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "have a smaller drop suggests that maybe this boils down to the test set and it could be I mean what well so sorry what so what were you what are you suggesting so this is actually really important this is really important the fact that you've already moved on from the fact that there's no adaptive overfitting is shocking look there's no adaptive overfitting this is rule one don't look at the holdout set more than once that's rule one we look at the holdout set fifty thousand trillion times at Google", "start_timestamp": "00:27:43", "end_timestamp": "00:28:15", "start_second": 1663, "end_second": 1695, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1695s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "every day fifty thousand trillion times and yet it doesn't matter the better you do on this holdout set this one 
stupid fixed holdout set like it doesn't matter I think really before we talk about why the drop happens just the fact that we don't see adaptive overfitting it blew my mind I did not think that would happen I did not think that was or whether you see the signal of adaptive overfitting in some others and so those are two different things which so far from your results", "start_timestamp": "00:28:15", "end_timestamp": "00:28:47", "start_second": 1695, "end_second": 1727, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1695s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "wait what does that mean might be your video yeah I bet with Zico there's no adaptive overfitting and this gap is only because of the difference in the distribution yeah the thing is to be fair it's gonna be hard to get it's gonna be well I think so but right listen one second what is that I think it's gonna be hard to tell from CIFAR-10 and I think what's interesting is if we go to this other data set that's more interesting anyway I mean maybe we can a little bit but I think it's much more competitive yeah", "start_timestamp": "00:28:47", "end_timestamp": "00:29:37", "start_second": 1727, "end_second": 1777, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1727s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "yeah I'm getting there maybe I might get there we'll see how a lot of people want to stay through lunch yeah tell you it's not his hand up for a second here go ahead a quick proposal we can have like 50k of this new data set and then test on all of the tested models and I don't know if we do get at least I I don't think we get too concerned about 2000 images and some of 
the 50,000 was basically exposed so if you wanted to get another 50,000 it would actually be different", "start_timestamp": "00:29:37", "end_timestamp": "00:30:16", "start_second": 1777, "end_second": 1816, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1777s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "I think we could probably do more than 2000 but I'm not sure what the limit is yeah yeah that's actually in the paper you can see at the end but but also also the thing is I only have the two Becca and Ludwig and they have limited cycles you know this is a problem it's a problem it's a problem and certainly certain and I lost both of them that's really sad actually that's really sad yet both well I know Ludwig's staying sorry good okay that was okay no first thing we have to do though Becca she's graduated this year okay", "start_timestamp": "00:30:16", "end_timestamp": "00:30:56", "start_second": 1816, "end_second": 1856, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1816s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "also who cares about CIFAR-10 CIFAR-10 is this fun little thing that we all kind of like we can train it and it certainly it's interesting because it's like the first non-trivial thing we can get to but you know what captured people's imagination was this imagenet data set questions do we do that one now that's harder because that's bigger there's more labels there's more and even like even here like this 1.2 million training images were not labeled in the lab they were labeled using Mechanical", "start_timestamp": "00:30:56", "end_timestamp": "00:31:21", "start_second": 1856, "end_second": 1881, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1856s", "title": "Training on 
the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "Turk that's opportunity right so now we could try to reproduce this data set and this procedure using Mechanical Turk which makes it much more scalable and we can make much larger who can do much larger experiments so how do we get the xiety resampling of image that in the water but it's not as surprising as you say given the fact that we can we can drive particular models to zero laws something which we thought was impossible and that's okay this is like the community drive in there it's like community doing or repeating", "start_timestamp": "00:31:21", "end_timestamp": "00:31:56", "start_second": 1881, "end_second": 1916, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1881s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "on the test and it's kind of a same effect no I don't think so like why do it why don't we see a plateau anyway that's right that's right that's why anyway let's this is more interesting Misha the zipper tempting is so boring that meets let me show you this one this is much more interesting much more interesting I promise I promise much more interesting as if you want we can go into some more nitty-gritty with this one I have far more that I'm gonna be able to get through which is fine it's about no equations I'm no proof assemble class", "start_timestamp": "00:31:56", "end_timestamp": "00:32:38", "start_second": 1916, "end_second": 1958, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1916s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "like this so this this is kind of the interface that you would see if you were a mechanical turk being paid to label images on the 
internet which is basically how all of our everything is made to work now inside all the big tent companies they hire people and have them watch horrible disturbing content and flag it for their their algorithms is great wonderful world we live in so hey so we have so here's our images right these are supposed to be bow everybody can read the top a bow is actually is not quite clear a weapon for", "start_timestamp": "00:32:38", "end_timestamp": "00:33:07", "start_second": 1958, "end_second": 1987, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1958s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "shooting arrows composed of a curved piece of resilient wood with a top cord to propel the arrow and the task is click on every single image that has a boat that's what the Turker is asked to do click on every single image that has a bow so for example these three have both these ones don't actually these were three that were adversarially placed in here to make sure that the people that that the turgor z' weren't cheating by the so what we were eight what we did was we actually flag all of these nuke so we come in reproducing and", "start_timestamp": "00:33:07", "end_timestamp": "00:33:43", "start_second": 1987, "end_second": 2023, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1987s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "reproducing image that we tried to reproduce the query that was used to collect images reproduce the things that were allowed like including the date range from flickr reproduce the way that these things were labeled and we also through the old test set in here to see how those were labeled by these Turner's which is much much nicer than what we're able to do with with safar tender and what's interesting 
here is that there's super high variability in what everybody's like not everybody says the same thing obviously there's gonna be", "start_timestamp": "00:33:43", "end_timestamp": "00:34:11", "start_second": 2023, "end_second": 2051, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=2023s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "disagreement right anybody who works in crowdsourcing knows this right you have the people who do these tasks they work too quickly or there's just noise right so for example this bow was selected a hundred percent of the time by every Turker this one was 70% it's our heroine from Brave I mean there's a bow there it's a little bit hard to see on this blown-up screen right so there's a lot of reasons it's not center frame so there's a lot of reasons to miss it this one now becomes a metaphysical", "start_timestamp": "00:34:11", "end_timestamp": "00:34:41", "start_second": 2051, "end_second": 2081, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=2051s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "conversation about what is the bow good friend Konrad Kording says yes so I mean a neuroscientist so you know I don't know everybody has a good opinion about what that is and this one's wrong and then 20% of the people the boat it's a boat it's not the bow we want right ok so I think that's that's our issue right so this is not the bow we want and so we actually had like the histograms for every class for every class a thousand classes we had histograms of the selection frequencies and so when we actually sampled our new test set we", "start_timestamp": "00:34:41", "end_timestamp": "00:35:14", "start_second": 2081, "end_second": 2114, "url": 
"https://www.youtube.com/watch?v=NTz4rJS9BAI&t=2081s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "tried to match the statistics of the label from the original set with the new set also Becca and Vaishaal looked at every single image and made sure it was correct in the new test set which is clearly creating a distribution shift yes Alex Berg told me but I'm not anyway it's not true oh yeah you could go look that's actually the other amazing thing about all these data sets how many people have ever looked at the images in ImageNet I mean you don't need to you just watch the curve go down but like you know it's", "start_timestamp": "00:35:14", "end_timestamp": "00:35:52", "start_second": 2114, "end_second": 2152, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=2114s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "like actually especially though it's like we kind of have decoupled ourselves from actually any of the domain expertise it's actually quite entertaining what you do right ok so punch line right we see exactly the same thing we see exactly the same thing we saw in CIFAR-10 a bigger drop technically speaking it's like a 10 percent drop at the top here for the best ones but we still see a positive slope so the models that fared better on the original", "start_timestamp": "00:35:52", "end_timestamp": "00:36:19", "start_second": 2152, "end_second": 2179, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=2152s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "set fared better on the new set but a little 
bit the slope is not as steep as it was for CIFAR-10 but we definitely see a positive slope in our fit but we do still see the significant drop so how is the new labeling done you said you also labeled the old one ultimately we labeled the old imagenet just to match the selection frequencies so okay was there a distribution shift already you already pointed out that Ludwig and Becca are not the same and then we matched their sampling frequencies but", "start_timestamp": "00:36:19", "end_timestamp": "00:36:58", "start_second": 2179, "end_second": 2218, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=2179s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "you also in the process you're also saying we sampled from we have a bigger pool we built our new test set from a bigger pool of images and we sampled from that pool to match the statistics of the old test set oh just the statistics yes no no no there are no old images in here really yeah yeah yes and they're in the same neighborhood yeah you can measure how well the old labels predict those images take the predictor defined by the old labels yeah", "start_timestamp": "00:36:58", "end_timestamp": "00:37:44", "start_second": 2218, "end_second": 2264, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=2218s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "accuracy on old images it doesn't it has the old validation yeah it's noisy it's noisy no man it's noisy it's still noisy again so the orange right here but even if it doesn't matter it says you're taking the label as it appears in the training set every image on 
these the label yeah it was generated using some procedure yeah a measure of how well those predictions predicted the label that was assigned by your process that's not how these sets are built at", "start_timestamp": "00:37:44", "end_timestamp": "00:38:37", "start_second": 2264, "end_second": 2317, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=2264s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "all hey Louie let me I got that I've got the okay it's okay I got the forest but I know I'm also used to this it's a procedure it's a procedure so it's also true right that none of these data sets are made by labeling images again like what happened right you show people a query and then you test correctness as to whether or not they go in or out it's a weird it's a weird process the other thing that's really interesting by the way the other thing that's really interesting we take for granted the", "start_timestamp": "00:38:37", "end_timestamp": "00:39:13", "start_second": 2317, "end_second": 2353, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=2317s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "question you asked the Turkers is which of these images contains at least one type of object bow that is not the classification problem that everybody is now competing against each other for the question we have now is just label the damn image it has one label and that label somehow comes out and we know that like why is this a bow I mean this contains a bow but this is the the I forgot her name what's her name she's from Brave okay it's not Elsa that's Frozen I know that one yeah great so again like I asked you what is that", "start_timestamp": "00:39:13", "end_timestamp": "00:39:52", 
"start_second": 2353, "end_second": 2392, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=2353s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "image you it you could describe so much it's Elsa in a forest with her bow on the branch here anyway there's like I'm sorry Meredith excuse me anyway right so it's like a much more complicated thing and what we evaluate is very different let me do I I didn't make the original image that day is that so I I just illusion equals ask the question was we asked exactly the same question that the imagenet people asked I'm just saying it's the exact same question oh my god maybe you should redo it $1,000 it's really expensive it's really", "start_timestamp": "00:39:52", "end_timestamp": "00:40:33", "start_second": 2392, "end_second": 2433, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=2392s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "expensive it's so at anytime you want to say you want more right I mean just note that this is a very expensive project and thank you to Microsoft for Darren's leave funding part of the lately anyway can I just let me just blaze for to cut late so they run Mont I have to like two more examples do you want I mean everyone's goal oh yeah okay okay here two things I do think and yeah you could fight with me maybe try to find more evidence that we do not see adaptive overfitting or this is not obvious but we do see significant", "start_timestamp": "00:40:33", "end_timestamp": "00:41:04", "start_second": 2433, "end_second": 2464, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=2433s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "fragility 
from distribution shift and the distribution shift is here is just humans disagreeing with each other barely I mean just so sorry Simon humans disagreeing with each other it's just that there were much chemikal Turkish in 2011 and now they're mechanical turkeys and 2019 and that's different people most likely I don't think people Turk for that long oh geez no no retraining no nothing is just take the thing take the weights don't have to retrain because otherwise that would be really expensive again", "start_timestamp": "00:41:04", "end_timestamp": "00:41:36", "start_second": 2464, "end_second": 2496, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=2464s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "unless they were tearing with the dawn bench some of them are really slow you know some of them come from Google so they're really really slow and so you have like these things that are huge but yeah the trend is the trend is so the question is can we find more evidence I just want present more data and then I'll stop go ahead I know man well it's the small that it feels like it's the small distribution shift and just manifested in a huge error so I guess the other way I'd say that is small distribution shifts and", "start_timestamp": "00:41:36", "end_timestamp": "00:42:02", "start_second": 2496, "end_second": 2522, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=2496s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "seem to propagate into large like imagine what happens in reality right I mean this is like one of these so then I say I mean this is trying as hard as we possibly could to match the statistics maybe we could've tried harder it's it's yeah it's a small distribution shifting reduces the large error and we should be 
worried about that I'm more worried about that than the diminishing returns reversing but what's also surprising is that you would imagine that the fragility increases as it were it doesn't right so", "start_timestamp": "00:42:02", "end_timestamp": "00:42:30", "start_second": 2522, "end_second": 2550, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=2522s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "clearly all we have to do is go from 600 million parameters to 600 billion parameters and we'll be up on the line and then it'll be fine right I guess then we'll have self-driving and full self-driving will happen once we get up there but um yeah I don't know I mean I don't know I don't know how to extrapolate this any further because even at Google they've run out of resources actually even iid no and actually I think the test set okay validation set has been folded into the", "start_timestamp": "00:42:30", "end_timestamp": "00:43:07", "start_second": 2550, "end_second": 2587, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=2550s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "training set and so on you basically get the same accuracies yeah evidence video yeah you want lunch you don't want to see plots man you want to fight with him I have a date bar in my bag right so this is this is a cool one same time this is a curated data set from 2015 by the original ImageNet folks right here there were like 4,000 videos but then they just rendered them down as 1 million JPEGs and then presented to you as JPEGs and each of these corresponds to some classes it's 30 classes a subset", "start_timestamp": "00:43:07", "end_timestamp": "00:43:51", "start_second": 2587, "end_second": 2631, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=2587s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "of ImageNet so we know exactly where these images came from and it was supposed to be for video but you could just use this for detection and classification and what we did is we invented a metric a fairly reasonable metric which is you treat each video as a set of similar images and then for every frame you pick a k and you look in the neighborhood of that k and you see if you could find one where you get a misclassification this just allowed us to prune through that data set pretty quickly so remember that", "start_timestamp": "00:43:51", "end_timestamp": "00:44:21", "start_second": 2631, "end_second": 2661, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=2631s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "these are all mostly 30 frames per second videos so 10 frames is about a third of a second ok so so here's some cool pictures you see these kinds of pictures on Twitter all the time where we go from a domestic cat and within 10 frames it's called a monkey oh yeah I think it's cuz now I see a monkey this one I don't see it goes from bird to domestic cat I guess that's what is he eating I don't know this one goes from turtle to lizard this one goes from dog to horse what's amazing is these images to you look the same I mean I", "start_timestamp": "00:44:21", "end_timestamp": "00:44:57", "start_second": 2661, "end_second": 2697, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=2661s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} 
{"video_id": "NTz4rJS9BAI", "text": "have to tell you to squint to see why they're different right again within a third of a second of each other and of course Jason can't see you that's right to Jason that jason has a good filter that makes them all look to say they all are cats and we made a lot of effort to like make sure when you're going through here that the kinds of things we were pruning when we actually get to this next plot were like deep so the ones we saw before would look really similar these look really different right these are very different we prune those they", "start_timestamp": "00:44:57", "end_timestamp": "00:45:24", "start_second": 2697, "end_second": 2724, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=2697s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "did not go into the data set when we were doing curation so we did this again we I didn't do this Ludvig if I saw I saw Dave we incorporeal Ike like we recruited people from CMU for this I mean it's like a lot of work to label and we see the exact same plot man we see the exact same plot so that you yeah again like just using this metric you see exam this big shift this again it's again it's a small distribution ship is within four point three frames this is 10 this is PM 10 this is in 0.3 seconds of each other you", "start_timestamp": "00:45:24", "end_timestamp": "00:45:58", "start_second": 2724, "end_second": 2758, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=2724s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "see it's a small distribution shift and again everything you don't see adaptive overfitting you do see some sensitivity to distribution shift let me skip this go I'll talk about the end finally kaggle two more things Kaggle CAG will 
released a nice meta-analysis, or rather a meta-dataset, of all their competitions, with a lot of information, not all the information we would have liked but a lot of information. Everybody knows on Kaggle you have a public and a private leaderboard, and the nice thing is almost in every competition here those are iid", "start_timestamp": "00:45:58", "end_timestamp": "00:46:26", "start_second": 2758, "end_second": 2786, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=2758s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "splits from each other. This is cool because now what we should see is that if you train on the public leaderboard and you somehow evaluate on the private, you should just see clustering around the y equals x, and that's exactly what we see, clustering around y equals x, and it's not just on two, it's on, how many did we do, look I forgot the number, 117, I don't have all of them in here, but the really key thing in this case, because you have iid splits, is you just see clustering around the y equals x. So now, so again", "start_timestamp": "00:46:26", "end_timestamp": "00:46:57", "start_second": 2786, "end_second": 2817, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=2786s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "evidence that it's distribution shift, because these are iid, the evidence here is that the distribution shift is really what's causing this, ok, I'll stop there. Um, oh sorry, the most important slide of course is MNIST, right, the data set to end all data sets, and Léon Bottou wrote this beautiful oral history of the MNIST data set and also managed to reconstruct a bunch more examples, and it's a very nice fun read, and again we see the distribution shift again between these two tests
we were doing accuracy", "start_timestamp": "00:46:57", "end_timestamp": "00:47:41", "start_second": 2817, "end_second": 2861, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=2817s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "so it should be negative right yeah yeah I didn't want to I just cut this out of their paper I should already made it that all right so I'll stop there it sounds like we've seen this before we know this we knew this was true in boosting the interpolating training data we knew that was fine and it did seem to always make you like building bigger models seem to have better test error making your models big doesn't hurt but definitely does seem to be some other issues that are going to really be the pressing ones that we need to deal with", "start_timestamp": "00:47:41", "end_timestamp": "00:48:07", "start_second": 2861, "end_second": 2887, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=2861s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "moving forward I'm not sure Jonathan and I will talk about how we teach this to our undergrad machine learning class is very gently I think so how but I do think that for us the researchers and and for anyone in industry the big issues the bigger issues are this distribution shift is a real dangerous thing right if you're putting it in a car or you're making health care decisions and like that we I can show you if you're interested afterwards I could show you a paper which which demonstrates that actually a lot of the", "start_timestamp": "00:48:07", "end_timestamp": "00:48:36", "start_second": 2887, "end_second": 2916, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=2887s", "title": "Training on the Test Set and Other Heresies", "thumbnail": 
"https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "NTz4rJS9BAI", "text": "there's huge distribution shift effects in medic with all of the radiology there's a deep learning for radiology huge things where you're basically overfitting to the machine that took the image so that's dangerous if that's a life-and-death situation we already know that Tesla cars kill people what's amazing about Tesla cars two people have died driving under trucks with their autopilot on in Florida and what's nice about that is that machine learning generalization you should never make the same twice I showed you your corner case and", "start_timestamp": "00:48:36", "end_timestamp": "00:49:04", "start_second": 2916, "end_second": 2944, "url": "https://www.youtube.com/watch?v=NTz4rJS9BAI&t=2916s", "title": "Training on the Test Set and Other Heresies", "thumbnail": "https://i.ytimg.com/vi/NTz4rJS9BAI/maxresdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "you haven't heard this yet so I'm Gordon our facilitator is Florian so so today the paper we're going over is unsupervised data augmentation the primary author qi j xie and several other co-authors from google brain and carnegie mellon this is we're gonna go over motivation so deep learning is sorry typically requires a lot of labelled data in order to succeed and and label data is very expensive so that's one of the main motivations for this paper some of the less costly ways of applying improving deep learning are", "start_timestamp": "00:00:00", "end_timestamp": "00:00:40", "start_second": 0, "end_second": 40, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=0s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "using unlabeled data which is much more abundant and and easy cheaper to accumulate and data augmentation which basically stretches your supervised labeled samples further and as well are we good with 
the sound? Data augmentation has mostly been applied in the supervised setting, and so we want to see if it can be applied in the unsupervised setting as well. The main contributions, which we'll get into, are applying state-of-the-art data augmentation to semi-supervised learning, a training technique called TSA, training signal an", "start_timestamp": "00:00:40", "end_timestamp": "00:01:19", "start_second": 40, "end_second": 79, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=40s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "nealing, that effectively prevents overfitting when you have much more unsupervised data than supervised data, and they achieve performance improvements on multiple text and vision benchmarks, and then they also introduce a method to even out the prediction distributions across classes for unlabeled and labeled data. Semi-supervised learning, I hope people online know what this is, I probably won't explain it again, so we'll get into smoothness enforcing. OK so this is one approach to semi-supervised learning", "start_timestamp": "00:01:19", "end_timestamp": "00:01:58", "start_second": 79, "end_second": 118, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=79s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "and the general idea here is you try and regularize a model's predictions to be less sensitive to small perturbations applied to the input data, and the input data can be labeled or unlabeled, and when we say perturbations we're basically talking about adding some sort of noise to the input samples. And so yeah, given a sample and an augmented or perturbed sample, you want the model's predictions to be similar on both, I think that's what I just said, so enforce the predictions to be similar",
"start_timestamp": "00:01:58", "end_timestamp": "00:02:41", "start_second": 118, "end_second": 161, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=118s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "and in general you want a good model should be invariant to small perturbations on this input data that don't actually change the nature of the example and yeah so data augmentation is a technique to boost your training data size and the diversity of it so the general idea is you're augmenting in some way again adding some noise to your input samples so that you cannot that you can both get more training data and have more diverse training data and I guess what we'll see some examples of what diversity means so yeah basically", "start_timestamp": "00:02:41", "end_timestamp": "00:03:28", "start_second": 161, "end_second": 208, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=161s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "you apply some sort of transformation you have a transformation function the apply to your input data and in data augmentation there's always this trade-off of diversity and validity that's being managed so so yeah you want to create novel and realistic training samples without augmenting them so much that you change their underlying inherent label so diversity it means growing the the reach of your data set or making your data set more broad and validity is making sure that you're not blowing up your samples so much that", "start_timestamp": "00:03:28", "end_timestamp": "00:04:06", "start_second": 208, "end_second": 246, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=208s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", 
"text": "they are no longer recognizable or they're not related to the label that they should have assigned any questions so far so this this is what supervised data augmentation looks like here so here Q is is our transformation function so you can see it's conditioned on an input example X and then X hat is the Augmented data sample and so so basically we're trying to minimize the log likelihood the negative log likelihood of the true ground source ground truth label which is y star given an Augmented sample X hat and and so", "start_timestamp": "00:04:06", "end_timestamp": "00:04:58", "start_second": 246, "end_second": 298, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=246s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "yeah you can see this as basically an additional training signal that's being sent to the objective function that that is hoping to yeah that is that is just using the the Augmented samples and then very similarly or actually slightly differently unsupervised data augmentation so this is when you have unsupervised unlabeled data this is a common way to to use that data for data on plantation you can basically take examine the output distribution prediction probability distributions for an unlabeled sample so here that's X and", "start_timestamp": "00:04:58", "end_timestamp": "00:05:45", "start_second": 298, "end_second": 345, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=298s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "then an Augmented unlabeled sample X hat again and and you've got the same transformation function Q that you had in the supervised setting and so really what you're trying to do is in this case minimize the divergence or the the difference between these two probability distributions so you're trying to 
normalize or regularize the predictions on the Augmented samples to have similar class distributions to to the unag mented unlabeled data does anyone have any questions here doesn't minimizing this minimize they", "start_timestamp": "00:05:45", "end_timestamp": "00:06:33", "start_second": 345, "end_second": 393, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=345s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "are therefore as well minimize this minimize the other in this case I'm sorry maybe I misspoke earlier in this case these are X's all labeled labeled samples here ordinary boys why is so why here is not the ground truth label y here so this is y star that's the ground truth annotated label and here we just have the the output prediction distribution or from the model for both the unlabeled and the Augmented unlabeled sample okay what's the difference oh yeah good question so theta it is implying that these are parameters that", "start_timestamp": "00:06:33", "end_timestamp": "00:07:28", "start_second": 393, "end_second": 448, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=393s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "are being updated so the gradient is passing through these I'm pretty sure that the idea with theta tilde is those parameters are frozen so they're they're not they're not updated in the objective function data here yeah yeah so this is the instead of thing yeah these would be these are two different settings but but yeah in both cases they're the model parameters question here is the transformation function so so the augmentation function basically the way that you're adjusting your input samples so the first version sees X's or actual", "start_timestamp": "00:07:28", "end_timestamp": "00:08:31", "start_second": 448, 
"end_second": 511, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=448s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "the data that you have that you have labels for in this one yeah so in sorry in this case the unsurprised a documentation you're assuming that you have so yeah I guess you can see here you've got a so in U and U is an unlabeled set whereas in the first one you have x and y star in in a labeled set so yeah there's no labels in this data at all so this this actual approach there are a few different ways you can do this this specific approach of using the KL divergence between the unlabeled or started the Augmented and", "start_timestamp": "00:08:31", "end_timestamp": "00:09:08", "start_second": 511, "end_second": 548, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=511s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "unadmitted was from a paper in 2018 VA t believe was the mall so so they borrowed that approach here and really the main difference they applied in this case is the the transformation function so what they did with Q and so that's what we're going to talk about now so this idea of targeted data augmentation so so over conventional methods such as adding Gaussian noise or affine transformations perturbations like that if there are a few advantages applying targeted augmentation so one is that they give a valid and realistic perturbation so so", "start_timestamp": "00:09:08", "end_timestamp": "00:09:53", "start_second": 548, "end_second": 593, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=548s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "the idea is when you apply some of these state-of-the-art data 
augmentation methods the the output augment example it is still very much in in the same distribution as the the sample it was transformed from so so these are sort of realistic augment augmented examples whereas when you just apply some random Gaussian noise it can often make the the data point if you apply too much make it sort of very unrecognizable and not realistic so so not really as we say valid it also applies a diverse perturbation so again if you want to use", "start_timestamp": "00:09:53", "end_timestamp": "00:10:41", "start_second": 593, "end_second": 641, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=593s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "those other methods just adding Gaussian noise you're usually not going to change you're not going to be able to change your input samples significantly enough and so you just end up with sort of local changes to the samples whereas with targeted augmentation you can really generate diverse diverse samples that are much more useful and in growing your training set and then as well we'll see what some of the methods they had a targeted targeted inductive bias so so yeah you can actually apply approaches that are optimized for the tasks that", "start_timestamp": "00:10:41", "end_timestamp": "00:11:26", "start_second": 641, "end_second": 686, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=641s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "you're that you're solving in the particular data set and so so we'll see we'll see an example of that in the augmentation strategies they applied yep so this is the the training set up that they applied so on the left hand side we have the labeled data and it's so split up to x and y star part of me x is fed in through m that's the model and then 
we just have the standard supervised cross-entropy loss being calculated here, that's feeding up into a final loss. On the right hand side is where they take the unlabeled data and they do two", "start_timestamp": "00:11:26", "end_timestamp": "00:12:16", "start_second": 686, "end_second": 736, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=686s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "things. So one, they feed it through the same model, and then as well they take that sample, the unlabeled sample, and they apply some augmentation to it, so we'll get into all these different augmentation strategies, so that results in X hat, so this would be their Q, the transformation function. So they get X hat and then they feed that through the model as well, and then they take the output of the model on X and on X hat and they feed that into the unsupervised loss function, and that's the KL", "start_timestamp": "00:12:16", "end_timestamp": "00:12:56", "start_second": 736, "end_second": 776, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=736s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "divergence that we saw here, so basically they just take a weighted sum of the supervised and unsupervised loss and combine those together. Question: yeah, what is the supervised cross-entropy loss? This is just the standard loss function, okay, cross-entropy, so you take the ground truth label and compare that to the output prediction probabilities. Yeah, Joey: so none of these thetas seem to have a tilde over it, does that mean that... oh yeah that's a great point, I'm pretty sure they forgot to apply a tilde on this prediction", "start_timestamp": "00:12:56", "end_timestamp": "00:13:38", "start_second": 776, "end_second": 818, "url":
"https://www.youtube.com/watch?v=fgwurrihq4A&t=776s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "distribution so the idea is when you when the gradient propagates back through the through the model it will run through the supervised portion and it will also run through the Augmented application up here but but I'm pretty sure it's not flowing through just the unag mented unlabeled data good question I can't remember if there was a justification for that what's up and okay we're in this flow does the targeted or the diversity augmentation where does that come to play in this flow the target okay so so the idea is that based on the type of", "start_timestamp": "00:13:38", "end_timestamp": "00:14:28", "start_second": 818, "end_second": 868, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=818s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "input data that you have for the unlabeled data they'll apply a specific augmentation to that sample so so here they've listed the ones that they've used but the idea is they only use one at a time so based on the type of data you're using and the particular data set they'll apply a particular augmentation strategy so so I think in the next slides it'll show the particular policies they just they just sort of listed them all here on the left hand side is a leap of data so you have the ground truth the white star the right I", "start_timestamp": "00:14:28", "end_timestamp": "00:15:06", "start_second": 868, "end_second": 906, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=868s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "know if you don't have one right so the lesson1 the left hand side is your doing the 
prediction training and the right hand side you do the prediction inference. Yeah, so on the right side, yeah, the loss function does look different than on the left, but in both cases you have a version of the model with its parameters that can be updated with signal from the loss function, so in the unsupervised case this is the function that is leading to the signal flowing through the unsupervised portion", "start_timestamp": "00:15:06", "end_timestamp": "00:15:58", "start_second": 906, "end_second": 958, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=906s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "yeah so there's just two different loss functions, one looks like this and then the other is the center one. Oh my, okay, scheduling of training: do they train all supervised first, then unlabeled, or are they training all mixed together? That's a great question, and that relates to the big contribution that I mentioned earlier, TSA, the training signal annealing, so we'll probably wait until then to explain that. Yes, so for the unsupervised right side, it doesn't matter what the label is, the label is not what matters, the model is", "start_timestamp": "00:15:58", "end_timestamp": "00:16:40", "start_second": 958, "end_second": 1000, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=958s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "right, like the loss isn't it, yeah, really what you care about here is that the amount that you're changing your sample by is not differing significantly from the model prediction on the same unlabeled sample. Okay, so going into the augmentation strategies, the first one we'll talk about is AutoAugment, so
Auto augment learning augmentation strategies from data the general idea here is they have input data on the left and they basically have a model that will automatically search", "start_timestamp": "00:16:40", "end_timestamp": "00:17:33", "start_second": 1000, "end_second": 1053, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=1000s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "through multiple different augmentation strategies so you can see here in policy one there they're making a transformation rotation to the input data the other ones are mostly changing the color of the samples and and so the idea is the the model will automatically select the policy that is adding the most novel signal to to the yeah to the training so for example if and they find that in different in different data sets different policies are optimal so in in one data set you might need to modify the color of your images a lot to get", "start_timestamp": "00:17:33", "end_timestamp": "00:18:20", "start_second": 1053, "end_second": 1100, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=1053s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "more diversity and improve your training training set and in another you might need to rotate the images etc so basically it's a again coming back to the idea of being a targeted policy and this is something that you can vary on a task by task basis so even on a particular data set you can see what's the optimal augmentation approach any questions on this one so that was the one that they applied for vision and then they have to for text one is back translation so the general idea here it's pretty intuitive you take a question in one language in", "start_timestamp": "00:18:20", "end_timestamp": "00:19:03", "start_second": 1100, 
"end_second": 1143, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=1100s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "their case they used English to French so they trained a machine translation model between those two languages then they translate the English sentence into French they then take that French translation from the model and they translate that back into English and and then they use that as the Augmented sample in the model so obviously any machine translation model is going to have some some loss and it's not going to exactly translate something the same way back and forth so in this case you can see a lot of it has stayed the same", "start_timestamp": "00:19:03", "end_timestamp": "00:19:43", "start_second": 1143, "end_second": 1183, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=1143s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "this is Google Translate but but this word crankily was previously grin jingly and I think spoof spoof gets translated to tragic travesty so so this is one way that you can augment your samples and then the other one that they have for text is this tf-idf based word replacement so and they use this for text classification so so the idea here is sometimes in back translation the the Augmented transformation might actually miss translate some of the key words for that sample and in the classification tasks and so here they basically assign", "start_timestamp": "00:19:43", "end_timestamp": "00:20:33", "start_second": 1183, "end_second": 1233, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=1183s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "an IDF score to each word in the in the sample 
and then they randomly sample or randomly swap out words and giving a higher likelihood to swap words that have a low IDF score so here you can see I've just sort of created this example and so in this case the words this in decides to etc are transformed or swapped but but the words that are a little bit more rare and therefore have a higher IDF score such as pathetically cringing lease poof those are more likely not to be swapped out and so that's yeah based on the intuition that certain keywords", "start_timestamp": "00:20:33", "end_timestamp": "00:21:19", "start_second": 1233, "end_second": 1279, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=1233s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "sometimes are really useful for text classification any questions on the strategies those are the three okay so so now we come back to the question of how do we balance the need for having a large model so so when we're dealing with unlabeled data of which we have a really often you have much higher volume you have much more unlabeled data than say labeled data and so you generally would need a very large model to to train on that data so but but you may have a small amount of labeled data so you wanna the question is how to balance", "start_timestamp": "00:21:19", "end_timestamp": "00:22:03", "start_second": 1279, "end_second": 1323, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=1279s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "a need for a large model while preventing overfitting at the same time and so they're their answer to that question is to gradually release the training signals of supervisors examples as the model is trained on more and more of the unsupervised examples so I'll show the the equations for for all this okay so here 
let's see, so B here is the batch, that is just a renormalization constant, and the key is this part over here. so again, this is, sorry, this is the objective function, and so really what they've introduced is", "start_timestamp": "00:22:03", "end_timestamp": "00:22:52", "start_second": 1323, "end_second": 1372, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=1323s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"}
{"video_id": "fgwurrihq4A", "text": "this portion on the right hand side, and this is all with regards to the labeled samples. so what's this constant, eta? yeah, so eta sub t is a threshold that varies with your training progress, we'll see that in the next slide, but basically it's a threshold, and if the model's prediction probability on the correct class for a labeled sample is above that threshold then this will evaluate to zero, and therefore that sample's signal will not get propagated through to the loss function at that", "start_timestamp": "00:22:52", "end_timestamp": "00:23:46", "start_second": 1372, "end_second": 1426, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=1372s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"}
{"video_id": "fgwurrihq4A", "text": "time. and yeah, so the constant will change over time, but generally the idea here is that when eta is small then you'll pretty much be rejecting most signal from labeled samples from going to the loss function, so you'll be preventing the model from overfitting on say a small set of labeled data. so that was the reason they introduced this. any questions? question: what is I? the indicator function, yes, sorry I didn't say that, I is the indicator function here, and Z is just to", "start_timestamp": "00:23:46", "end_timestamp": "00:24:34", "start_second": 1426, "end_second": 1474, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=1426s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"}
{"video_id": "fgwurrihq4A", "text": "rebalance or renormalize the effect from that during training. does eta go up or down? it's always increasing. okay, so I'll show the schedule: they introduce a few different schedules for eta, and here's the equation for eta on the right. K is the number of classes in the classification example, and lambda of t you can see in the plot, so lambda of t varies with the training progress, and as training progress increases lambda increases, and the threshold also increases. so at the beginning lambda", "start_timestamp": "00:24:34", "end_timestamp": "00:25:29", "start_second": 1474, "end_second": 1529, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=1474s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"}
{"video_id": "fgwurrihq4A", "text": "will be zero and eta will be one over K, so 1 over K is 1 over the number of classes, which is just the random chance of predicting a sample, and then at the end eta will be 1. and so coming back to this equation, if eta is 1 that means practically every single sample will be carried forward towards the loss function, and at the beginning, when it's 1 over K, only the samples that the model is very unconfident on will actually be used in the loss function. and so the idea here, the intuition, is that if you", "start_timestamp": "00:25:29", "end_timestamp": "00:26:16", "start_second": 1529, "end_second": 1576, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=1529s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"}
{"video_id": "fgwurrihq4A", "text": "have a small labeled set, so you have only a few examples that have labels, you want to avoid your model overfitting on those at the beginning of the training, so they suggest using this exponential schedule for the case where you have a low number of labeled examples. and so yeah, the idea is at the beginning you won't be feeding as much signal from those samples to your loss function, but by the end you can start releasing more and more of it once more of the unlabeled data has been incorporated", "start_timestamp": "00:26:16", "end_timestamp": "00:26:54", "start_second": 1576, "end_second": 1614, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=1576s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"}
{"video_id": "fgwurrihq4A", "text": "and then conversely, if you have say a large number of labeled samples, then you can use a log schedule, the green line here, and so that will release a lot of the supervised signal at the beginning and less at the end. any questions on this? I think that's it before the break, so we'll just take a 5-minute break and then afterwards we'll go over the results and have some discussion. are people online back? the sound is working? sounds good, okay great. so we're gonna do a five minute break now. we just did a five minute break", "start_timestamp": "00:26:54", "end_timestamp": "00:27:47", "start_second": 1614, "end_second": 1667, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=1614s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"}
{"video_id": "fgwurrihq4A", "text": "and we're gonna do experiments. so basically they applied this method to two different types of tasks, text classification and then image recognition, so a couple of vision benchmarks. these are the actual data sets that they used: there's a mixture of binary and five class text classification, most of it is sentiment, except for DBPedia which is I believe
categories, and I think DBPedia actually has fourteen classes or ten classes. the two image benchmarks both have ten classes. they also use ImageNet, they test on", "start_timestamp": "00:27:47", "end_timestamp": "00:28:28", "start_second": 1667, "end_second": 1708, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=1667s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"}
{"video_id": "fgwurrihq4A", "text": "ImageNet, and they do some ablation studies for TSA and for the targeted augmentation. so for text classification, the settings for the labeled data: the goal here was to use a small number of labeled samples, so for binary classification that means just twenty labeled samples and the rest of the data coming from the unlabeled, and for the five class classification they found they needed to use a bit more, so here they use twenty five hundred total samples, and so that's five hundred per class. so when speaking to the author", "start_timestamp": "00:28:28", "end_timestamp": "00:29:11", "start_second": 1708, "end_second": 1751, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=1708s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"}
{"video_id": "fgwurrihq4A", "text": "the goal here was to find out how low could they go. so I was asking him, did you experiment, or how did you stumble across these numbers? and really they wanted to determine how few labeled examples they could keep while still achieving really strong results from the unlabeled data. for IMDB, which is one of the binary classification tasks, they use the concatenation of the training set that they didn't use, so the training data that was not used as supervised, they use that as unlabeled, and then they use the rest of", "start_timestamp": "00:29:11", "end_timestamp": "00:29:51", "start_second": 1751, "end_second": 1791, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=1751s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"}
{"video_id": "fgwurrihq4A", "text": "the unlabeled set. so I believe the total training set is 25,000 and the unlabeled set is about 50,000, so that's how much unlabeled data they're using there, and for Yelp and Amazon they obtained some really large review datasets, and I believe there was something like 6 million samples in the unlabeled. and another choice: I think for the most part they used one augmented sample per unlabeled sample, but he said that would be something that might be a task specific parameter and that", "start_timestamp": "00:29:51", "end_timestamp": "00:30:32", "start_second": 1791, "end_second": 1832, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=1791s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"}
{"video_id": "fgwurrihq4A", "text": "you could adjust, so for some tasks, for each unlabeled sample you might want to make a couple of augmentations and use both. and as for the model, they try a few different initialization schemes, all working on the transformer architecture applied in BERT: they have just a random initialization, then BERT-base, BERT-large, and BERT-large fine-tuned on the unlabeled in-domain data, so the same unlabeled data that they're using, and for each setting they compare the performance for each of these", "start_timestamp": "00:30:32", "end_timestamp": "00:31:20", "start_second": 1832, "end_second": 1880, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=1832s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"}
{"video_id": "fgwurrihq4A", "text": "settings with and without the unsupervised data augmentation method, UDA. so here
are the results for the text benchmarks. so you can see at the top the data set name, and then below that the number of supervised examples that exist in that data set. the top two results are the pre-BERT state of the art, so that's how influential BERT was, there's a before BERT and an after BERT, so they report both; in some cases BERT was better than the state of the art, in other cases I guess it's close, and then their", "start_timestamp": "00:31:20", "end_timestamp": "00:32:06", "start_second": 1880, "end_second": 1926, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=1880s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"}
{"video_id": "fgwurrihq4A", "text": "results, the results from their paper, are in the bottom, in the semi-supervised setting. again, they've got the different initialization strategies on the left, and a UDA cross or check indicates whether or not they use UDA. and these are error rates here, so lower is better, and below the data set name in the bottom you can see the number of labeled samples that they used. so for IMDB they literally only use 20 examples, and when they use an initialization from a fine-tuned BERT model they're", "start_timestamp": "00:32:06", "end_timestamp": "00:32:47", "start_second": 1926, "end_second": 1967, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=1926s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"}
{"video_id": "fgwurrihq4A", "text": "able to beat the state of the art and the pre-BERT state of the art. so that literally means they're just using 20 labeled examples along with BERT, which is fine-tuned on unlabeled data, and then they use augmented samples from the unlabeled data set, so that's one of the most significant improvements. and they found across the board they got very close to or actually beat the state of the art for these tasks. I think the one that they found the most difficult and performed the worst on was the 5", "start_timestamp": "00:32:47", "end_timestamp": "00:33:32", "start_second": 1967, "end_second": 2012, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=1967s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"}
{"video_id": "fgwurrihq4A", "text": "class classification for both Yelp and Amazon, you can see that their results are still a bit off from the baselines. so yeah, a really big finding here is that obviously, just with using BERT and no UDA, you still get 6.5 error with just 20 labeled examples, so a lot of this is just indicating how much information is contained in BERT, but it's quite significant that this shows that you can use pre-trained language models along with UDA, so it can be complementary to pre-trained language models. yeah, that's", "start_timestamp": "00:33:32", "end_timestamp": "00:34:27", "start_second": 2012, "end_second": 2067, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=2012s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"}
{"video_id": "fgwurrihq4A", "text": "right, on the 20 labeled samples plus fine tuning for the last case, but yeah, that's it. they do do some studies on the vision benchmarks that investigate whether or not doing augmentation on the unlabeled data is actually more advantageous than doing it on the labeled data, but I don't think they did that for this, so yeah, unless I'm wrong, that could be good feedback. I should say that, speaking with the author,", "start_timestamp": "00:34:27", "end_timestamp": "00:35:14", "start_second": 2067, "end_second": 2114, "url": 
"https://www.youtube.com/watch?v=fgwurrihq4A&t=2067s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "they did do other experiments that they didn't report here and I think I might have asked about that I just can't remember if if that was something they did but if they did it wasn't as as performant so for the vision benchmarks yeah so they wanted to use the the same model that was used for prior semi-supervised work and this was the wide residual networks with depth 28 and with 2 and and so yeah they use the same exact labeled samples that Auto augment use to find its optimal policy so for C far 10 which is a 10 10 Way image", "start_timestamp": "00:35:14", "end_timestamp": "00:36:03", "start_second": 2114, "end_second": 2163, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=2114s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "classification task they have 4000 samples and for this is Street View house numbers it's a digit recognition data set they used a thousand labeled examples and so they do 10 10 runs with this model and and calculate the average and the standard deviation so here are the results here these on the left you see the the fully supervised setting at the top so with no augmentation and then following that you see different different augmentation techniques that that that are applied to to the unlabeled data so I think for I can't", "start_timestamp": "00:36:03", "end_timestamp": "00:36:56", "start_second": 2163, "end_second": 2216, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=2163s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "remember how many samples but there are roughly maybe 50,000 unlabeled samples for 4c far 10 
yeah I'm not sure of the numbers right now but I think you might remember earlier I mentioned V 80 which was the the paper that they got the the idea to take the KL divergence between the distribution of unlabeled and an Augmented unlabeled sample so really what you're seeing here so obviously they performed the the previous state of the art method but between UDA and v80 the only real difference is the perturbation or", "start_timestamp": "00:36:56", "end_timestamp": "00:37:41", "start_second": 2216, "end_second": 2261, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=2216s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "transformation function that they applied since they're both using the same KL divergence technique and so so this is indicating that that targeted data augmentation strategy was was helpful so is there any metric or measure that we can look up into in order to see like they are saturated and why not using hundred-thousand right so we would say morning that's better but there's a point of a saturate there like the diversity that they don't were like the distribution or being the new thing that you're defining in our data we are", "start_timestamp": "00:37:41", "end_timestamp": "00:38:29", "start_second": 2261, "end_second": 2309, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=2261s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "like many in the data which doesn't matter after that right so for example Wendy chose to see a thousand there's a reason for that certain metric for us to see that's level oh you know I don't think they provide our metric there might be a good discussion for at the end though when we get to the discussion points like so so yeah keep that out of minds but but as far as I remember there wasn't anything 
provided. AutoAugment will find the optimal policy, but I didn't read the paper in depth, I'm not sure if it", "start_timestamp": "00:38:29", "end_timestamp": "00:39:12", "start_second": 2309, "end_second": 2352, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=2309s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"}
{"video_id": "fgwurrihq4A", "text": "optimizes anything else, such as the number of samples; as far as I know, it's just finding the optimal policy for a particular data set, the optimal augmentation transformation. okay, so they also perform some experiments on ImageNet, and the motivation here was: the initial data sets they used all had between two and ten classes, and they all had a low number of supervised examples, four thousand or less, so they wanted to use a data set that had a much higher number of classes, it's a bit", "start_timestamp": "00:39:12", "end_timestamp": "00:39:54", "start_second": 2352, "end_second": 2394, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=2352s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"}
{"video_id": "fgwurrihq4A", "text": "of a harder task, and much more supervised examples, and see if this approach was still applicable or if it was really only sort of a niche improvement for smaller data sets. and then they also wanted to see if they could make use of out of domain unlabeled data that had different class distributions. so keep in mind, in all of the previous examples the unlabeled data sets that they were using largely were just the actual labeled data without the labels being used, so it's very much in domain, literally samples that", "start_timestamp": "00:39:54", "end_timestamp": "00:40:36", "start_second": 2394, "end_second": 2436, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=2394s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"}
{"video_id": "fgwurrihq4A", "text": "have a true label, they just didn't tell the model what the label was. so they wanted to see if this would still be applicable if your unlabeled data was coming from out of domain. and so ImageNet overall has almost 1.3 million images and about a thousand different classes. they do a couple of settings: one is what they call ImageNet 10%, and this is where they take roughly 10% of all ImageNet data and use that as labeled samples, and they use all of the rest of ImageNet as unlabeled data, so ten", "start_timestamp": "00:40:36", "end_timestamp": "00:41:24", "start_second": 2436, "end_second": 2484, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=2436s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"}
{"video_id": "fgwurrihq4A", "text": "percent would be, I guess, one hundred and thirty thousand samples, and then they would have over a million unlabeled, or about a million. and then the other one is the fully supervised scenario, so this is where they use the entire ImageNet data as supervised data, and they obtain extra unlabeled data from a data set called JFT, it's another image data set, I believe it was automatically generated, so they essentially train a model on ImageNet and they use that model to source out the most relevant samples from the JFT", "start_timestamp": "00:41:24", "end_timestamp": "00:42:11", "start_second": 2484, "end_second": 2531, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=2484s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"}
{"video_id": "fgwurrihq4A", "text": "data set for each class in ImageNet, so basically for each class they take the, I think it was thirteen hundred, most relevant examples from JFT, and they use that as the unlabeled set for their experiment. any questions? okay. the baseline model that they used was ResNet-50 here. so they did encounter some issues with ImageNet: they observed that they had flat class distributions, so the prediction probability distributions across classes were pretty flat or uniform for the", "start_timestamp": "00:42:11", "end_timestamp": "00:43:00", "start_second": 2531, "end_second": 2580, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=2531s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"}
{"video_id": "fgwurrihq4A", "text": "unlabeled and the augmented unlabeled samples, so there really wasn't much signal coming through from the unlabeled part of the training setup. and yeah, that's probably to do with the fact that there are so many more classes, and there's also so much supervised data available here, actually sorry, no, the amount of supervised data wasn't an issue, I think this was more of an issue even for the ImageNet 10% setting, where they only had 10% of the training data as supervised this was an even larger issue, so", "start_timestamp": "00:43:00", "end_timestamp": "00:43:42", "start_second": 2580, "end_second": 2622, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=2580s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"}
{"video_id": "fgwurrihq4A", "text": "this led to the unsupervised training signal being pretty much dominated by the supervised signal. and so their solution, another one of the contributions I mentioned in the beginning, was to sharpen, in a few different ways, the predicted distribution produced on unlabeled samples, basically to encourage the model to use the training signal from the unlabeled samples. so the specific
techniques that they used were entropy minimization, so they added an entropy term to", "start_timestamp": "00:43:42", "end_timestamp": "00:44:21", "start_second": 2622, "end_second": 2661, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=2622s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"}
{"video_id": "fgwurrihq4A", "text": "the overall objective to regularize the predicted distribution on the augmented examples to have a low entropy, so again to discourage these uniform distributions of probabilities. they also did softmax temperature control, and this is to control the temperature of the softmax when they're computing the prediction on the original example. and then confidence based masking, so this was where they basically removed any unlabeled samples that the model was not very confident on. so all of these approaches were to try", "start_timestamp": "00:44:21", "end_timestamp": "00:45:04", "start_second": 2661, "end_second": 2704, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=2661s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"}
{"video_id": "fgwurrihq4A", "text": "and sharpen the probability distribution on the unlabeled augmented samples. question: but if the actual distribution of your labeled training data is uniform, and then you try to sort of force it to take a different shape on your unlabeled data, why should that work? so, the number of samples is uniform across the classes, but here this is saying the problem was that the model's prediction probabilities across the different", "start_timestamp": "00:45:04", "end_timestamp": "00:45:51", "start_second": 2704, "end_second": 2751, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=2704s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"}
{"video_id": "fgwurrihq4A", "text": "classes for an individual example were more uniform. so the example has some category, let's say it includes a giraffe, so that should be the true label, but the actual output prediction distribution was pretty uniform, so they wanted to find ways to encourage it to be sharper, yeah, to be less prone to just being uniform. is the KL divergence per batch? no, I believe it would be calculated per sample, pretty sure it would be each sample. if you look at the figure, they first have the average error and then the unsupervised error, and then it", "start_timestamp": "00:45:51", "end_timestamp": "00:46:48", "start_second": 2751, "end_second": 2808, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=2751s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"}
{"video_id": "fgwurrihq4A", "text": "will do back propagation, right? hmm, it might do it separately, but it wouldn't necessarily do it all together, so the loss calculation is kind of a batch-wise KL divergence, okay, yeah, I suppose that they do. these are the results here: the left is the ImageNet 10%, again with the baseline ResNet-50, and the right is the fully supervised ImageNet setting, and top-1 is the model's accuracy for its first top prediction, top-5 is its accuracy looking within the first 5 predictions. and yes, so you can", "start_timestamp": "00:46:48", "end_timestamp": "00:47:34", "start_second": 2808, "end_second": 2854, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=2808s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"}
{"video_id": "fgwurrihq4A", "text": "see that it improved on the baselines in both cases, I guess with a smaller improvement for the fully supervised setting, where they're using an additional 1.3 million samples from JFT
yeah, over the previous AutoAugment policy, I think for ImageNet 10%, was it for this one? yeah. so someone was asking earlier about why not use the labeled examples for augmentation for the baseline, and I think they also did run that experiment here, so they used the 10% labeled examples as for", "start_timestamp": "00:47:34", "end_timestamp": "00:48:21", "start_second": 2854, "end_second": 2901, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=2854s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"}
{"video_id": "fgwurrihq4A", "text": "augmentation, and so that I think got something like 58 accuracy, so still significantly below. and yeah, moving on to the ablation studies, they did a couple. the first one is for TSA, to determine if this training signal annealing actually made a difference, and so they did it for Yelp-5 and CIFAR-10. in the Yelp case they didn't use BERT pre-training, just to really make sure that all the information was coming from the data used and not from the pre-trained distribution or language model", "start_timestamp": "00:48:21", "end_timestamp": "00:49:12", "start_second": 2901, "end_second": 2952, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=2901s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"}
{"video_id": "fgwurrihq4A", "text": "and yes, so you can see in the first case, where there's an X, that's where they're not using any TSA, and so you can see in both cases applying some schedule does improve the results, and you can also see that it's slightly different for the two cases: I think for CIFAR-10 there were 4,000 examples, and so that's optimal with the linear schedule, and Yelp-5 was about 2,500, and so that's one of the lower data settings that they hypothesized would be best with the exponential schedule. and then the other one was a question", "start_timestamp": "00:49:12", "end_timestamp": "00:49:57", "start_second": 2952, "end_second": 2997, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=2952s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"}
{"video_id": "fgwurrihq4A", "text": "yes, the question is, would TSA work for pure supervised learning? that's a good question, I'm not sure, I don't think there's a clear reason that it wouldn't, it seems like you could apply it. so the idea would be that you would, yeah... yeah, I don't know if there would be as much motivation to do that when you're not incorporating additional data, but that's a good question, maybe someone in the discussion will have some thoughts. and for the actual data augmentation policy, they wanted to see here: did the targeted data", "start_timestamp": "00:49:57", "end_timestamp": "00:51:00", "start_second": 2997, "end_second": 3060, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=2997s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"}
{"video_id": "fgwurrihq4A", "text": "augmentation that they applied actually make a difference. so they have the two image data sets here, and on the left you can see different augmentation strategies: AutoAugment was the one they actually used, and switched augmentation implies they took the optimal policy from one data set and applied it on the other data set. so you can see that it really does perform best when you take the optimal augmentation policy, the most targeted policy, and apply that to the data set. I think that's mostly it, just some", "start_timestamp": "00:51:00", "end_timestamp": "00:51:47", "start_second": 3060, "end_second": 3107, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=3060s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": 
"https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "conclusions so in summary they introduced a method called UTA unsupervised state augmentation that used targeted data augmentation to generate diverse and realistic perturbations to input samples and so this enforces the model to be smooth the way they applied it with respect to these perturbations they also introduced TSA training signal annealing and this was to prevent UDA from overfitting when a lot more unlabeled data is available compared to the amount of labeled data and their results showed that they achieved state-of-the-art on", "start_timestamp": "00:51:47", "end_timestamp": "00:52:27", "start_second": 3107, "end_second": 3147, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=3107s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "IMDb with only 20 labeled examples and they were able to reduce vision error rates by more than 30 percent in the in the semi supervised setting and they were also able to leverage unlabeled data to improve performance on on image net even though what's a maybe a small improvement and so some overall findings are that data augmentation and semi-supervised learning are well-connected topics that should be explored further and you da+ unsupervised representation learning in this case Bert can compliment each other", "start_timestamp": "00:52:27", "end_timestamp": "00:53:10", "start_second": 3147, "end_second": 3190, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=3147s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "and work work well together and so that's that's the presentation we just have some discussion points over class that has weight less data right someone was asking about that earlier I don't think no they didn't 
do that they specifically in the case where they were using ImageNet and they were boosting from the JFT data set they specifically took an even number of samples across all the classes from JFT and I don't think there was any case where they had an imbalance across classes oh yeah that would be an", "start_timestamp": "00:53:10", "end_timestamp": "00:53:56", "start_second": 3190, "end_second": 3236, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=3190s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "interesting problem to solve for sure okay so Florian and I or Florian and the author so TJ had one discussion point that he wanted to pose to the audience which was so to some extent this type of approach the ability to use more unlabeled data can help to make machine learning more universally accessible because it's much more affordable to train models on unlabeled data and so what applications do you guys think stand to benefit the most from semi-supervised learning that haven't already been tried in", "start_timestamp": "00:53:56", "end_timestamp": "00:54:48", "start_second": 3236, "end_second": 3288, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=3236s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "this paper big problem is you have like a cooking video where you want to get the recipe use automatic machine translation to get to rest there are like small data sets available and it's a very complex problem because of just how recipes are organized right so if you have a small training data that can really work there are tons of unlabeled data for like videos for recipes right hey there's a GIF recipe or whatever where you can get a large amount of data very easily it's just everything sounds a little hmm so so video
yeah yeah", "start_timestamp": "00:54:48", "end_timestamp": "00:55:30", "start_second": 3288, "end_second": 3330, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=3288s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "interesting take a video and you just make a description of what is happening in the video right yeah I don't know if that's been and tried or or not and then I think so as well what are some other targeted augmentation strategies for text so they introduce back translation and this word replacement I'm sorry they didn't introduce those those those have already existed at least the first one are there any other strategies for for text augmentation that anyone's used or think could be could be valuable is UI regression when you're trying to compare", "start_timestamp": "00:55:30", "end_timestamp": "00:56:16", "start_second": 3330, "end_second": 3376, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=3330s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "your interface with a new interface and you don't want to label everything and in the test cycle you want it to be on labeled potentially measures of potential here again you are it hasn't gained much traction because you'd have to label every part of that that's not enough for agile so there was some and so part of the reason that I asked this question is I felt like the approaches for text augmentations still felt like they were may be most useful for for a text classification case but they're there many other case many other tasks", "start_timestamp": "00:56:16", "end_timestamp": "00:57:07", "start_second": 3376, "end_second": 3427, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=3376s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": 
"https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "to solve in in NLP and and so I think that they said back translation has been shown to be successful for other tasks but I think that the word replacement one might not be as relevant so I was curious if anyone had had tried any others and then Florian I think you had a few here so so yeah what are some other ways to automatically optimize data augmentation strategies the idea here instead you see they have like some kind of targeted augmentation strategy right but instead of having some kind of time use augmentation start sheet let's put in", "start_timestamp": "00:57:07", "end_timestamp": "00:57:51", "start_second": 3427, "end_second": 3471, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=3427s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "kind of like a neural network right so I kind of like again and then at the same time you optimize scan to make the best possible kind of transformations right so the kind of the gang would be optimized to help the other neural network to get to better care older versions right I wouldn't know if doesn't make sense so it would work this is beginning like if you want to have minimum corralled emergency we just do the identity right so no but I think is an idea I can throw in a room koc is like some kind of variation of attack", "start_timestamp": "00:57:51", "end_timestamp": "00:58:42", "start_second": 3471, "end_second": 3522, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=3471s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "order right it doesn't need to be again some generative model to come it with a policy yeah I think it ties in with some of the other points so you're asking you thinking about should we compare 
UDA or combine it with automatically generated data sets this is like an active field of research as well yes so so data sets that are completely fabricated without any annotation is this something that competes head-to-head like is it strictly one or the other or can it be combined and another food for thought", "start_timestamp": "00:58:42", "end_timestamp": "00:59:27", "start_second": 3522, "end_second": 3567, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=3522s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "is will in the long run UDA be a cost-effective alternative to cheap manual labeling so if manual labeling is I think you threw out a number like 140 US per hour will this actually be a cost-effective alternative if the costs are that low another angle this type of approach would face is data privacy I'm not sure if there's an easy way to build on something like differential privacy or something like that in order to ensure that when we're gobbling up unlabeled", "start_timestamp": "00:59:27", "end_timestamp": "01:00:18", "start_second": 3567, "end_second": 3618, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=3567s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "data that we're not you know accidentally throwing in user data that maybe could be picked up for malicious attacks right so something to keep in mind when applying this method the model is still looking at the present data but you just don't have the label yeah so if we have all the customer data right and you want to do something you should I mean you still have all the customer data looks at all the customers who change much and you just don't have some
kind of", "start_timestamp": "01:00:18", "end_timestamp": "01:01:01", "start_second": 3618, "end_second": 3661, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=3618s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "decision [Music] we applied like we were saying the case about the credit scores so like maybe there's a lot to inconsistency in like the ones who retro actively got approved for credit so like there might be the human error inconsistency or just like the fact that there is self selection or something so like this could kind of help with that right like you're saying there are some people who maybe fit the traits of being approved but then they just somehow didn't get approved so that they're actually being not approved", "start_timestamp": "01:01:01", "end_timestamp": "01:01:45", "start_second": 3661, "end_second": 3705, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=3661s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "right yeah there's some there's some bias of some sort in the labeling process the annotation process then definitely that could mean that the the samples in your unlabeled set are valid but but just not being included for one reason or another so I think yeah those kind of cases are really good candidates for for this kind of work any other thoughts or questions in case of a supervisor provide sort of clustering job and using the sort of the model only on the label to maybe if you have to do a job I'm very nervous very rare that said could you", "start_timestamp": "01:01:45", "end_timestamp": "01:02:33", "start_second": 3705, "end_second": 3753, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=3705s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": 
"https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "actually refers to a cluster job on a much smaller data set and use those as the label and service supervisor set and apply this for the classes of all the others and see if those classes would actually fit in the initial foster class and whether it be actually faster than doing the big massive across from John just okay so so the idea would be to take a subset of the unlabeled data you're strictly unlabeled data take a subset train a clustering model on that and use the clusters as ground truth labels that sounds to me like a", "start_timestamp": "01:02:33", "end_timestamp": "01:03:14", "start_second": 3753, "end_second": 3794, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=3753s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "cool way of starting from scratch like you have no labels I think that's been that's been done before but in conjuncture with with this type of approach would be interesting yeah I don't see why you couldn't do it I think you might you might find that I don't know how well it would perform yeah as opposed to just doing all once I have like one criticism about this paper I only skimmed it and I wasn't able to follow your entire presentation so crab me if I'm wrong but what he liked it seemed they called unsupervised data", "start_timestamp": "01:03:14", "end_timestamp": "01:04:02", "start_second": 3794, "end_second": 3842, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=3794s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "augmentation but it seems that like especially in the case of text they use the model to translate and translate back and there's a very right it's a it's a separate model they train a separate machine translation 
model yeah to do the back translation but they do utilize that model to perform data augmentation right they use that machine translation model just to generate the augmented samples oh yes yeah so I guess my criticism is that they say it's unsupervised but this is not exactly distillation but there is", "start_timestamp": "01:04:02", "end_timestamp": "01:04:42", "start_second": 3842, "end_second": 3882, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=3842s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "some kind of because that translation and translation back that's supervised and there is a lot of that kind of signal that's being learned and distilled by doing this approach even the tf-idf frequency approach so there's a very strong prior that you built into the augmentation part so this doesn't feel like exactly unsupervised in the sense that you just give it a lot of data but there's some kind of implicit supervision I don't think it's exactly distillation but there is something there I think", "start_timestamp": "01:04:42", "end_timestamp": "01:05:17", "start_second": 3882, "end_second": 3917, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=3882s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "it's acknowledged they call it I mean there is in the language of targeted data augmentation so you're picking an augmentation strategy that is targeted to the particular task and in some cases to the data set you're working on so I think it's not an unacknowledged part of the paper and like they are doing it that way on purpose nonetheless I do agree that I don't know how well some of those that was my question here how some of those
augmentation strategies would apply to other for example text", "start_timestamp": "01:05:17", "end_timestamp": "01:05:56", "start_second": 3917, "end_second": 3956, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=3917s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "tasks so so yeah it remains to be seen how much work you would have to do to come up with an augmentation policy that is successful for a particular task and data set I think that would probably be very interesting future work something to add on to that something I think would be interesting is what happens if you have a worse translated back translation because you're adding in creative for the patient to it right but does your unit become better because you have worse back translation for that oh if you if you if your machine", "start_timestamp": "01:05:56", "end_timestamp": "01:06:34", "start_second": 3956, "end_second": 3994, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=3956s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "translation all was less accurate yeah yeah I think that comes back to the trade-off between diversity and validity if your translation model is terrible then the actual sample that you generate is probably not going to be a valid sample for the label that it's supposed to be assigned to but if you don't change it enough then you're not really adding any new signal so so yeah there's some inherent trade-off there I think so yeah maybe using a slightly worse translation model would be effective definitely not using a terrible one but", "start_timestamp": "01:06:34", "end_timestamp": "01:07:16", "start_second": 3994, "end_second": 4036, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=3994s", "title": "Unsupervised Data Augmentation | AISC", 
"thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "yeah it's a good point the other scenario and that's something the various amid sedimentation so that's means they had a like the jury instructed the different level of the target from the background so in that of Seven AO how can we use the you dated to did up automatic when you have different social services for image segmentation yeah object detection or yeah because this scenario is one Operator for the pilot but the other some cannot do you know network about the model the end is a the outputs will be the images", "start_timestamp": "01:07:16", "end_timestamp": "01:08:08", "start_second": 4036, "end_second": 4088, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=4036s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "most sorry at the end the output of the ornamentation yeah hmm right so that's a good question I'm not too sure I'm sure there's been work done on augmentation for object detection and and segmentation but I'm not I'm not too familiar with that work yeah so I don't know exactly how its applied settings is that the question is having work before you leveled the the trailer the car view yeah yeah but for this one you it's it's a it's a prepared via the distribution of the white and something else buddy in the sedimentation but how we prepare of", "start_timestamp": "01:08:08", "end_timestamp": "01:09:01", "start_second": 4088, "end_second": 4141, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=4088s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "fgwurrihq4A", "text": "the year I mean I think you can still some augmentations are pretty clearly still valid if you just changed the color for example the object detection is still going to be valid you're not 
transforming or rotating removing anything outside of a box so I think that would that's one approach you could take and and I think color augmentation was one of the strategies that that they used for the image data sets so maybe we'll cap it there thanks very much for your attention there's some pendeks that's all and that's it", "start_timestamp": "01:09:01", "end_timestamp": "01:09:42", "start_second": 4141, "end_second": 4182, "url": "https://www.youtube.com/watch?v=fgwurrihq4A&t=4141s", "title": "Unsupervised Data Augmentation | AISC", "thumbnail": "https://i.ytimg.com/vi/fgwurrihq4A/hqdefault.jpg"} {"video_id": "EkzZSaeIikI", "text": "I kept trying to get this secret to work for me for such a long time and it just wouldn't work until I changed one thing and I just want to show you what that one thing is are you ready let's go when it did work what happened is I manifested this no I'm only joking watch this because actually what I manifest was much more than just a five thousand dollar massage chair aside from a $20,000 watch I don't know if you can see out here and then this all became my lifestyle alongside that beautiful little boat there now your", "start_timestamp": "00:00:00", "end_timestamp": "00:00:52", "start_second": 0, "end_second": 52, "url": "https://www.youtube.com/watch?v=EkzZSaeIikI&t=0s", "title": "Why 'The Secret' Won\u2019t Work For You Until You Do This.. 
[Law of Attraction]", "thumbnail": "https://i.ytimg.com/vi/EkzZSaeIikI/maxresdefault.jpg"} {"video_id": "EkzZSaeIikI", "text": "question is why am i showing all of this to you and what a show-off who is this guy anyway I just want to break it all down for you because it's not about me and I showed you all of this stuff it's because I want you to know that it is possible one of the biggest things is that we need to believe before we can manifest and that's the biggest problem that a lot of people don't know how to get so I want to break it down for you into three simple steps in this video how to actually manifest and attract what you like what you desire in life", "start_timestamp": "00:00:52", "end_timestamp": "00:01:23", "start_second": 52, "end_second": 83, "url": "https://www.youtube.com/watch?v=EkzZSaeIikI&t=52s", "title": "Why 'The Secret' Won\u2019t Work For You Until You Do This.. [Law of Attraction]", "thumbnail": "https://i.ytimg.com/vi/EkzZSaeIikI/maxresdefault.jpg"} {"video_id": "EkzZSaeIikI", "text": "using the law of attraction or the principles from the secret but before we get into it so real hit it Bob today isn't that they're struggling it's not they can't get out of this it's because the that they're in isn't big enough and why I say that is because if that was big enough right now if the people had a girl acquaintance or that head they would actually get that ass moving always understand that emotions is what these emotion is energy in motion so if your emotion or drive is not stronger you're not gonna do anything to get out of it", "start_timestamp": "00:01:23", "end_timestamp": "00:01:54", "start_second": 83, "end_second": 114, "url": "https://www.youtube.com/watch?v=EkzZSaeIikI&t=83s", "title": "Why 'The Secret' Won\u2019t Work For You Until You Do This.. 
[Law of Attraction]", "thumbnail": "https://i.ytimg.com/vi/EkzZSaeIikI/maxresdefault.jpg"} {"video_id": "EkzZSaeIikI", "text": "what's up guys this era coherence national speaker entrepreneur and best-selling author and in this video I'll break down for you the three keys to make the secret work for you finally while I relax on this chair love it so much so we're gonna get straight into it the first thing using the law of attraction is we must visualize and this what the secret talks about but you know just visualizing alone is really not going to be able to allow the manifestation to work for you the biggest problem is is because there's a", "start_timestamp": "00:01:54", "end_timestamp": "00:02:27", "start_second": 114, "end_second": 147, "url": "https://www.youtube.com/watch?v=EkzZSaeIikI&t=114s", "title": "Why 'The Secret' Won\u2019t Work For You Until You Do This.. [Law of Attraction]", "thumbnail": "https://i.ytimg.com/vi/EkzZSaeIikI/maxresdefault.jpg"} {"video_id": "EkzZSaeIikI", "text": "lot of that goes on through the day so let's say for example you spend I don't know three minutes five minutes visualizing in the morning the rest of the day you're visualizing something else so what that means is it kind of like puts us in a position where we're manifesting everything that we don't want and the small time and energy that we put into what we do want isn't manifesting first so what is the solution the solution is to understand a step number two and step number two is the principle of the law of attraction", "start_timestamp": "00:02:27", "end_timestamp": "00:02:58", "start_second": 147, "end_second": 178, "url": "https://www.youtube.com/watch?v=EkzZSaeIikI&t=147s", "title": "Why 'The Secret' Won\u2019t Work For You Until You Do This.. 
[Law of Attraction]", "thumbnail": "https://i.ytimg.com/vi/EkzZSaeIikI/maxresdefault.jpg"} {"video_id": "EkzZSaeIikI", "text": "we have Canada here say hi the principle of the law of attraction is to understand that it's all based around emotions the more you are able to magnify your emotions what happens is a speeds the whole law of attraction of but aside from that it makes it much much more powerful because vibrations if you think about it when you are very very emotionally charged up your vibration is kind of raised to a crazy crazy crazy vibrational frequency and the more crazy it is the more it sets it in there so even if you just do it for", "start_timestamp": "00:02:58", "end_timestamp": "00:03:34", "start_second": 178, "end_second": 214, "url": "https://www.youtube.com/watch?v=EkzZSaeIikI&t=178s", "title": "Why 'The Secret' Won\u2019t Work For You Until You Do This.. [Law of Attraction]", "thumbnail": "https://i.ytimg.com/vi/EkzZSaeIikI/maxresdefault.jpg"} {"video_id": "EkzZSaeIikI", "text": "three minutes five minutes in the morning hold stronger vibrations than your whole entire day so it's all about the second part raising those emotions and really getting into the group so step number one is to actually visualize that number two is to really amplify the emotions as if you're living it right now and finally step number three Aires with that step number three is the most important part at all which is to let go now you're thinking what do you mean like let go I think a lot of people recently have been getting really really", "start_timestamp": "00:03:34", "end_timestamp": "00:04:10", "start_second": 214, "end_second": 250, "url": "https://www.youtube.com/watch?v=EkzZSaeIikI&t=214s", "title": "Why 'The Secret' Won\u2019t Work For You Until You Do This.. 
[Law of Attraction]", "thumbnail": "https://i.ytimg.com/vi/EkzZSaeIikI/maxresdefault.jpg"} {"video_id": "EkzZSaeIikI", "text": "confused when I say set it and forget it that's what I write about in my book they say Eric if I said it when do I said it how long do I sell it for when do I let it go what's the balance between setting it and forgetting it well actually you said it early on in the morning when you're most at peace the rest of the time you've got to have trust I was doing this call yesterday for the whole superphone thing I know a lot of you a lot of you have been texting me over the last couple of days and I am receiving your messages by the", "start_timestamp": "00:04:10", "end_timestamp": "00:04:37", "start_second": 250, "end_second": 277, "url": "https://www.youtube.com/watch?v=EkzZSaeIikI&t=250s", "title": "Why 'The Secret' Won\u2019t Work For You Until You Do This.. [Law of Attraction]", "thumbnail": "https://i.ytimg.com/vi/EkzZSaeIikI/maxresdefault.jpg"} {"video_id": "EkzZSaeIikI", "text": "way it's just like there's thousands and thousands of thousands of requests coming in so it's very hard to actually reply to you all however one of the biggest questions that I keep getting from a lot of you is when you said it when do you forget it and also how do you actually trust the universe what if you have a lot of doubt happening within you because if there's doubt it's very hard to make the Law of Attraction work for you so the whole idea of it what I was teaching yesterday during the live call was that you need to get this into", "start_timestamp": "00:04:37", "end_timestamp": "00:05:09", "start_second": 277, "end_second": 309, "url": "https://www.youtube.com/watch?v=EkzZSaeIikI&t=277s", "title": "Why 'The Secret' Won\u2019t Work For You Until You Do This.. 
[Law of Attraction]", "thumbnail": "https://i.ytimg.com/vi/EkzZSaeIikI/maxresdefault.jpg"} {"video_id": "EkzZSaeIikI", "text": "your head and I want you to type this in the comments below those of you who are new to this channel we always type our learnings below to reaffirm the learning Who am I not to trust I've run my whole life based on this very very simple concept Who am I not to trust if you understand we were all created by something that's greater than us much powerful force a much magical force you can call it God you can name it the universe you can name it whatever you want to name it but we can't deny that thing exists now if that thing exists", "start_timestamp": "00:05:09", "end_timestamp": "00:05:42", "start_second": 309, "end_second": 342, "url": "https://www.youtube.com/watch?v=EkzZSaeIikI&t=309s", "title": "Why 'The Secret' Won\u2019t Work For You Until You Do This.. [Law of Attraction]", "thumbnail": "https://i.ytimg.com/vi/EkzZSaeIikI/maxresdefault.jpg"} {"video_id": "EkzZSaeIikI", "text": "who are we not to trust it has the decision to put you into this world when it wants to and it also has the decision to take you away and sometimes think about it how many times throughout your life so far have you felt totally out of control you don't know how to handle it you don't know how to deal with the situation and you felt so so challenged and you were just stuck but somehow you pulled through and that magic happened so if it's ever happened to you before whether in relationships and business and finances whatever it is then if nothing", "start_timestamp": "00:05:42", "end_timestamp": "00:06:14", "start_second": 342, "end_second": 374, "url": "https://www.youtube.com/watch?v=EkzZSaeIikI&t=342s", "title": "Why 'The Secret' Won\u2019t Work For You Until You Do This.. 
[Law of Attraction]", "thumbnail": "https://i.ytimg.com/vi/EkzZSaeIikI/maxresdefault.jpg"} {"video_id": "EkzZSaeIikI", "text": "actually happened before who are you not to trust because it always always the universe always always has your back and if you understand that and have that peace of mind then when you have that peace of mind everything starts shifting and Wow we start shifting we don't need to play your song I'm gonna play your song it's beautiful so I'm ready I made it for too long have you played [Music] a prize for whoever guesses what the song is called you can comment below anyway guys I'm gonna finish off for today because we have to go up to", "start_timestamp": "00:06:14", "end_timestamp": "00:07:07", "start_second": 374, "end_second": 427, "url": "https://www.youtube.com/watch?v=EkzZSaeIikI&t=374s", "title": "Why 'The Secret' Won\u2019t Work For You Until You Do This.. [Law of Attraction]", "thumbnail": "https://i.ytimg.com/vi/EkzZSaeIikI/maxresdefault.jpg"} {"video_id": "EkzZSaeIikI", "text": "Newcastle with ah okay you know where are you because today is dad's birthday so I'm a blog along the way I don't know it depends if we have enough time to do so but anyway guys if this video has been of any use to you whatsoever you know what to do hit that thumbs up hit the like button also comment below let us know where you guys have tuned in from and finally if you're new to this channel and have been any used to you remember remember to hit the subscribe button and the notifications button next year because every single day I'm", "start_timestamp": "00:07:07", "end_timestamp": "00:07:37", "start_second": 427, "end_second": 457, "url": "https://www.youtube.com/watch?v=EkzZSaeIikI&t=427s", "title": "Why 'The Secret' Won\u2019t Work For You Until You Do This.. 
[Law of Attraction]", "thumbnail": "https://i.ytimg.com/vi/EkzZSaeIikI/maxresdefault.jpg"} {"video_id": "NXaVLXSZdEw", "text": "Germany's super-rich no other European country has as many billionaires and while their fortunes are growing more and more Germans are living under the poverty line set by the Organisation for Economic Cooperation and Development the OECD the press frequently reports on the country's high income inequality and low social mobility but little is known about the super-rich nate'd sure money attracts success and success attracts money I do believe that I've experienced myself you can suddenly connect with people", "start_timestamp": "00:00:00", "end_timestamp": "00:00:34", "start_second": 0, "end_second": 34, "url": "https://www.youtube.com/watch?v=NXaVLXSZdEw&t=0s", "title": "Germany: The discreet lives of the super rich | DW Documentary", "thumbnail": "https://i.ytimg.com/vi/NXaVLXSZdEw/maxresdefault.jpg"} {"video_id": "NXaVLXSZdEw", "text": "previously out of reach who are the kind of shores that had - who are Germany's super wealthy how do they live and how do they see the country they live in the way wealthy people in Germany are talked about bothers me because it's sensationalizing in sycophantic and in no way reflects what wealthy people have done and continue to do for this country it creates this impression of rich people being like Scrooge McDuck that they had these money bins in which they wallow in their coins look Scrooge McDuck wants more more MORE that's not", "start_timestamp": "00:00:34", "end_timestamp": "00:01:16", "start_second": 34, "end_second": 76, "url": "https://www.youtube.com/watch?v=NXaVLXSZdEw&t=34s", "title": "Germany: The discreet lives of the super rich | DW Documentary", "thumbnail": "https://i.ytimg.com/vi/NXaVLXSZdEw/maxresdefault.jpg"} {"video_id": "NXaVLXSZdEw", "text": "my world I like earning money is less but not swimming in it we thought we wanted to get closer to the discrete world of Germany's ultra 
rich of company owners and heirs worth millions we wanted to find out what makes those on top of the world tick [Music] every year an exclusive event takes place in the Schlosshotel Kronberg near Frankfurt to which the public is not invited it's the annual Hall of Fame evening for the business monthly manager magazine hardly any other occasion in Germany draws as many wealthy business", "start_timestamp": "00:01:16", "end_timestamp": "00:02:08", "start_second": 76, "end_second": 128, "url": "https://www.youtube.com/watch?v=NXaVLXSZdEw&t=76s", "title": "Germany: The discreet lives of the super rich | DW Documentary", "thumbnail": "https://i.ytimg.com/vi/NXaVLXSZdEw/maxresdefault.jpg"} {"video_id": "NXaVLXSZdEw", "text": "owners [Music] esteemed presenters dear jury members dear ladies and gentlemen welcome to manager magazine's Hall of Fame when we first founded the Hall of Fame in 1992 we wanted to take a stand for excellence and unconditional entrepreneurship and against faintheartedness and averageness we begin our nominations today with Ralph Dommermuth his company United Internet is valued at around 11 billion and on our list of Germany's richest people he ranks 25th with a personal wealth of 4.5 billion euros", "start_timestamp": "00:02:08", "end_timestamp": "00:02:54", "start_second": 128, "end_second": 174, "url": "https://www.youtube.com/watch?v=NXaVLXSZdEw&t=128s", "title": "Germany: The discreet lives of the super rich | DW Documentary", "thumbnail": "https://i.ytimg.com/vi/NXaVLXSZdEw/maxresdefault.jpg"} {"video_id": "NXaVLXSZdEw", "text": "collectively the guests of this elegant evening are worth billions of euros this is the face of wealth in Germany mainly male and although we're close to the super-rich here their world remains somehow out of reach Hamburg home to manager magazine's parent company the Spiegel Group hardly any publication keeps closer tabs on Germany's ultra rich every year the magazine's team gathers information
on wealthy Germans and using the Forbes model makes a special edition with a list of Germany's 1001 richest people it's painstaking", "start_timestamp": "00:02:54", "end_timestamp": "00:03:43", "start_second": 174, "end_second": 223, "url": "https://www.youtube.com/watch?v=NXaVLXSZdEw&t=174s", "title": "Germany: The discreet lives of the super rich | DW Documentary", "thumbnail": "https://i.ytimg.com/vi/NXaVLXSZdEw/maxresdefault.jpg"} {"video_id": "NXaVLXSZdEw", "text": "work how many billionaires do we have last year it was a hundred and thirty six right no last year it was around 170 170 okay someone who wants to get on our list of the 1001 richest Germans needs to have around 100 million that doesn't have to be money in the bank most people have it as assets or as property but that's the ballpark we're looking for to be on our list of the richest Germans editor-in-chief Steffen Klusmann has been around Germany's ultra-wealthy for", "start_timestamp": "00:03:43", "end_timestamp": "00:04:25", "start_second": 223, "end_second": 265, "url": "https://www.youtube.com/watch?v=NXaVLXSZdEw&t=223s", "title": "Germany: The discreet lives of the super rich | DW Documentary", "thumbnail": "https://i.ytimg.com/vi/NXaVLXSZdEw/maxresdefault.jpg"} {"video_id": "NXaVLXSZdEw", "text": "years what does it take to get on the list I'd say that the top 150 spots on our list will always go to company owners and heirs even if you're a chief physician you'll need to see a whole lot of patients to become a billionaire managers also have a hard time getting that high up here in Germany there's a debate about whether the heads of DAX companies earn too much but if you compare it to what people in similar posts in the US the UK or China earn it's peanuts so that alone can't ever make you one of the truly ultra", "start_timestamp": "00:04:25", "end_timestamp": "00:05:06", "start_second": 265, "end_second": 306, "url": 
"https://www.youtube.com/watch?v=NXaVLXSZdEw&t=265s", "title": "Germany: The discreet lives of the super rich | DW Documentary", "thumbnail": "https://i.ytimg.com/vi/NXaVLXSZdEw/maxresdefault.jpg"} {"video_id": "NXaVLXSZdEw", "text": "rich creating a list of the wealthiest people is especially difficult in Germany though not because there aren't enough of them [Music] he somehow managed to amass this huge Empire and fortune in just a few years his wealth is estimated to be around four billion he's one of the least known super-rich Germans would you let us interview him at home they prefer to fly under the radar to avoid envy the super-rich don't like to sell themselves so to speak you can normally only get interviews inside the homes", "start_timestamp": "00:05:06", "end_timestamp": "00:05:54", "start_second": 306, "end_second": 354, "url": "https://www.youtube.com/watch?v=NXaVLXSZdEw&t=306s", "title": "Germany: The discreet lives of the super rich | DW Documentary", "thumbnail": "https://i.ytimg.com/vi/NXaVLXSZdEw/maxresdefault.jpg"} {"video_id": "NXaVLXSZdEw", "text": "of the second tier once there you'll always get guys who get a kick out of publicity like Mr. Maschmeyer and such getting a good shot of mr. 
Maschmeyer is never a problem but the real money keeps itself hidden the super-rich are trying to go unnoticed sometimes they even try to hide there are no photos of several people on our list you won't find a single picture if you go online and google them there hasn't been a photo of the Reimanns Germany's richest family for decades many stay hidden because they want to live normal lives", "start_timestamp": "00:05:54", "end_timestamp": "00:06:33", "start_second": 354, "end_second": 393, "url": "https://www.youtube.com/watch?v=NXaVLXSZdEw&t=354s", "title": "Germany: The discreet lives of the super rich | DW Documentary", "thumbnail": "https://i.ytimg.com/vi/NXaVLXSZdEw/maxresdefault.jpg"} {"video_id": "NXaVLXSZdEw", "text": "and think that they won't be able to do so if they're known to be multimillionaires for months our interview requests were rejected agreed film shoots were canceled last minute none of the rich wanted to talk to us about money finally we got lucky in the financial hub Frankfurt here in a prime location tucked behind the bank towers sits the asset management company Focam which manages German business families' fortunes its chairman Christian von Bechtolsheim provided some insight as to why rich Germans are so shy", "start_timestamp": "00:06:33", "end_timestamp": "00:07:13", "start_second": 393, "end_second": 433, "url": "https://www.youtube.com/watch?v=NXaVLXSZdEw&t=393s", "title": "Germany: The discreet lives of the super rich | DW Documentary", "thumbnail": "https://i.ytimg.com/vi/NXaVLXSZdEw/maxresdefault.jpg"} {"video_id": "NXaVLXSZdEw", "text": "many wealthy Germans are reluctant about stepping out into the public eye because they're afraid they could be seen negatively they ask themselves what do I get from showing myself to the public it doesn't give me anything on the contrary it could lead to some crazy person taking note and breaking into my home or kidnapping one of my children and those fears are not 
unwarranted then there's also the fact that many heirs are inheriting fortunes that are somehow tainted by or related to the Third Reich the upper", "start_timestamp": "00:07:13", "end_timestamp": "00:07:56", "start_second": 433, "end_second": 476, "url": "https://www.youtube.com/watch?v=NXaVLXSZdEw&t=433s", "title": "Germany: The discreet lives of the super rich | DW Documentary", "thumbnail": "https://i.ytimg.com/vi/NXaVLXSZdEw/maxresdefault.jpg"} {"video_id": "NXaVLXSZdEw", "text": "tier of rich Germans avoids publicity as if it were the plague but what are they so afraid of I asked a number of my friends whether they'd like to be interviewed for this film and each one said no they'd say someone else can do that better than me I can't do it right and I might come across wrong they think they'd have much more to lose than to gain [Music] after a lot of back and forth with his press team one ultra-rich German did agree to meet us Michael Otto is the chairman of the Supervisory Board of the Otto Group and", "start_timestamp": "00:07:56", "end_timestamp": "00:08:38", "start_second": 476, "end_second": 518, "url": "https://www.youtube.com/watch?v=NXaVLXSZdEw&t=476s", "title": "Germany: The discreet lives of the super rich | DW Documentary", "thumbnail": "https://i.ytimg.com/vi/NXaVLXSZdEw/maxresdefault.jpg"} {"video_id": "NXaVLXSZdEw", "text": "one of the ten richest Germans we asked him why Germans are reluctant to show off their wealth doing so would lead to envy in the u.s. 
achievement and wealth are seen in a highly positive way but here they carry a bitter aftertaste where does his fortune come from how did he get his wealth that scares some of them I think a lot of people find their own wealth a little nauseating Dirk Rossmann grew up poor his mother ran a small drugstore in post-war Hanover her", "start_timestamp": "00:08:38", "end_timestamp": "00:09:25", "start_second": 518, "end_second": 565, "url": "https://www.youtube.com/watch?v=NXaVLXSZdEw&t=518s", "title": "Germany: The discreet lives of the super rich | DW Documentary", "thumbnail": "https://i.ytimg.com/vi/NXaVLXSZdEw/maxresdefault.jpg"} {"video_id": "NXaVLXSZdEw", "text": "son had bigger plans in 1972 the idea of opening the first self-service drugstore in Germany came to him today he's a multi-billionaire why was he happy to step into the spotlight in the early years it was just about getting the name rossmann out there so when I was invited to a talk show on a small regional channel I liked going because I thought free publicity for my company but then two or three years ago I started to understand that this slightly flabby balding man whose teeth aren't perfect was hungry for recognition", "start_timestamp": "00:09:25", "end_timestamp": "00:10:06", "start_second": 565, "end_second": 606, "url": "https://www.youtube.com/watch?v=NXaVLXSZdEw&t=565s", "title": "Germany: The discreet lives of the super rich | DW Documentary", "thumbnail": "https://i.ytimg.com/vi/NXaVLXSZdEw/maxresdefault.jpg"} {"video_id": "NXaVLXSZdEw", "text": "himself back then I thought I was stepping into the limelight to promote the company but everyone rationalizes their motives and I did too within 40 years Rossmann became the most profitable drugstore chain in Europe with stores in six countries in 2018 a total of 56 thousand people were working for the chain Rossmann is active in other business areas as well 
and he speculates on the stock market I have a couple of private equity investments the largest is valued at between 80 and 100", "start_timestamp": "00:10:06", "end_timestamp": "00:11:01", "start_second": 606, "end_second": 661, "url": "https://www.youtube.com/watch?v=NXaVLXSZdEw&t=606s", "title": "Germany: The discreet lives of the super rich | DW Documentary", "thumbnail": "https://i.ytimg.com/vi/NXaVLXSZdEw/maxresdefault.jpg"} {"video_id": "NXaVLXSZdEw", "text": "million depending on the spot rate so it's a fair sum there are also shares in different industries you're still referred to as an SME a small medium enterprise right why do you think that is I don't really know with 55,000 employees you're not really a medium enterprise anymore you're in another league we're back at Manager magazine in Hamburg where photos for a special issue are being selected the list of the richest Germans includes a notably high", "start_timestamp": "00:11:01", "end_timestamp": "00:11:42", "start_second": 661, "end_second": 702, "url": "https://www.youtube.com/watch?v=NXaVLXSZdEw&t=661s", "title": "Germany: The discreet lives of the super rich | DW Documentary", "thumbnail": "https://i.ytimg.com/vi/NXaVLXSZdEw/maxresdefault.jpg"} {"video_id": "NXaVLXSZdEw", "text": "number of company owners from the so called medium-sized businesses it's a German peculiarity and it's not the only one the cover story for this current issue is that last year money rained down on Germany's super-rich up here we have the Schaefflers the matriarch and her son who now owns 80% while mom has kept only 20 the Schaefflers have been at the top of our list for many many years I'd have to check exactly how much they're worth but around 20 billion give or take a bit down here we have Simone Bagel-Trah the head of the Henkel clan", "start_timestamp": "00:11:42", "end_timestamp": "00:12:19", 
"start_second": 702, "end_second": 739, "url": "https://www.youtube.com/watch?v=NXaVLXSZdEw&t=702s", "title": "Germany: The discreet lives of the super rich | DW Documentary", "thumbnail": "https://i.ytimg.com/vi/NXaVLXSZdEw/maxresdefault.jpg"} {"video_id": "NXaVLXSZdEw", "text": "she's the first and so far only woman chairman of the Supervisory Board of one of the 30 DAX companies Germany's economy is still extremely male-dominated what's interesting is if you compare our list here with a list of the ultra-rich in the US we have a lot of old money old companies that have been around for decades in the US you have all those lads from Facebook Google snapchat and so forth that have bubbled up to the top of the list we don't have that type of thing here and compared with other countries Germans are very", "start_timestamp": "00:12:19", "end_timestamp": "00:12:53", "start_second": 739, "end_second": 773, "url": "https://www.youtube.com/watch?v=NXaVLXSZdEw&t=739s", "title": "Germany: The discreet lives of the super rich | DW Documentary", "thumbnail": "https://i.ytimg.com/vi/NXaVLXSZdEw/maxresdefault.jpg"} {"video_id": "NXaVLXSZdEw", "text": "reticent about showing their wealth very few Germans sail around in boats like this usually that's Americans Russians Chinese and so on here you don't really show your money you might have various houses villas and such but there's likely to be a Volkswagen Passat parked out front it seems the average ultra wealthy German is really inconspicuous unlike in the US athletes actors and TV personalities rarely make it onto the German list even though we put a great deal of love and sweat into estimating these fortunes they're", "start_timestamp": "00:12:53", "end_timestamp": "00:13:31", "start_second": 773, "end_second": 811, "url": "https://www.youtube.com/watch?v=NXaVLXSZdEw&t=773s", "title": "Germany: The discreet lives of the super rich | DW Documentary", "thumbnail": 
"https://i.ytimg.com/vi/NXaVLXSZdEw/maxresdefault.jpg"} {"video_id": "NXaVLXSZdEw", "text": "probably much bigger especially if we're talking about urban real estate prices have exploded over the past 10 years a lot of people have doubled their property assets so if you started with five billion in that market you're likely to have ten or fifteen billion today money makes money but while rich Germans fortunes have exploded since the financial crisis due to the increase in value of real estate stocks and assets those with average incomes have had to swallow losses even liberal economics institutes are concerned about social", "start_timestamp": "00:13:31", "end_timestamp": "00:14:07", "start_second": 811, "end_second": 847, "url": "https://www.youtube.com/watch?v=NXaVLXSZdEw&t=811s", "title": "Germany: The discreet lives of the super rich | DW Documentary", "thumbnail": "https://i.ytimg.com/vi/NXaVLXSZdEw/maxresdefault.jpg"} {"video_id": "NXaVLXSZdEw", "text": "inequality in Germany how do the super-rich see this disparity the headline in the newspaper reads the rich are getting richer and richer which is true the rich are getting richer yes it's true but it's also false there are 20 million citizens in Germany who have assets worth between 100,000 and 1 million so millions of people are getting richer now the rich are getting richer even faster because one factor is probably that they can dedicate much more time to increasing their wealth the Left Party would likely say split up", "start_timestamp": "00:14:07", "end_timestamp": "00:14:48", "start_second": 847, "end_second": 888, "url": "https://www.youtube.com/watch?v=NXaVLXSZdEw&t=847s", "title": "Germany: The discreet lives of the super rich | DW Documentary", "thumbnail": "https://i.ytimg.com/vi/NXaVLXSZdEw/maxresdefault.jpg"} {"video_id": "NXaVLXSZdEw", "text": "your wealth but my response is I also do things for the world in which I live I don't just take I'm not a socialist though I can 
only do things because I have things Michael Otto is one of the rich whose wealth has increased he successfully transitioned his mail-order business into a digital enterprise over 120 companies now belong to the Otto Group we wanted to know how Otto sees the debates about rich and poor do the rich understand the worries of the poor when people", "start_timestamp": "00:14:48", "end_timestamp": "00:15:28", "start_second": 888, "end_second": 928, "url": "https://www.youtube.com/watch?v=NXaVLXSZdEw&t=888s", "title": "Germany: The discreet lives of the super rich | DW Documentary", "thumbnail": "https://i.ytimg.com/vi/NXaVLXSZdEw/maxresdefault.jpg"} {"video_id": "NXaVLXSZdEw", "text": "talk about those on top who don't understand those at the bottom I wouldn't say that applies to me because I was not born rich I came to Hamburg as a refugee from West Prussia and my father had to start from scratch that's why I absolutely do understand people living in poverty today on the other hand I'd also say that if Germany is getting more and more millionaires people with small or medium-sized businesses because generally the millionaires in question did build a business they now run then I think that's great", "start_timestamp": "00:15:28", "end_timestamp": "00:16:05", "start_second": 928, "end_second": 965, "url": "https://www.youtube.com/watch?v=NXaVLXSZdEw&t=928s", "title": "Germany: The discreet lives of the super rich | DW Documentary", "thumbnail": "https://i.ytimg.com/vi/NXaVLXSZdEw/maxresdefault.jpg"} {"video_id": "NXaVLXSZdEw", "text": "because they are the people creating jobs for me that's what should matter in this debate we should focus on that and not rich versus poor ensuring that the wealth of rich families can increase despite current zero interest rates is the mission of Christian von Bechtolsheim's company each day he and his employees send out investment opportunities from banks 
and other entities given exclusively to his wealthy clients this is a way to present an offer to just a handful of valued clients and focus", "start_timestamp": "00:16:05", "end_timestamp": "00:16:49", "start_second": 965, "end_second": 1009, "url": "https://www.youtube.com/watch?v=NXaVLXSZdEw&t=965s", "title": "Germany: The discreet lives of the super rich | DW Documentary", "thumbnail": "https://i.ytimg.com/vi/NXaVLXSZdEw/maxresdefault.jpg"} {"video_id": "NXaVLXSZdEw", "text": "Oh [Music] is it fair that rich Germans are able to increase their wealth while the rest of the population gets left behind [Music] I think it's difficult to apply terms like equity and fairness to the distribution of wealth I would say that here in Germany we're better off than ever before and people here live in above-average circumstances nonetheless we have to make sure the gap between rich and poor doesn't get too wide because we don't want social conflicts like in the US or Latin America to happen here the company", "start_timestamp": "00:16:49", "end_timestamp": "00:17:32", "start_second": 1009, "end_second": 1052, "url": "https://www.youtube.com/watch?v=NXaVLXSZdEw&t=1009s", "title": "Germany: The discreet lives of the super rich | DW Documentary", "thumbnail": "https://i.ytimg.com/vi/NXaVLXSZdEw/maxresdefault.jpg"} {"video_id": "NXaVLXSZdEw", "text": "Focam is a so-called multi-family office family offices take care of the needs of very wealthy families managing and increasing their assets it only makes sense to use family offices if your wealth is upward of 30 million euros who can afford such a thing well a family office like ours obviously can't discuss its clients we have well known German business families that's our typical client profile someone who is or was a company owner thinks differently to someone who has spent their life as an", "start_timestamp": "00:17:32", "end_timestamp": "00:18:09", "start_second": 1052, "end_second": 1089, 
"url": "https://www.youtube.com/watch?v=NXaVLXSZdEw&t=1052s", "title": "Germany: The discreet lives of the super rich | DW Documentary", "thumbnail": "https://i.ytimg.com/vi/NXaVLXSZdEw/maxresdefault.jpg"} {"video_id": "NXaVLXSZdEw", "text": "employee or they are families who have had money for a long time so wealth is so to speak in their genes and then there are families who have just come into their wealth people who are still pumping with entrepreneurial energy they're usually quite different from heirs [Music] Rainer Schaller seems to have plenty of this entrepreneurial energy his company is McFit Europe's largest fitness studio chain its headquarters are in an old baking factory in Berlin Schaller", "start_timestamp": "00:18:09", "end_timestamp": "00:18:52", "start_second": 1089, "end_second": 1132, "url": "https://www.youtube.com/watch?v=NXaVLXSZdEw&t=1089s", "title": "Germany: The discreet lives of the super rich | DW Documentary", "thumbnail": "https://i.ytimg.com/vi/NXaVLXSZdEw/maxresdefault.jpg"} {"video_id": "NXaVLXSZdEw", "text": "started off small but today his wealth is valued at 250 million euros [Music] I was born near Bamberg and grew up in a small village normally in villages you do the sports available and when I was around 15 or 16 my role models were Arnold Schwarzenegger and Stallone that's how I came to the fitness world Schaller went from secondary school to complete a salesman apprenticeship and became the manager of three supermarkets then he decided to start something new when I was 25 I decided to switch to the", "start_timestamp": "00:18:52", "end_timestamp": "00:19:34", "start_second": 1132, "end_second": 1174, "url": "https://www.youtube.com/watch?v=NXaVLXSZdEw&t=1132s", "title": "Germany: The discreet lives of the super rich | DW Documentary", "thumbnail": "https://i.ytimg.com/vi/NXaVLXSZdEw/maxresdefault.jpg"} {"video_id": "NXaVLXSZdEw", "text": "fitness 
industry my idea was to open a gym where anyone could train no matter their age or income that was the initial idea and I had big goals I wanted to be number one in Europe but that's all I had I didn't have financing it was 1997 when Schaller opened his first fitness studio in Würzburg close to his home village he made use of some unusual methods Würzburg was a big step for me I opened my first fitness studio there under the slogan now also in Würzburg which people saw through as", "start_timestamp": "00:19:34", "end_timestamp": "00:20:12", "start_second": 1174, "end_second": 1212, "url": "https://www.youtube.com/watch?v=NXaVLXSZdEw&t=1174s", "title": "Germany: The discreet lives of the super rich | DW Documentary", "thumbnail": "https://i.ytimg.com/vi/NXaVLXSZdEw/maxresdefault.jpg"} {"video_id": "NXaVLXSZdEw", "text": "a marketing gag because customers came to me and asked where else we had studios so I gave it some thought and came up with the next marketing gag soon also in Erlangen anything that did the job and also put me on track to going from Würzburg to Erlangen and then it grew from there ten years later Rainer Schaller reached his goal he's number one in Europe and still expanding McFit now owns ten fitness companies as well as its own model agency meanwhile Schaller is getting ready to open fitness studios in the US getting to be", "start_timestamp": "00:20:12", "end_timestamp": "00:20:55", "start_second": 1212, "end_second": 1255, "url": "https://www.youtube.com/watch?v=NXaVLXSZdEw&t=1212s", "title": "Germany: The discreet lives of the super rich | DW Documentary", "thumbnail": "https://i.ytimg.com/vi/NXaVLXSZdEw/maxresdefault.jpg"} {"video_id": "NXaVLXSZdEw", "text": "number one is much easier than staying number one I think if you want to be successful you need to be a bit of an alpha animal an investor would probably pick a brand and say okay that could work I like it although it's probably two 
steps too far for many but if someone doesn't want to get involved with us because of it so what I'm convinced that you have an easier time if you fought your way to the top and to success clients feel it and so do partners I think that's our situation which is why I can also imagine that someone who", "start_timestamp": "00:20:55", "end_timestamp": "00:21:38", "start_second": 1255, "end_second": 1298, "url": "https://www.youtube.com/watch?v=NXaVLXSZdEw&t=1255s", "title": "Germany: The discreet lives of the super rich | DW Documentary", "thumbnail": "https://i.ytimg.com/vi/NXaVLXSZdEw/maxresdefault.jpg"} {"video_id": "NXaVLXSZdEw", "text": "inherits something or takes over or even has to take over a company in the second or third generation will have a much tougher time but there are plenty of heirs in Germany huge fortunes and thousands of companies have been passed from one generation to the next there are heirs who don't want to and others who shouldn't take over their parents businesses succession is hugely important among Germany's richest Michael Otto inherited the mail-order company from his father and successfully managed", "start_timestamp": "00:21:38", "end_timestamp": "00:22:13", "start_second": 1298, "end_second": 1333, "url": "https://www.youtube.com/watch?v=NXaVLXSZdEw&t=1298s", "title": "Germany: The discreet lives of the super rich | DW Documentary", "thumbnail": "https://i.ytimg.com/vi/NXaVLXSZdEw/maxresdefault.jpg"} {"video_id": "NXaVLXSZdEw", "text": "it it will be harder for my children though because now the Otto Group has a hundred and twenty three companies in over 30 countries I know every single company either because I was involved when it was founded or because I led the takeover negotiation but my children don't yet know that many companies Otto's children have opted against direct succession I think it's important to give your children the option without pressuring 
them so you don't force them into a role I think that mistake is made", "start_timestamp": "00:22:13", "end_timestamp": "00:22:51", "start_second": 1333, "end_second": 1371, "url": "https://www.youtube.com/watch?v=NXaVLXSZdEw&t=1333s", "title": "Germany: The discreet lives of the super rich | DW Documentary", "thumbnail": "https://i.ytimg.com/vi/NXaVLXSZdEw/maxresdefault.jpg"} {"video_id": "NXaVLXSZdEw", "text": "often and I'd say it's bad for both the children and the company are you a bit tired yes it's been an exhausting couple of days in ten minutes I'll be fine again it was just bam bam bam bam Dirk Rossmann also spent a lot of time considering who his successor would be it's now decided Raoul Rossmann the younger of his two sons will take over as manager of the drugstore chain do we actually sell much yarn it's not exactly part of a", "start_timestamp": "00:22:51", "end_timestamp": "00:23:31", "start_second": 1371, "end_second": 1411, "url": "https://www.youtube.com/watch?v=NXaVLXSZdEw&t=1371s", "title": "Germany: The discreet lives of the super rich | DW Documentary", "thumbnail": "https://i.ytimg.com/vi/NXaVLXSZdEw/maxresdefault.jpg"} {"video_id": "NXaVLXSZdEw", "text": "drugstores range the trend is sort of over it was big in 2016 already fading in 2017 and it's been stagnating in 2018 but it's not the worst product we have and it still brings in some revenue Raoul's father had to show him the appeal of being in charge when the boys got more engaged I thought oh now we can't look as if the drugstore business is only about making money so I showed them how we're also active in social issues in Africa and so forth I've always showed my sons that what we do isn't just about making money it's also about being", "start_timestamp": "00:23:31", "end_timestamp": "00:24:08", "start_second": 1411, "end_second": 1448, "url": "https://www.youtube.com/watch?v=NXaVLXSZdEw&t=1411s", "title": 
"Germany: The discreet lives of the super rich | DW Documentary", "thumbnail": "https://i.ytimg.com/vi/NXaVLXSZdEw/maxresdefault.jpg"} {"video_id": "NXaVLXSZdEw", "text": "responsible for others as a child I wanted to become a film director that was always my dream it still is today you might be thinking no I don't want him to be a director I want him in the company so I said well go ahead and become a director but being a director of such a big company is also exciting I didn't manipulate him I'm asked whether I feel competitive toward my father and sure he built up this big company that I'll only take over but it's really difficult to keep something", "start_timestamp": "00:24:08", "end_timestamp": "00:24:51", "start_second": 1448, "end_second": 1491, "url": "https://www.youtube.com/watch?v=NXaVLXSZdEw&t=1448s", "title": "Germany: The discreet lives of the super rich | DW Documentary", "thumbnail": "https://i.ytimg.com/vi/NXaVLXSZdEw/maxresdefault.jpg"} {"video_id": "NXaVLXSZdEw", "text": "going these days the founding period has its own challenges and just having the idea of founding a self-service drugstore was hugely innovative but the fight to survive has gotten tougher and that's the one I'm in now everyone in my family wants to be good at sports my father and I battle each other in tennis we all compete with one another and that's also shaped our view of life or mine at least despite their competitiveness the Rossmann family reached a harmonious agreement", "start_timestamp": "00:24:51", "end_timestamp": "00:25:35", "start_second": 1491, "end_second": 1535, "url": "https://www.youtube.com/watch?v=NXaVLXSZdEw&t=1491s", "title": "Germany: The discreet lives of the super rich | DW Documentary", "thumbnail": "https://i.ytimg.com/vi/NXaVLXSZdEw/maxresdefault.jpg"} {"video_id": "NXaVLXSZdEw", "text": "with regard to succession playing against Raoul wipes me out I normally 
prefer playing doubles that's much more appropriate for men my age I'm starting to worry a bit about you you worry about me with your inheritance I wouldn't worry I'd be looking forward to it selecting heirs and successors is usually not quite as amicable as at the Rossmanns Christian von Bechtolsheim has seen many inheritance disputes in rich families it's his job to preserve the family's assets and protect them from all sorts of dangers maintaining a family fortune", "start_timestamp": "00:25:35", "end_timestamp": "00:26:23", "start_second": 1535, "end_second": 1583, "url": "https://www.youtube.com/watch?v=NXaVLXSZdEw&t=1535s", "title": "Germany: The discreet lives of the super rich | DW Documentary", "thumbnail": "https://i.ytimg.com/vi/NXaVLXSZdEw/maxresdefault.jpg"} {"video_id": "NXaVLXSZdEw", "text": "over several generations is incredibly difficult because it's under threat from being divvied up through inheritance from wealth disputes from expropriation from wars or simply from stupidity most families will have one or all of these happen to them only a handful of families have managed to stay more or less afloat over centuries but those right on top have been switched out again and again von Bechtolsheim speaks from personal experience his own family's history dates back 900 years I have a horribly long name at least on my", "start_timestamp": "00:26:23", "end_timestamp": "00:27:01", "start_second": 1583, "end_second": 1621, "url": "https://www.youtube.com/watch?v=NXaVLXSZdEw&t=1583s", "title": "Germany: The discreet lives of the super rich | DW Documentary", "thumbnail": "https://i.ytimg.com/vi/NXaVLXSZdEw/maxresdefault.jpg"} {"video_id": "NXaVLXSZdEw", "text": "birth certificate there's my six given names Christian Lawton Ludwig William and Maria as a good Catholic followed by Freiherr von Mauchenheim genannt Bechtolsheim at work I'm normally called mr. 
von Bechtolsheim and at social events baron or lord baron von Bechtolsheim is an indirect successor of the Fuggers the richest family in German history when does wealth begin for you if you're asking me at what point I consider someone to be truly rich I would say over a hundred million I am definitely not rich I've", "start_timestamp": "00:27:01", "end_timestamp": "00:27:52", "start_second": 1621, "end_second": 1672, "url": "https://www.youtube.com/watch?v=NXaVLXSZdEw&t=1621s", "title": "Germany: The discreet lives of the super rich | DW Documentary", "thumbnail": "https://i.ytimg.com/vi/NXaVLXSZdEw/maxresdefault.jpg"} {"video_id": "NXaVLXSZdEw", "text": "escaped nakute but I'm comfortable you and our family is comfortable and I'm certainly not complaining when you have a family history as long as mine your family has seen everything near bankruptcy years overflowing with money and years when a lot was lost he takes us to the hunting lodge of the von Bechtolsheim family in Thuringia it looked like the lodge had been lost forever during the division of Germany the house built in 1892 was a hunting lodge for my great-great grand uncle's it's been in the family ever", "start_timestamp": "00:27:52", "end_timestamp": "00:28:38", "start_second": 1672, "end_second": 1718, "url": "https://www.youtube.com/watch?v=NXaVLXSZdEw&t=1672s", "title": "Germany: The discreet lives of the super rich | DW Documentary", "thumbnail": "https://i.ytimg.com/vi/NXaVLXSZdEw/maxresdefault.jpg"} {"video_id": "NXaVLXSZdEw", "text": "since except for a short interval it was expropriated in 1952 and then restituted in 1992 and since then I've owned it the family's hunting lodge survived expropriation and socialism without much damage today von Bechtolsheim also owns hundreds of hectares of forest nearby and regularly invites business acquaintances for hunts I'd say many of the trophies are mine but several also come from my father some from my great grand
uncle that leopard down there I didn't shoot him that was my great grand uncle and then my dogs chewed off his", "start_timestamp": "00:28:38", "end_timestamp": "00:29:28", "start_second": 1718, "end_second": 1768, "url": "https://www.youtube.com/watch?v=NXaVLXSZdEw&t=1718s", "title": "Germany: The discreet lives of the super rich | DW Documentary", "thumbnail": "https://i.ytimg.com/vi/NXaVLXSZdEw/maxresdefault.jpg"} {"video_id": "NXaVLXSZdEw", "text": "ears so he no longer has his former beauty but he's too precious not to keep a recurring topic in the special issues of the manager magazine are the super-wealthy's networks there are larger and smaller networks and there are a lot of them and most even we journalists don't know about in high society there are certain typical hobbies horse racing hockey okay a bit of tennis though that's almost old-school they'll meet in the boxes at major football stadiums because of course they're all football fans and", "start_timestamp": "00:29:28", "end_timestamp": "00:30:05", "start_second": 1768, "end_second": 1805, "url": "https://www.youtube.com/watch?v=NXaVLXSZdEw&t=1768s", "title": "Germany: The discreet lives of the super rich | DW Documentary", "thumbnail": "https://i.ytimg.com/vi/NXaVLXSZdEw/maxresdefault.jpg"} {"video_id": "NXaVLXSZdEw", "text": "football is huge a lot of networking happens there that's like their marketplace they mingle and meet there more than at so-called parties for the rich [Applause] Dirk oh yeah good we always have a lot of employees here we have a lot of friends in the box because Christian Pfeiffer is here today Germany's most famous criminologist sometimes someone from politics comes by Christian Wulff and Bettina are part of my close circle of friends so there's always lots going on here [Applause]", "start_timestamp": "00:30:05", "end_timestamp": "00:30:59", "start_second": 1805, "end_second": 1859, "url":
"https://www.youtube.com/watch?v=NXaVLXSZdEw&t=1805s", "title": "Germany: The discreet lives of the super rich | DW Documentary", "thumbnail": "https://i.ytimg.com/vi/NXaVLXSZdEw/maxresdefault.jpg"} {"video_id": "NXaVLXSZdEw", "text": "Rainer Schaller is opening a new club a new branch of his fitness empire Schaller also has become a member in the network of important people from the sports business and entertainment industries true it's a closed circle that's hard to get into sure money attracts success and success attracts money I do believe that I've experienced myself how you can suddenly connect with people previously out of reach I am in a different position [Applause] Dirk Rossmann is ready to leave his box", "start_timestamp": "00:30:59", "end_timestamp": "00:32:05", "start_second": 1859, "end_second": 1925, "url": "https://www.youtube.com/watch?v=NXaVLXSZdEw&t=1859s", "title": "Germany: The discreet lives of the super rich | DW Documentary", "thumbnail": "https://i.ytimg.com/vi/NXaVLXSZdEw/maxresdefault.jpg"} {"video_id": "NXaVLXSZdEw", "text": "he wants to celebrate the victory with his friend the hearing aid company owner Martin Kind Kind is also co-owner and president of Hannover 96 his box is located on the other end of the exclusive VIP area I told you we'd win today I told him that if we didn't win he'd pay 10 million that was the bet right - halfway decent many networks are important for business but do the rich also have political clout can political influence be bought in Germany the best way available for
super rich people in Germany to exert influence is the number of employees working in their companies someone who owns a company with a hundred thousand employees or let's say less maybe fifteen thousand employees can go to business associations and say finally go ahead and pass that law but that will cost me or an even better argument is that will cost you two thousand jobs in that area but there are no super rich people who regularly call up the ministers or ms merkel and say what needs to happen next", "start_timestamp": "00:32:52", "end_timestamp": "00:33:25", "start_second": 1972, "end_second": 2005, "url": "https://www.youtube.com/watch?v=NXaVLXSZdEw&t=1972s", "title": "Germany: The discreet lives of the super rich | DW Documentary", "thumbnail": "https://i.ytimg.com/vi/NXaVLXSZdEw/maxresdefault.jpg"} {"video_id": "NXaVLXSZdEw", "text": "which tax laws they'd like and so on that's not how things work here in Germany the deadline for the special issue is approaching the heart of it is the ranking of the 1001 richest Germans what do the rich think of this ranking is a ranking suit those rankings are for entertainment all their scraped together sometimes using stock market quotations but they aren't reliable in any real way and five Gunson fontana giving and visit roughly and also craft as each house I don't think much of these rankings and I didn't want to be included because it", "start_timestamp": "00:33:25", "end_timestamp": "00:34:12", "start_second": 2005, "end_second": 2052, "url": "https://www.youtube.com/watch?v=NXaVLXSZdEw&t=2005s", "title": "Germany: The discreet lives of the super rich | DW Documentary", "thumbnail": "https://i.ytimg.com/vi/NXaVLXSZdEw/maxresdefault.jpg"} {"video_id": "NXaVLXSZdEw", "text": "creates this impression of rich people being like Scrooge McDuck that they have these money bins in which they wallow in their coins the fact that I'm at the top of the rankings including of the wealthiest Germans does make me feel proud and 
I do read the manager magazine every now and then but I've never read the ranking I don't know whether I'm in it I don't need that I have other goals one of these is that Rainer Schaller now wants to open the world's largest Fitness Centre in North rhine-westphalia", "start_timestamp": "00:34:12", "end_timestamp": "00:34:49", "start_second": 2052, "end_second": 2089, "url": "https://www.youtube.com/watch?v=NXaVLXSZdEw&t=2052s", "title": "Germany: The discreet lives of the super rich | DW Documentary", "thumbnail": "https://i.ytimg.com/vi/NXaVLXSZdEw/maxresdefault.jpg"} {"video_id": "NXaVLXSZdEw", "text": "the name is Japanese and means the future we think it's a perfect fit for the whole concept and vision because what we want to create here is truly unique and has never been done before our goal is to become the world's largest fitness center Schaller thinks he's found the perfect location to realize his vision in Oberhausen he's rented an old factory complex at the moment the space is still being used to make steel parts but before long thousands of customers will be exercising here the", "start_timestamp": "00:34:49", "end_timestamp": "00:35:32", "start_second": 2089, "end_second": 2132, "url": "https://www.youtube.com/watch?v=NXaVLXSZdEw&t=2089s", "title": "Germany: The discreet lives of the super rich | DW Documentary", "thumbnail": "https://i.ytimg.com/vi/NXaVLXSZdEw/maxresdefault.jpg"} {"video_id": "NXaVLXSZdEw", "text": "emergency exits path will be here and the offices on top Schaller experienced his most traumatic experience to date in the area in 2010 21 people died and 54 were injured in a stampede at the Love Parade in Duisburg Schaller had been the parade's organizer the cause of the panic has still not been conclusively determined how does an entrepreneur in the fast lane deal with that kind of tragedy an event like that will haunt you always for the rest of your life I've got a
moral responsibility I was the organizer if I could turn back", "start_timestamp": "00:35:32", "end_timestamp": "00:36:10", "start_second": 2132, "end_second": 2170, "url": "https://www.youtube.com/watch?v=NXaVLXSZdEw&t=2132s", "title": "Germany: The discreet lives of the super rich | DW Documentary", "thumbnail": "https://i.ytimg.com/vi/NXaVLXSZdEw/maxresdefault.jpg"} {"video_id": "NXaVLXSZdEw", "text": "time I would do it immediately given the scale of what happened but you can't undo it you have to try to deal with what happened Dirk Rossmann has also seen setbacks and crises in the 90s we expanded dramatically into the Czech Republic Hungary and Poland I was also speculating on the stock markets a little too much and neglecting the company then in 1996 we suddenly had a loss of 12 million Deutschmarks the banks don't joke around if you're highly indebted and then you come in with huge", "start_timestamp": "00:36:10", "end_timestamp": "00:36:46", "start_second": 2170, "end_second": 2206, "url": "https://www.youtube.com/watch?v=NXaVLXSZdEw&t=2170s", "title": "Germany: The discreet lives of the super rich | DW Documentary", "thumbnail": "https://i.ytimg.com/vi/NXaVLXSZdEw/maxresdefault.jpg"} {"video_id": "NXaVLXSZdEw", "text": "losses that was critical and then I had a heart attack in 96 but everyone knows that life can get tough and things got very tough back then so I dialed back a bit including stock speculation I sold them all and thought the only thing on my table now is pulling the rossmann drugstore business through it was the right move to focus on one thing and not do so many different things Rossmann emerged from that crisis stronger than ever he started speculating again but so he
"Germany: The discreet lives of the super rich | DW Documentary", "thumbnail": "https://i.ytimg.com/vi/NXaVLXSZdEw/maxresdefault.jpg"} {"video_id": "NXaVLXSZdEw", "text": "assures us only with his private wealth so let's see where Gazprom is I don't have a laptop I normally do this via n-tv teletext page 254 yeah here it says Gazprom at 3.75 I could already sell those now I bought 250,000 of those so 250,000 times 20 cents that would give me 50,000 euros profit but I won't sell I think it's gonna rise to 4 euros sometimes it works sometimes it doesn't but I enjoy it that's why I don't play the lottery because I find that boring can a large fortune also be a burden I'd say that for most people", "start_timestamp": "00:37:31", "end_timestamp": "00:38:27", "start_second": 2251, "end_second": 2307, "url": "https://www.youtube.com/watch?v=NXaVLXSZdEw&t=2251s", "title": "Germany: The discreet lives of the super rich | DW Documentary", "thumbnail": "https://i.ytimg.com/vi/NXaVLXSZdEw/maxresdefault.jpg"} {"video_id": "NXaVLXSZdEw", "text": "although they wouldn't voluntarily give away their money the fact that they want it to grow can be a burden they are controlled by their own assets for example they'll move to Switzerland or somewhere to save on taxes and give up their entire circle of friends and basically become a slave to their fortune in my opinion that's absurd considering the conditions we currently have in Germany conditions in Germany are currently more favorable than ever for the rich they pay significantly less tax than they did
14 of the german constitution states property entails obligations its use shall also serve the public good do the rich in Germany live up to this responsibility you finish fish dish I think it is important if you're successful if you're lucky enough to have reached a certain level of prosperity and wealth to give something back to society Michele Otto is one of Germany's biggest donors his money helps fund the environmental cultural and social", "start_timestamp": "00:39:06", "end_timestamp": "00:39:51", "start_second": 2346, "end_second": 2391, "url": "https://www.youtube.com/watch?v=NXaVLXSZdEw&t=2346s", "title": "Germany: The discreet lives of the super rich | DW Documentary", "thumbnail": "https://i.ytimg.com/vi/NXaVLXSZdEw/maxresdefault.jpg"} {"video_id": "NXaVLXSZdEw", "text": "sectors like most rich people he prefers to decide himself what he spends his money on rather than leave that to the state Otto like many wealthy Germans donated millions for the construction of the air fill ammonia in Hamburg in Germany wealthy people like to donate and this makes important contributions to society and public life but generally they're against the proposal of redistributing wealth by a higher taxes for the rich in film German businesses would yield to all the demands of let's say miss varnish to the left party then", "start_timestamp": "00:39:51", "end_timestamp": "00:40:28", "start_second": 2391, "end_second": 2428, "url": "https://www.youtube.com/watch?v=NXaVLXSZdEw&t=2391s", "title": "Germany: The discreet lives of the super rich | DW Documentary", "thumbnail": "https://i.ytimg.com/vi/NXaVLXSZdEw/maxresdefault.jpg"} {"video_id": "NXaVLXSZdEw", "text": "millions of people would be happy and things would be good for a while because millions of people would have more money I know but a true redistribution of wealth has never led to more social justice in the long term not in any of the political systems that tried it it led to the impoverishment of these countries the 
landowner Christian von Bechtolsheim sees higher taxes on the rich as dangerous as a philosopher I don't think much about this so-called rich tax for two reasons firstly the terminology alone is stigmatizing and we", "start_timestamp": "00:40:28", "end_timestamp": "00:41:01", "start_second": 2428, "end_second": 2461, "url": "https://www.youtube.com/watch?v=NXaVLXSZdEw&t=2428s", "title": "Germany: The discreet lives of the super rich | DW Documentary", "thumbnail": "https://i.ytimg.com/vi/NXaVLXSZdEw/maxresdefault.jpg"} {"video_id": "NXaVLXSZdEw", "text": "in Germany should avoid that and secondly the rich tax wouldn't do any good on the contrary it would cut into the backbone of the German economy because the typical German rich person is a medium-sized business owner they make up the backbone of the German economy and if we want to destroy that we have no one to blame but ourselves the special issue is ready things have basically stayed the same the rich have a few billion more the richest 1% of Germans now has personal wealth worth a quarter of the country's", "start_timestamp": "00:41:01", "end_timestamp": "00:41:36", "start_second": 2461, "end_second": 2496, "url": "https://www.youtube.com/watch?v=NXaVLXSZdEw&t=2461s", "title": "Germany: The discreet lives of the super rich | DW Documentary", "thumbnail": "https://i.ytimg.com/vi/NXaVLXSZdEw/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "hi there take a look at the following problem on the left right here so you have this quadruped and the goal is to have it walk forward or in any direction as far as possible now usually this is the domain of sort of reinforcement learning so you have inputs which is the sensors of the joints of the quadruped and you have outputs which is how much force you want to put on each of the legs and you have to somehow learn a policy to make it walk forward reinforcement learning does that by sort of trial and error using an", "start_timestamp": "00:00:00",
"end_timestamp": "00:00:36", "start_second": 0, "end_second": 36, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=0s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "environment to learn the policy directly however this paper does something different what it does is it learns a policy that is adaptive during training which basically means that at the beginning of each episode the policy in it is initialized randomly and by policy here we mean a policy network uh policy neural network which you can see at the bottom so that's initialized randomly and then during the episode depending on the input uh this network is changed and adapted in order to achieve high performance so even at test time", "start_timestamp": "00:00:36", "end_timestamp": "00:01:18", "start_second": 36, "end_second": 78, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=36s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "the network is started randomly and then adapted during the episode so this paper deals with this problem and tries to implement this sort of more biologically plausible way of learning a policy adapting to the environment and achieve ultimately good performance in this task and it has some nice property namely that it can deal with these things as you can see here front right leg damage front left leg damage but we'll get to that later but just so you know what's coming so the paper is called meta learning through hebbian plasticity", "start_timestamp": "00:01:18", "end_timestamp": "00:01:58", "start_second": 78, "end_second": 118, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=78s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper 
Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "in random networks by Elias Najarro and Sebastian Risi so we'll go through the paper what it does what evolutionary methods are really briefly which they use what hebbian plasticity is and the difference to classic reinforcement learning and then we'll look at the experiments and that's going to be it if you like content like this as always don't hesitate to subscribe and share it out and tell me what you think in the comments i still read all the comments so i am very interested in what you think about works like this and about the", "start_timestamp": "00:01:58", "end_timestamp": "00:02:33", "start_second": 118, "end_second": 153, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=118s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "video itself okay so they say lifelong learning and adaptability are two defining aspects of biological agents modern reinforcement learning approaches have shown significant progress in solving complex tasks however once training is concluded the found solutions are typically static and incapable of adapting to new information or perturbations so they contrast the two things here reinforcement learning as you know is very powerful in these domains but its goal is to learn a policy and then that policy is fixed and", "start_timestamp": "00:02:33", "end_timestamp": "00:03:11", "start_second": 153, "end_second": 191, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=153s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "it's specific to that particular problem however biological agents you know humans uh animals and so on
they're able to adapt usually very very quickly they give some sort of examples right here like if a if an animal is born it almost immediately knows how to walk um so even if it has some sort of injury even if it has some sort of disability um usually the animal can walk uh pretty much instantly and that means it sort of adapts to the body that it is in sort of reconfigures itself on the fly and that's what we're going", "start_timestamp": "00:03:11", "end_timestamp": "00:03:51", "start_second": 191, "end_second": 231, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=191s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "to explore here so this isn't going to out compete uh rl anytime soon it's just a different way and a biologically more plausible way in order to do that so again they say we still don't know completely how biological brains learn and adapt so efficiently from experience it is believed that synaptic plasticity plays a prominent role in this process and that's why they are using these hebbian learning rules in order to configure the network so let's contrast the two things for a second in reinforcement learning what you have is", "start_timestamp": "00:03:51", "end_timestamp": "00:04:29", "start_second": 231, "end_second": 269, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=231s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "a policy network now the policy network is a neural network that maps sensory inputs to actions okay so you have the observation goes in and out comes an action this is your policy network now during training in reinforcement learning what you do is you have some sort of environment okay this is the environment and you play
this back and forth game with the environment and you try to improve this policy network right here as best as you can in order to achieve a high reward then during testing so this is train", "start_timestamp": "00:04:29", "end_timestamp": "00:05:08", "start_second": 269, "end_second": 308, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=269s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "then during testing you freeze you freeze this network right here so you freeze the network and then you simply play that game and you see how well it does okay so this gives you some sort of reward and that's going to be your testing reward and you know that can be generalization it can be to different environments and so on but the crucial part is that you in train you learn and then you freeze during test in this in this particular paper right here they do something different so let's call that the hebbian plasticity world", "start_timestamp": "00:05:08", "end_timestamp": "00:05:49", "start_second": 308, "end_second": 349, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=308s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "in the hebbian plasticity world again you have your environment and you play this game but you play the game in episodes and at the beginning of each episode you initialize this using some sort of distribution here a normal distribution you initialize the network and then you learn you adapt during the episode you adapt the network to have good performance okay so this thing right here these are the hebian rules so you update the network during the episode and then at the end of the episode you go back you initialize the network", "start_timestamp": 
"00:05:49", "end_timestamp": "00:06:35", "start_second": 349, "end_second": 395, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=349s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "again you start a new episode and you again adapt that randomly initialized network so what's actually learned here isn't the weight of the network what's learned during training is these rules that transform any randomly initialized network into a high performing network now of course you you might just object and say hey wait a minute i can just basically hard code the you know the optimal weights here into these hebian rules like my rules can simply you know not care about the input and simply output whatever good weights", "start_timestamp": "00:06:35", "end_timestamp": "00:07:14", "start_second": 395, "end_second": 434, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=395s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "there are and ultimately that would lead back to rl but as you will be able to see in the experiments they also have some videos provided that i invite you to watch you can really see that the network reconfigures itself first of all at the beginning it reconfigures itself to a good state but then also as the episode is progressing it continuously reconfigures itself depending on the input so this is the real power of these hebbian rules in that during the episode the network can continuously reconfigure itself", "start_timestamp": "00:07:14", "end_timestamp": "00:07:47", "start_second": 434, "end_second": 467, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=434s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)", 
"thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "in order to achieve high rewards so it's not just that i can go from the random initialization to a good performing policy i can adapt that policy depending on what the input is so at test time in this hebbian world what we're going to do is again we are going to freeze the learning rules so you have to kind of rethink we're going to freeze the hebbian rules but still we're going to randomly initialize our policy in each episode and then we're going to change that during the episode okay and then that's ultimately going to", "start_timestamp": "00:07:47", "end_timestamp": "00:08:27", "start_second": 467, "end_second": 507, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=467s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "give us our reward so that the thing that's learned is just something different here you learn the weights directly in the rl setting and in the hebbian plasticity setting you learn the rules to update the weights dynamically depending on the input this is a form of meta learning right it's not exactly but it is a form of meta learning so let's see what those hebbian rules are and you can as again you can see this right here during training so this is one episode and it always starts with these random networks at the beginning", "start_timestamp": "00:08:27", "end_timestamp": "00:09:06", "start_second": 507, "end_second": 546, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=507s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "and then you can see as you progress there is structure emerging and again i'll link to the videos and you can see
that during the episode even this is changing and this is especially visible on their other example that they have here like this this car example so in this car example during the video you'll see that now there's a curve like this and then as imagine you're a driver like there is a kind of a left curve coming and you adjust your mental state let's say to say okay i don't know what's around the", "start_timestamp": "00:09:06", "end_timestamp": "00:09:40", "start_second": 546, "end_second": 580, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=546s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "curve i need to be ready to brake and so on and then there is a straight piece coming and you'll be like well i i see everything you know i can focus on different things you can reconfigure your state in order to adapt to the observation and that's exactly what you'll see in that video is that the weights are continuously updating not so much in these quadrupeds to which we'll get later so these hebbian rules what do they look like these are biologically inspired rules and they say the following so this here is the delta w i j", "start_timestamp": "00:09:40", "end_timestamp": "00:10:17", "start_second": 580, "end_second": 617, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=580s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "and our perspective of policy networks is going to be that this is a neural network as we said and we'll just pick up one layer right here and there is going to be weights right here you know weights from all to all these are going to be fully connected networks and like this and there's going to be neuron i somewhere here and neuron j
somewhere here okay so neuron i and neuron j are going to have a connection together this thing right here and there's going this the question is going to be how do we update that weight from one time step to", "start_timestamp": "00:10:17", "end_timestamp": "00:10:55", "start_second": 617, "end_second": 655, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=617s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "the next remembering the weights here are changed in each time step each time step during the episode we update the weights so how are they going to be updated let's contrast this first to classic reinforcement learning so in classic reinforcement learning we would keep these weights the same during the entire episode and then at the end of the episode right we keep those the same and at the end of the episode we'll get a reward and then we'll go back we'll look back and say how do we need to change the weights", "start_timestamp": "00:10:55", "end_timestamp": "00:11:26", "start_second": 655, "end_second": 686, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=655s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "such that in the next episode the reward will be higher and in again in classic reinforcement learning for example in policy gradient methods you will actually calculate a gradient with respect to these weights right here actually let's let's go into that later when we contrast evolutionary methods so the important part right here is that we change the weights in each time step so how do we change the weights of course we don't have access to the reward right in order to change the weights the reward is going to come into play", "start_timestamp": 
"00:11:26", "end_timestamp": "00:11:59", "start_second": 686, "end_second": 719, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=686s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "when we learn the rules to change the weights but during the episode we don't have the reward at least we assume we only get kind of the reward at the end so we need a different method and the method is going to be the following right here the important things in this formula are going to be so how do we change the weights that's dependent on two quantities that appear during each time step o i and o j and these are going to be the outputs of neuron i and neuron j so how do we change the connection that's going to be dependent", "start_timestamp": "00:11:59", "end_timestamp": "00:12:37", "start_second": 719, "end_second": 757, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=719s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "on the output of neuron i which is here called the presynaptic output and the output of neuron j which is going to be the postsynaptic output the kind of mantra here is fire together wire together which means that if two neurons are active at the same time regularly then they probably should be connected together because they already correlate and you can see right here that there is a term in this formula that is o i times o j so this here is the correlation between or the covariance or just the product", "start_timestamp": "00:12:37", "end_timestamp": "00:13:19", "start_second": 757, "end_second": 799, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=757s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper 
Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "if we're exact between these two neurons and if they are both active regularly then this quantity is going to be high and if they're both not active regularly or if one is active and the other one isn't that quantity is going to be low and the a parameter here specifies how the weights are updated in response to this so the a b c d and eta parameters right here these are the learned parameters these are going to be your learned rules to update the weights so these change once per learning step so", "start_timestamp": "00:13:19", "end_timestamp": "00:13:58", "start_second": 799, "end_second": 838, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=799s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "after the episode is done you're going to change these capital constants right here including the eta which is the learning rate these things right here are per step so each step gives you a different o i and o j and then you'll adjust the weight based on that you'll see that these constants here are per weight so for each weight in this neural network we learn a separate rule of how to update that particular weight so the algorithm can basically decide for a particular weight you can", "start_timestamp": "00:13:58", "end_timestamp": "00:14:34", "start_second": 838, "end_second": 874, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=838s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "decide well if these two things fire together often i want to update my weight very heavily in response to that 
okay so if the a is very high that means the connection responds very thoroughly to when the two neurons fire together that is not the same as to say that the connection should always be very strong it's dependent on the input so only when this quantity is high should the weight be updated and the a parameter modulates how strongly it's updated it can", "start_timestamp": "00:14:34", "end_timestamp": "00:15:16", "start_second": 874, "end_second": 916, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=874s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "also be negative it can be zero basically meaning that you know it doesn't matter if they fire together i don't want to update this particular weight in response to that so you can see that you can learn these rules that can adapt to different inputs because all of the changes the delta here is dependent on the inputs so on the correlation but also on the different inputs themselves and then there is also a constant right here as you can see it's a linear function of the inputs o i and o j", "start_timestamp": "00:15:16", "end_timestamp": "00:15:55", "start_second": 916, "end_second": 955, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=916s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "and their product so i hope this is clear these hebbian rules you learn a b c d and eta and that gives rise to an adaptive network that can change and reconfigure itself over the course of an episode depending on the inputs and one of the things right here and we'll get to how you actually learn the rules themselves in a second 
but one of the things right here is very visible as i said in this first experiment where it reconfigures itself continuously but also in this experiment with this quadruped", "start_timestamp": "00:15:55", "end_timestamp": "00:16:34", "start_second": 955, "end_second": 994, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=955s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "right here so this quadruped usually you know you simply walk in a direction that's your reward and rl is perfectly fine at this as well however this has a bit of a trick to it namely you are always in one of three situations either you have an undamaged quadruped or its front left leg is damaged or its front right leg is damaged okay and you simply sample these situations uniformly and you don't tell the algorithm which situation it is in now if you", "start_timestamp": "00:16:34", "end_timestamp": "00:17:14", "start_second": 994, "end_second": 1034, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=994s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "compare two methods one where you directly learn the weights you learn a fixed policy to solve you know this is one task right this is one task and all of these three things appear with equal probability so you have to learn one policy to make all of this work if you learn the weights directly and there's no doubt that a powerful rl approach could deal with this task but in this case if you just put a standard weight learner with the same size of policy as the hebbian", "start_timestamp": "00:17:14", 
"end_timestamp": "00:17:52", "start_second": 1034, "end_second": 1072, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1034s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "they compare to if you put a weight learner on it it will not be able to solve this task satisfactorily what it will do is it will say well i need one set of rules that make me walk as far as possible as often as possible so you can see at the table i'm already showing you the results right here if you have these static weights you can see that it's performing pretty well in two out of three situations right so what it basically does is it says okay here is where there's damage", "start_timestamp": "00:17:52", "end_timestamp": "00:18:32", "start_second": 1072, "end_second": 1112, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1072s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "what it does is it says i'm going to learn to walk using my left front leg that means when i have no damage or damage to the right front leg i'm just fine and i'm just going to take the hit basically where i have damage to the left front leg because it's just going to suck so solved here means something like walk more than 100 steps since it can only learn a fixed policy it basically discards the case where there's damage to the left front leg it takes that hit in order to be better in the other two", "start_timestamp": "00:18:32", "end_timestamp": "00:19:10", "start_second": 1112, "end_second": 1150, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1112s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper 
Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "situations you can see it's outperforming the hebbian rules in those two situations but this shows you kind of the difference and the power that these hebbian rules or this general neuroplasticity might have because the hebbian one is perfectly capable of at least in part adapting to the different situations now you can see that it's not symmetric also for the hebbian rules you know there's 860 and there's 440 of a thing that should actually be symmetric we do expect a drop when there's damage but", "start_timestamp": "00:19:10", "end_timestamp": "00:19:48", "start_second": 1150, "end_second": 1188, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1150s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "it's not symmetric which means that the hebbian rules also kind of randomly focus on one over the other but at least they're able in some degree to adapt to both and that's because depending on the input you know it has a rule in there that basically says well if the back left leg and the front right leg fire together the sensors that show me that they're moving if they fire together i'm going to wire them together because that's how i walk you know front", "start_timestamp": "00:19:48", "end_timestamp": "00:20:26", "start_second": 1188, "end_second": 1226, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1188s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "right back left and then the other way around and if that's not the case i'm not going to wire 
them together so that would be the situation where you have damage instead if they are not wired together i'm going to and i can do this in the next layer of the neural network wire these other two things together you know if the first thing is not the case i'm going to wire these other two things together to make up for that loss and there you can see there is kind of this logic built into the network now again i know you can do", "start_timestamp": "00:20:26", "end_timestamp": "00:20:59", "start_second": 1226, "end_second": 1259, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1226s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "this with learning a fixed policy you can achieve the same effects the point here is just to show that given kind of the same size networks and so on there might be a qualitative difference in certain situations again by no means is this meant to outcompete rl or anything like this okay so now how are these rules actually learned and there we have to again make a distinction that is completely separate from the hebbian non-hebbian way okay so the hebbian non-hebbian", "start_timestamp": "00:20:59", "end_timestamp": "00:21:40", "start_second": 1259, "end_second": 1300, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1259s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "distinction was do we learn the weights of the policy network directly or do we learn the rules to update the weights now the question is whatever we learn how do we learn it and again we have to draw the distinction this time between i'm going to say classic even though the terminology is not really 
correct classic rl and evolutionary methods okay so in classic rl what i would do is i would use my weights in order to obtain a reward and then i would update my weights so my delta w would be proportional to the gradient of w of the reward", "start_timestamp": "00:21:40", "end_timestamp": "00:22:25", "start_second": 1300, "end_second": 1345, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1300s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "okay so in classic rl especially this is a policy gradient method right now so i use my policy my weights to get the reward and then i would calculate a gradient and you know usually the reward isn't differentiable so you have this reinforce trick in order to pull the reward out and you can read all of this up if you look at the basic policy gradient methods but this here tells me i need a gradient usually this is going to be the reward times the gradient of my fw of my input so", "start_timestamp": "00:22:25", "end_timestamp": "00:23:08", "start_second": 1345, "end_second": 1388, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1345s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "what this means is that if my reward is high then i just want to know what do i need to do to make more of what i just did okay and the gradient ensures that for every single weight in your neural network you know what to do so the gradient means that i have an exact handle on how do i need to change this weight how do i need to change that weight if the reward is high and because of this multiplication here i want to make more of what i just did", 
"start_timestamp": "00:23:08", "end_timestamp": "00:23:48", "start_second": 1388, "end_second": 1428, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1388s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "and the gradient tells me how if the reward is low on the other hand i want to make less of what i just did but also the gradient tells me how that can be achieved i simply go into the other direction than i would if the reward is high in evolutionary methods we don't do this gradient calculation okay now there can be advantages to not doing gradient calculation sometimes back propagation simply isn't possible even if it is possible and this is maybe the case where we are now what we need to learn in our case is", "start_timestamp": "00:23:48", "end_timestamp": "00:24:25", "start_second": 1428, "end_second": 1465, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1428s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "these rules to update the weights and imagine you have an episode so you have step step step step and in each step these rules are applied right in each of these steps the rules are applied and at the end you get a reward so what you need to do is to back propagate that reward through all the steps and then through all the rules okay and that might be just computationally not feasible or the rules right here are pretty easy but the rules might not be differentiable", "start_timestamp": "00:24:25", "end_timestamp": "00:24:59", "start_second": 1465, "end_second": 1499, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1465s", "title": "Meta-Learning through Hebbian Plasticity in Random 
Networks (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "you actually have the same problem in general in classic rl as well but you know you can cut off time steps and so on there are various hacks in any case there can be advantages to not having that gradient and evolutionary methods are a way to do that in evolutionary methods usually you don't train one agent you train a population of agents so you have a bunch of these neural network agents in here and the way you update the neural network agents is you simply let them run you know you let them run", "start_timestamp": "00:24:59", "end_timestamp": "00:25:34", "start_second": 1499, "end_second": 1534, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1499s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "the episode so this is your w one of them you let them run the episode they get a reward and then you can do multiple things so this depends on the evolutionary method so you can either pick out the best performing agent or you can update each agent according to some rule the goal here is basically you always want to take your weights you want to add some noise to them and you want to see does it get better or worse if it gets better good if it gets worse not good okay the difference is without the", "start_timestamp": "00:25:34", "end_timestamp": "00:26:13", "start_second": 1534, "end_second": 1573, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1534s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "gradient you don't have a handle on how do you need to change each individual weight all you can do is 
basically random walk and observe what happens and if the random walk you know turns out to be good you go more into that direction of that random walk so it's sort of a poor man's gradient method in these evolutionary methods again completely independent of what we learn you can use the evolutionary method to learn the fixed weights and that's actually what happens in the table i've shown you", "start_timestamp": "00:26:13", "end_timestamp": "00:26:46", "start_second": 1573, "end_second": 1606, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1573s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "below or you can use the evolutionary method to learn the hebbian update rules as well you can use rl to learn the fixed weights or the update rules in this paper they use evolutionary methods to learn the hebbian update rules and they compare mostly with using evolutionary methods to learn the fixed weights okay the exact evolutionary step they use right here is the following so h t here is going to be the thing that you learn you know as compared to w being the network weights h is going to be the hebbian weights since", "start_timestamp": "00:26:46", "end_timestamp": "00:27:23", "start_second": 1606, "end_second": 1643, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1606s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "we learn the hebbian weights so how they'll update each agent is they'll take the hebbian weights and this here is how you update right this is your delta h how do you update the hebbian weights well what you do is you perform n random perturbations so i take my weights and i add noise i 
just add noise okay so i'm here and i just make a bunch of versions of it and then i observe how well are these versions doing so how well are my random perturbations doing this is going to be the fitness", "start_timestamp": "00:27:23", "end_timestamp": "00:28:04", "start_second": 1643, "end_second": 1684, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1643s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "f i right here is going to be the fitness and then i'm just going to perform a weighted average so this is my weighted average of these new solutions okay so if this solution here did pretty well and this solution did pretty poorly i want to walk you know in this direction and then again i do the same thing from here i do a bunch of perturbations and maybe this one did pretty well and this one did pretty poorly i want to walk in this direction and so on okay so that's how you'll change the weights or rules or", "start_timestamp": "00:28:04", "end_timestamp": "00:28:47", "start_second": 1684, "end_second": 1727, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1684s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "whatever you want in an evolutionary method you know it's pretty easy it's easier than reinforcement learning no back prop no nothing basically a black box optimizer there are more complicated evolutionary methods but we don't go into those here right now okay so again i've already shown you these results now i said these static weights are also learned with an evolutionary method they also report what you would get with like an rl approach like ppo you would get kind of the same thing as they get here so oh 
sorry", "start_timestamp": "00:28:47", "end_timestamp": "00:29:34", "start_second": 1727, "end_second": 1774, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1727s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "this is not the same as the table yeah i was confused for a second this here is for the car environment okay this is the vision based environment so with their method they get like an 870 reward with the hebbian based approach with the static weights but still evolutionary method they get a much lower reward in fact the hebbian based approach is about the same as you get here with an rl algorithm and as we said an rl algorithm is more complicated and if you use like a state-of-the-art rl algorithm not just", "start_timestamp": "00:29:34", "end_timestamp": "00:30:15", "start_second": 1774, "end_second": 1815, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1774s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "ppo you get a bit of a better performance but not that much if you look at the actual numbers so you know pretty cool to see that again this is not outperforming anything this is simply showing that you can do that they do a number of experiments where they go in the episode and they kind of change stuff in the episode and one cool thing here is that you know this is an episode so in each episode you start with a random network in this hebbian setting and then pretty quickly the", "start_timestamp": "00:30:15", "end_timestamp": "00:30:54", "start_second": 1815, "end_second": 1854, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1815s", "title": "Meta-Learning through Hebbian Plasticity 
in Random Networks (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "rules adapt for high performance right so it starts to walk it reconfigures itself and starts to walk the reward here again it doesn't have access to that but we can measure it of course and then at this step a right here they simply go to the weights and zero them out so they just delete these weights right here and only 10 time steps later it has reconfigured itself as you can see right here in order to walk again so 10 time steps later it reconfigures itself and after a short while right here", "start_timestamp": "00:30:54", "end_timestamp": "00:31:33", "start_second": 1854, "end_second": 1893, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1854s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "it's back to its kind of original performance as you can see so i'd say that's fairly impressive in this very short amount of time it's able to recover from such an intervention if you do this i mean of course if you do this to your policy network that's statically learned it's going to be garbage but i guess the fair comparison would be to delete the hebbian rules themselves and you know so it's not like this can adapt to new situations or something like this this is still", "start_timestamp": "00:31:33", "end_timestamp": "00:32:10", "start_second": 1893, "end_second": 1930, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1893s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "learned for particular environments right but the point here is that you learn the 
rules and this is kind of a study on neuroplasticity now my question actually would be why this diagonal pattern appears and i have not seen like a clear explanation especially for this anti-diagonal pattern it's not so much here in the output layer right this is the output layer there are what 21 actions or so and this one is this dimension so not that much there but there seems to be this rule and this is not", "start_timestamp": "00:32:10", "end_timestamp": "00:32:48", "start_second": 1930, "end_second": 1968, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1930s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "the case at the beginning right you saw at the beginning it was a pretty random matrix so why yeah here pretty random and then there's this diagonal pattern i don't know why if you know let me know i mean it's anti-diagonal maybe it is actually diagonal and the forward pass of the fully connected layer is just defined as something like w transposed times x but maybe this also depends on the random initialization but there is no inherent reason why a particular neuron would you know care about sending", "start_timestamp": "00:32:48", "end_timestamp": "00:33:32", "start_second": 1968, "end_second": 2012, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1968s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "information to like the same height of neuron on the other side or is there i don't know so is this a property of the evolutionary method or of the learning rules it seems not because the learning rules don't depend on the position i'm genuinely confused about this and maybe you know maybe they've written it somewhere and i've 
just overlooked it though they do reference it they say oh there's this diagonal pattern appearing but i don't think they ever say why it is diagonal okay i might just be", "start_timestamp": "00:33:32", "end_timestamp": "00:34:14", "start_second": 2012, "end_second": 2054, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=2012s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "real dumb yeah so they also you know they do some more experiments they show for example that if you just have random hebbian coefficients then your algorithm just jumps around kind of in weight space around the zero point however if you actually learn these hebbian coefficients as they do you have like this clear attractor here and you have these kind of oscillating curves when you do that and you can see here in the different situations where things are damaged and so on so all in all i think it's a pretty", "start_timestamp": "00:34:14", "end_timestamp": "00:34:51", "start_second": 2054, "end_second": 2091, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=2054s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "interesting study and i think this neuroplasticity is a different way you know it's unclear if it will ever deliver the performance that rl delivers but certainly there are situations where such plasticity is desired and if we can also combine this with greater generalization performance then you know we have agents that can quickly kind of reconfigure and a lot of work by this kind of open-ended learning community also plays into these roles all in all a pretty cool non-standard way of doing things 
last", "start_timestamp": "00:34:51", "end_timestamp": "00:35:30", "start_second": 2091, "end_second": 2130, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=2091s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "thing the broader impact statement every now and then we'll look at a broader impact statement since these are new just to get kind of an overview of what they look like so they say the ethical and societal consequences of this work are hard to predict but likely similar to other work dealing with more adaptive agents and robots in particular by giving robots the ability to still function when injured it could make it easier for them to be deployed in areas that have both a positive and negative impact on society", "start_timestamp": "00:35:30", "end_timestamp": "00:36:00", "start_second": 2130, "end_second": 2160, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=2130s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "okay well again it's not really giving robots the ability to still function when they're injured first i thought okay they train it when it's fully functioning but then they damage it during test time but as i understand the paper they already train it with the damaged versions they just don't tell the algorithm which version it is right now so it's not the same as being able to work when injured unless you've specifically trained for it in this case again i could be wrong", "start_timestamp": "00:36:00", "end_timestamp": "00:36:41", "start_second": 2160, "end_second": 2201, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=2160s", "title": "Meta-Learning through Hebbian 
Plasticity in Random Networks (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "about this yeah in the very long term robots that can adapt could help in industrial automation or help to care for the elderly on the other hand more adaptive robots could also be more easily used for military applications the approach presented in this paper is far from being deployed in these areas but it is important to discuss its potential long-term consequences early on now okay so let's evaluate the broader impact statement well the first check to do is always to simply replace um whatever their method is with the word technology", "start_timestamp": "00:36:41", "end_timestamp": "00:37:18", "start_second": 2201, "end_second": 2238, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=2201s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "okay so let's do that in the very long term technology could help in industrial automation or help to care for the elderly check on the other hand technology could also be more easily used for military applications check the technology is far from being deployed in these areas okay i guess some technology isn't but advanced technology yeah so again the rule for broader impact statements seems to be you take whatever your method is and you go up until uh you find you know you're basically at technology or something equivalent uh", "start_timestamp": "00:37:18", "end_timestamp": "00:38:01", "start_second": 2238, "end_second": 2281, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=2238s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "because no one actually i've never seen a
broader impact statement that writes about the actual thing in the paper they always go up like one layer or two and then it basically regresses to technology even though very few papers actually would be able to discuss their particular thing but you know um and then in terms of guidelines on broader impact statements this one is missing there's always this um holy trifecta so the holy trifecta is you go like you know like you're a catholic uh you go with your", "start_timestamp": "00:38:01", "end_timestamp": "00:38:38", "start_second": 2281, "end_second": 2318, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=2281s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "v2GRWzIhaqQ", "text": "finger to your head chest left and right and you say technology good technology bad technology biased okay so if you want to write a broader impact statement go up the layers technology good bad bias and we're missing the bias here so that's you know i'm just following what the guidelines to broader impact statements are i don't make the rules i'm sorry the hebbians make the rules apparently um okay i hope you've enjoyed this paper and this video let me know what you think check out the", "start_timestamp": "00:38:38", "end_timestamp": "00:39:14", "start_second": 2318, "end_second": 2354, "url": "https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=2318s", "title": "Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/v2GRWzIhaqQ/maxresdefault.jpg"} {"video_id": "WVPE62Gk3EM", "text": "hi there today we'll look at big bird transformers for longer sequences by manzil zaheer and guru guruganesh et al of google research so this paper on a high level proposes to replace the quadratic attention mechanism in transformers by a mix of random
attention windowed attention and selective global attention therefore achieving a complexity of linear memory requirement instead of quadratic memory requirement and as a result of that they can process longer sequences than traditional transformers like bert and achieve better results in some nlp", "start_timestamp": "00:00:00", "end_timestamp": "00:00:40", "start_second": 0, "end_second": 40, "url": "https://www.youtube.com/watch?v=WVPE62Gk3EM&t=0s", "title": "Big Bird: Transformers for Longer Sequences (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/WVPE62Gk3EM/maxresdefault.jpg"} {"video_id": "WVPE62Gk3EM", "text": "tasks and they also evaluate on genomics tasks so we'll go through this paper a bit look a bit at the proof because they give a theoretical kind of guarantee that their random attention mechanism can still be turing complete and can still achieve the same things as a full attention mechanism but we'll also look at the drawbacks i sort of have mixed feelings about this paper and i think i'll voice my concerns as we go through here but first let's look at the paper let's look at the architecture and i think this is actually a pretty", "start_timestamp": "00:00:40", "end_timestamp": "00:01:16", "start_second": 40, "end_second": 76, "url": "https://www.youtube.com/watch?v=WVPE62Gk3EM&t=40s", "title": "Big Bird: Transformers for Longer Sequences (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/WVPE62Gk3EM/maxresdefault.jpg"} {"video_id": "WVPE62Gk3EM", "text": "cool paper for the empirical progression of the field to process longer sequences with transformers as always if you like content like this uh feel free to share it around uh leave a like and tell me in the comments what you think about the paper and about what i think whatever just uh go nuts all right so the basic premise right here is that the transformers they've been pretty impactful especially in nlp so they say transformer based models such as bert
have been one of the most successful", "start_timestamp": "00:01:16", "end_timestamp": "00:01:55", "start_second": 76, "end_second": 115, "url": "https://www.youtube.com/watch?v=WVPE62Gk3EM&t=76s", "title": "Big Bird: Transformers for Longer Sequences (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/WVPE62Gk3EM/maxresdefault.jpg"} {"video_id": "WVPE62Gk3EM", "text": "deep learning models for nlp unfortunately one of their core limitations is the quadratic dependency mainly in terms of memory on the sequence length due to their full attention mechanism so really briefly the full attention mechanism and i've done you know numerous videos about attention mechanism bert attention is all you need and so on so if you want a detailed explanation of what that is just go look up the corresponding videos but briefly what you'll have in nlp is a set of tokens a sequence of tokens as an input and you", "start_timestamp": "00:01:55", "end_timestamp": "00:02:29", "start_second": 115, "end_second": 149, "url": "https://www.youtube.com/watch?v=WVPE62Gk3EM&t=115s", "title": "Big Bird: Transformers for Longer Sequences (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/WVPE62Gk3EM/maxresdefault.jpg"} {"video_id": "WVPE62Gk3EM", "text": "want to transform them layer after layer into sort of a a higher order representation of that same sequence and for that you build these layers out of nodes and you have as many nodes usually as you have tokens in the sequence and the next set of so each token is represented by a vector at the beginning and each layer transforms this sequence as i said into sort of a higher level representation so you want the vector of this token right here um to be a better representation than the vector was right here and you do that by incorporating", "start_timestamp": "00:02:29", "end_timestamp": "00:03:10", "start_second": 149, "end_second": 190, "url": "https://www.youtube.com/watch?v=WVPE62Gk3EM&t=149s", "title": "Big Bird: Transformers for 
Longer Sequences (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/WVPE62Gk3EM/maxresdefault.jpg"} {"video_id": "WVPE62Gk3EM", "text": "information from all the other tokens into that particular vector now as i said this is called an attention mechanism and we don't actually have to go into how it works right here but you can see pretty clearly that if you want to do this for every token you need to have information routed from every token to every token like from here to here from here to here and so on and this is just one token and then you need to do it for this token and for this token and for this token so what you'll ultimately get if n is", "start_timestamp": "00:03:10", "end_timestamp": "00:03:43", "start_second": 190, "end_second": 223, "url": "https://www.youtube.com/watch?v=WVPE62Gk3EM&t=190s", "title": "Big Bird: Transformers for Longer Sequences (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/WVPE62Gk3EM/maxresdefault.jpg"} {"video_id": "WVPE62Gk3EM", "text": "your sequence length you'll get some n squared amount of computation and memory requirements for this so this is a problem and usually this means that you know this sequence length in bert this is limited to something like 512 tokens which is okay for some applications but if you want to summarize you know entire articles entire books even or do question answering with lots of context it's not really enough so people have been thinking about how to scale this input how to scale this and of course the main culprit is", "start_timestamp": "00:03:43", "end_timestamp": "00:04:21", "start_second": 223, "end_second": 261, "url": "https://www.youtube.com/watch?v=WVPE62Gk3EM&t=223s", "title": "Big Bird: Transformers for Longer Sequences (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/WVPE62Gk3EM/maxresdefault.jpg"} {"video_id": "WVPE62Gk3EM", "text": "this quadratic attention mechanism because if you you know double the 512 you need you know four times the amount of compute and
memory so how does this paper go about reducing that quadratic dependency the goal right here is of course to get this to some o of n right because then as we double the input length we simply need to double the compute requirements and that would be fantastic and that's what this paper does and it does so without you know sacrificing the properties of the transformer so here's the architecture that big bird proposes", "start_timestamp": "00:04:21", "end_timestamp": "00:05:00", "start_second": 261, "end_second": 300, "url": "https://www.youtube.com/watch?v=WVPE62Gk3EM&t=261s", "title": "Big Bird: Transformers for Longer Sequences (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/WVPE62Gk3EM/maxresdefault.jpg"} {"video_id": "WVPE62Gk3EM", "text": "by the way big bird another character from sesame street i guess will continue the naming here after elmo and bert you know i'm waiting for the model that's the count um yeah that's going to be a fun model but so big bird basically has three different types of attention and here these are adjacency matrices in this attention mechanism so here is the input layer and the output layer is right here so that basically means that node i right here would be connected well sorry that's not a straight line would be", "start_timestamp": "00:05:00", "end_timestamp": "00:05:40", "start_second": 300, "end_second": 340, "url": "https://www.youtube.com/watch?v=WVPE62Gk3EM&t=300s", "title": "Big Bird: Transformers for Longer Sequences (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/WVPE62Gk3EM/maxresdefault.jpg"} {"video_id": "WVPE62Gk3EM", "text": "connected to this particular node and also to this particular node so we're now trying if we have node i right here we're now trying to not connect it to all of these nodes but we'll say we'll just select some at random and then connect it to that okay this is what we call random attention and you can pretty clearly see if you connect each of the i nodes to r equals two to
two random nodes then you don't have an n squared anymore but you'll have a like an o of r times n which you know if r is a constant is an o of n", "start_timestamp": "00:05:40", "end_timestamp": "00:06:24", "start_second": 340, "end_second": 384, "url": "https://www.youtube.com/watch?v=WVPE62Gk3EM&t=340s", "title": "Big Bird: Transformers for Longer Sequences (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/WVPE62Gk3EM/maxresdefault.jpg"} {"video_id": "WVPE62Gk3EM", "text": "attention mechanism okay so the main goal between the random attention mechanism is that for each query basically you select random tokens that you attend to and that random number is a fixed number that's not dependent on the sequence length and the paper is a little bit unclear about whether or not those random ones are the same for every sequence or are switched up or are the same for every layer or are switched up but they formulate all of this as sort of in sort of a graph in sort of a random graph so there they formulate the attention", "start_timestamp": "00:06:24", "end_timestamp": "00:07:06", "start_second": 384, "end_second": 426, "url": "https://www.youtube.com/watch?v=WVPE62Gk3EM&t=384s", "title": "Big Bird: Transformers for Longer Sequences (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/WVPE62Gk3EM/maxresdefault.jpg"} {"video_id": "WVPE62Gk3EM", "text": "mechanism in form of a graph so if we transform all of these nodes into a graph a full attention mechanism would mean that each graph each node is connected to each of the other nodes right fully connected graph i don't maybe that's it so that would be a full attention mechanism and then they say well if we just have random connections between these things then there are some theorems from graph theory that say that each random walk in this graph is going to um so this graph is going to mix pretty quickly so i can get from each node to", "start_timestamp": "00:07:06", "end_timestamp": "00:07:47", "start_second": 
426, "end_second": 467, "url": "https://www.youtube.com/watch?v=WVPE62Gk3EM&t=426s", "title": "Big Bird: Transformers for Longer Sequences (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/WVPE62Gk3EM/maxresdefault.jpg"} {"video_id": "WVPE62Gk3EM", "text": "each other node by a random walk in a logarithmic time and this random walk which basically means that you go from here to here this would be one layer of the transformer and then if you want to go from here to here that would you would have to do that in the next layer so this formulation as a random graph leads me to believe that layer after layer the random attention pattern is going to be the same but also the formulation of the paper leads me to believe that the this random attention differs from sequence to sequence so", "start_timestamp": "00:07:47", "end_timestamp": "00:08:26", "start_second": 467, "end_second": 506, "url": "https://www.youtube.com/watch?v=WVPE62Gk3EM&t=467s", "title": "Big Bird: Transformers for Longer Sequences (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/WVPE62Gk3EM/maxresdefault.jpg"} {"video_id": "WVPE62Gk3EM", "text": "i believe what's happening is that they you know get a new sequence then they decide on this pattern right here once and then they use this layer after layer the same pattern again so you can see that um in the traditional attention information can basically throw flow from each of the nodes to each other node in one single step right because each node is connected to each other node you see this in the graph right here however if we only select a subset then you know it needs to if if i want to go from as i said from here to here then i", "start_timestamp": "00:08:26", "end_timestamp": "00:09:08", "start_second": 506, "end_second": 548, "url": "https://www.youtube.com/watch?v=WVPE62Gk3EM&t=506s", "title": "Big Bird: Transformers for Longer Sequences (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/WVPE62Gk3EM/maxresdefault.jpg"} 
{"video_id": "WVPE62Gk3EM", "text": "need to do it in two steps and therefore i need two layers and that's going to be the culprit of this method here and you know while it is mentioned in the paper it's sort of i feel at least that's my my assessment of this paper it's kind of swept under the rug a little bit i mean they do have a theorem that clearly says we can construct an example of a task that in the full attention setting can be solved with a single step so a single layer that in our random attention setting needs a lot of layers so a lot of steps", "start_timestamp": "00:09:08", "end_timestamp": "00:09:45", "start_second": 548, "end_second": 585, "url": "https://www.youtube.com/watch?v=WVPE62Gk3EM&t=548s", "title": "Big Bird: Transformers for Longer Sequences (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/WVPE62Gk3EM/maxresdefault.jpg"} {"video_id": "WVPE62Gk3EM", "text": "but you know the rest of the paper is sort of shaky on on this thing but nevertheless you can see how the random attention can if you have enough layers do the same information routing as the full attention okay however this is not a property of the random attention and we'll see this in the next thing right here so the next ingredient that this paper uses is window attention and you can see over here that big bird is ultimately going to be a combination of the three types of attention which will uh which we are looking at", "start_timestamp": "00:09:45", "end_timestamp": "00:10:22", "start_second": 585, "end_second": 622, "url": "https://www.youtube.com/watch?v=WVPE62Gk3EM&t=585s", "title": "Big Bird: Transformers for Longer Sequences (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/WVPE62Gk3EM/maxresdefault.jpg"} {"video_id": "WVPE62Gk3EM", "text": "here so window attention basically means that each each eye each token at the if position is going to attend to itself of course so here is i but it is also going to attend to its neighbors so here is i minus 1 and here is 
i plus 1. and this is a you know this is a window size w that you can that is a parameter but also it is a constant and therefore um you again go from n squared to w times n which you know is o of n if w is a constant and this might be familiar to you because we've already seen this in the", "start_timestamp": "00:10:22", "end_timestamp": "00:11:05", "start_second": 622, "end_second": 665, "url": "https://www.youtube.com/watch?v=WVPE62Gk3EM&t=622s", "title": "Big Bird: Transformers for Longer Sequences (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/WVPE62Gk3EM/maxresdefault.jpg"} {"video_id": "WVPE62Gk3EM", "text": "long former paper so i've made a video or base i think even two videos on the long former which used exactly the window attention in combination with the global attention and uh if you want to know more about that go watch these videos but the new thing in big bird right here is this re edition of the random attention again the the window here in is is has exactly the same properties as the random attention so you have instead of a fully connected graph you have a sparsely connected graph now if you have random attention the", "start_timestamp": "00:11:05", "end_timestamp": "00:11:46", "start_second": 665, "end_second": 706, "url": "https://www.youtube.com/watch?v=WVPE62Gk3EM&t=665s", "title": "Big Bird: Transformers for Longer Sequences (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/WVPE62Gk3EM/maxresdefault.jpg"} {"video_id": "WVPE62Gk3EM", "text": "sparsely connected graph is like like the one on the right but if you have a windowed attention you can it is kind of not randomly connected but each node is connected to its neighbors like this and you can also see that if i want to go from this node to this node right here i can't do it in one step but i can do it in two steps i go here and i go here so in the terms of the attention layers if i want to go from node one to node three i have to do it in two steps because each node is 
only connected to its neighbors", "start_timestamp": "00:11:46", "end_timestamp": "00:12:25", "start_second": 706, "end_second": 745, "url": "https://www.youtube.com/watch?v=WVPE62Gk3EM&t=706s", "title": "Big Bird: Transformers for Longer Sequences (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/WVPE62Gk3EM/maxresdefault.jpg"} {"video_id": "WVPE62Gk3EM", "text": "so the connection patterns would sort of look like this so i have to go from one to two and then in the next layer from two to three so the paper basically makes up for the lack of full attention by uh adding layers and you also might recognize this from a convolution operation like this basically because it is a convolution operation right in a convolution each node a only aggregates input from its neighbors for the next layer and then we know that as we go up the layers the de facto window that each node looks at", "start_timestamp": "00:12:25", "end_timestamp": "00:13:06", "start_second": 745, "end_second": 786, "url": "https://www.youtube.com/watch?v=WVPE62Gk3EM&t=745s", "title": "Big Bird: Transformers for Longer Sequences (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/WVPE62Gk3EM/maxresdefault.jpg"} {"video_id": "WVPE62Gk3EM", "text": "is going to be like a cone kind of like this so this is very similar to how a convolutional neural network works and the reasoning is very similar because the reasoning is well in a sentence the most important words for any given word are probably going to be its neighbors like the words around it and as you go up the layers you branch out more and more but ultimately the this neighborhood principle holds in nlp as well so again we already saw this in the long former but that's the reason behind the window attention and that's the second", "start_timestamp": "00:13:06", "end_timestamp": "00:13:42", "start_second": 786, "end_second": 822, "url": "https://www.youtube.com/watch?v=WVPE62Gk3EM&t=786s", "title": "Big Bird: Transformers for Longer Sequences 
(Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/WVPE62Gk3EM/maxresdefault.jpg"} {"video_id": "WVPE62Gk3EM", "text": "ingredient and then the third ingredient is this global attention now the global attention uh is selected tokens that are so important and that's you know fixed by the developers that are so important that they are they are connected to everything else so for example in these transformers you often have what's you know this kind of cls token so this is a special token that you prepend to some piece of text and the output of this token is going to be your classification output because you don't want to bind your", "start_timestamp": "00:13:42", "end_timestamp": "00:14:22", "start_second": 822, "end_second": 862, "url": "https://www.youtube.com/watch?v=WVPE62Gk3EM&t=822s", "title": "Big Bird: Transformers for Longer Sequences (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/WVPE62Gk3EM/maxresdefault.jpg"} {"video_id": "WVPE62Gk3EM", "text": "classification if you need to classify the entire sequence you don't want to bind that decision to one particular word what you want to do is you want to have an extra token and that's this cls token that kind of aggregates information from all of this so layer after layer layer after layer you'll have so if we go here layer after layer we have this one special node and in each step every single other node is able to send information right here to this node and receive information from this node okay so now uh", "start_timestamp": "00:14:22", "end_timestamp": "00:15:04", "start_second": 862, "end_second": 904, "url": "https://www.youtube.com/watch?v=WVPE62Gk3EM&t=862s", "title": "Big Bird: Transformers for Longer Sequences (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/WVPE62Gk3EM/maxresdefault.jpg"} {"video_id": "WVPE62Gk3EM", "text": "as a result of this as you as you may be able to see every single uh every single path is kind of a maximum length of two because if i want to go from any 
node to any other node i can simply you know send information to this global node and then the global node in the next step can send information to whatever other node and that is a property that they use in their proof that this attention mechanism is sort of as powerful as the classic full attention mechanism and we'll go through that in one second but first i", "start_timestamp": "00:15:04", "end_timestamp": "00:15:40", "start_second": 904, "end_second": 940, "url": "https://www.youtube.com/watch?v=WVPE62Gk3EM&t=904s", "title": "Big Bird: Transformers for Longer Sequences (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/WVPE62Gk3EM/maxresdefault.jpg"} {"video_id": "WVPE62Gk3EM", "text": "hope this was clear that this combination of random attention window attention and global attention is what is called big bird okay they have some engineering tricks that go along with this but in concept you can imagine big bird being longformer plus this random attention right here and you know as an nlp engineer that makes kind of total sense i you know i totally believe that the addition of these random attention patterns can absolutely help your classification", "start_timestamp": "00:15:40", "end_timestamp": "00:16:21", "start_second": 940, "end_second": 981, "url": "https://www.youtube.com/watch?v=WVPE62Gk3EM&t=940s", "title": "Big Bird: Transformers for Longer Sequences (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/WVPE62Gk3EM/maxresdefault.jpg"} {"video_id": "WVPE62Gk3EM", "text": "or whatever your nlp task is because you know more attention better and i also am completely willing to believe that you know using the full attention matrix while it is of course more accurate it won't hurt too much to leave some of that attention away because essentially all the path lengths are just becoming two or even with the random attention are really short or logarithmic to route information from a node
to some other node so the loss that you incur is kind of in a logarithmic scale in terms of performance", "start_timestamp": "00:16:21", "end_timestamp": "00:16:59", "start_second": 981, "end_second": 1019, "url": "https://www.youtube.com/watch?v=WVPE62Gk3EM&t=981s", "title": "Big Bird: Transformers for Longer Sequences (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/WVPE62Gk3EM/maxresdefault.jpg"} {"video_id": "WVPE62Gk3EM", "text": "while the gain that you make is sort of in a in a quadratic or like a linear scale you go from quadratic to linear and that seems to me like a good empirical trade-off all right however the the proofs here the proof of um of how how these how these things are constructed are a little bit i don't know so what they do in the proof that this function can sort of a is a universal approximator people have already shown that full attention mechanisms are universal approximators um so they show here that this sparse", "start_timestamp": "00:16:59", "end_timestamp": "00:17:42", "start_second": 1019, "end_second": 1062, "url": "https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1019s", "title": "Big Bird: Transformers for Longer Sequences (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/WVPE62Gk3EM/maxresdefault.jpg"} {"video_id": "WVPE62Gk3EM", "text": "attention mechanism is also a universal approximator they make big use of star graphs what they say is okay if we have a star graph which is one node connected right here to every other node this is a star graph if we have a star graph we can achieve the same thing than with a full graph a full graph is where every node is connected to every other node but as i already said what they need for this is multiple layers of this star graph so and that has to do with the fact that if i want to route information i basically have to go via this", "start_timestamp": "00:17:42", "end_timestamp": "00:18:21", "start_second": 1062, "end_second": 1101, "url": 
"https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1062s", "title": "Big Bird: Transformers for Longer Sequences (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/WVPE62Gk3EM/maxresdefault.jpg"} {"video_id": "WVPE62Gk3EM", "text": "middle node right here and there's an additional complication because this middle node in our case right here is only one node i can't route information at the same t like i can't have this routing right here at the same time that i have this routing right here like going from here to here because i only have one middle node and i kind of this is not how the like this is very dumb math but uh maybe you have to imagine that there is one memory slot and you can only use that one memory slot at the same time for one of these things so essentially", "start_timestamp": "00:18:21", "end_timestamp": "00:19:02", "start_second": 1101, "end_second": 1142, "url": "https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1101s", "title": "Big Bird: Transformers for Longer Sequences (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/WVPE62Gk3EM/maxresdefault.jpg"} {"video_id": "WVPE62Gk3EM", "text": "what you'll have to do is you'll have to do the green thing first and then in the next step you'll have to do the blue thing second and then so these are now pairwise routing between nodes but ultimately what an attention mechanism does is it does everything to everything right in a single layer it routes information from all the nodes to all the other nodes and to achieve that so you need multiple rounds of this and it turns out that in the worst case you actually need n rounds of this so you know you trade off your you go from", "start_timestamp": "00:19:02", "end_timestamp": "00:19:38", "start_second": 1142, "end_second": 1178, "url": "https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1142s", "title": "Big Bird: Transformers for Longer Sequences (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/WVPE62Gk3EM/maxresdefault.jpg"} {"video_id": 
"WVPE62Gk3EM", "text": "n square to n uh memory and compute requirements in a single layer but in the worst case you need n layers to recover the the power of the full trend of the full transformer and that is the last one of their theoretical results right here so first they prove universal approximations and second they prove turing completeness these two properties have been proven for full attention mechanisms and third they prove that there are tasks where you actually do need n layers to solve them with their limited attention", "start_timestamp": "00:19:38", "end_timestamp": "00:20:16", "start_second": 1178, "end_second": 1216, "url": "https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1178s", "title": "Big Bird: Transformers for Longer Sequences (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/WVPE62Gk3EM/maxresdefault.jpg"} {"video_id": "WVPE62Gk3EM", "text": "um so you know i'm not sure but i i feel you can make any sort of polynomial uh algorithm into a linear algorithm like this like i have a i have like a cool sorting algorithm right so if this is my sequence that i want to sort what i can do is i can simply you know take a random subset of them uh like this this and this and then kind of go and and sort them and then put them like i send them to the to the global memory like this i sort them and then i put them back right and if i do this for enough if i do this for enough rounds okay you", "start_timestamp": "00:20:16", "end_timestamp": "00:21:01", "start_second": 1216, "end_second": 1261, "url": "https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1216s", "title": "Big Bird: Transformers for Longer Sequences (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/WVPE62Gk3EM/maxresdefault.jpg"} {"video_id": "WVPE62Gk3EM", "text": "know if i do this for enough rounds you know at the worst case i need n rounds to sort my or log n rounds if i do it smartly but you know in you know the single step here is uh the single step is just o of n so i have now an 
o of n sorting algorithm i you know i have my sort of a bit of wary to express things like that and um yeah but you know it is from an empirical standpoint i absolutely believe that this uh this is enough now my second quarrel right here is that if you look at the proof first of all what it", "start_timestamp": "00:21:01", "end_timestamp": "00:21:44", "start_second": 1261, "end_second": 1304, "url": "https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1261s", "title": "Big Bird: Transformers for Longer Sequences (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/WVPE62Gk3EM/maxresdefault.jpg"} {"video_id": "WVPE62Gk3EM", "text": "makes use is this star graph and the star graph corresponds to the global attention so that's not much to do with the random attention though they use the random intention in their proof but i at least believe that it would be possible with the global attention only and then the second thing is if you look at the parameters that they use for the um for the experiments and i've already set this in the long former video so in the long former video it turned out that if you look at how big this window attention", "start_timestamp": "00:21:44", "end_timestamp": "00:22:20", "start_second": 1304, "end_second": 1340, "url": "https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1304s", "title": "Big Bird: Transformers for Longer Sequences (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/WVPE62Gk3EM/maxresdefault.jpg"} {"video_id": "WVPE62Gk3EM", "text": "is it turns out that it you're still well you know the original bert attended to 512 tokens and then you look at the window and the window was still 512 tokens it's just that the global attention was even more so ultimately they ended up using more memory than the original bird and here if i look at the parameters of their um thing and they have multiple experiments right here and i believe this is the the base version so this is the base version they also have this large version but here this is 
the 12 layer version and you can see they", "start_timestamp": "00:22:20", "end_timestamp": "00:23:01", "start_second": 1340, "end_second": 1381, "url": "https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1340s", "title": "Big Bird: Transformers for Longer Sequences (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/WVPE62Gk3EM/maxresdefault.jpg"} {"video_id": "WVPE62Gk3EM", "text": "have this block length and we'll get into the block length in one second but then you can see that their window size is three times the block length the number of random tokens is three times the block length and the number of global tokens is two times the block length so that results in eight times b so 8 times 64 is 512 i actually calculated this before so this is 512 tokens so you go from bert that has 512 tokens and attends to 512 tokens to also", "start_timestamp": "00:23:01", "end_timestamp": "00:23:52", "start_second": 1381, "end_second": 1432, "url": "https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1381s", "title": "Big Bird: Transformers for Longer Sequences (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/WVPE62Gk3EM/maxresdefault.jpg"} {"video_id": "WVPE62Gk3EM", "text": "attending to 512 tokens of course the advantage here is that they now have a sequence length of 4096 so they have the freedom to not attend to as many tokens as they have in the input length but to put it in perspective this here uses more memory and more compute on its face than bert because bert attends to as many tokens but has a smaller input sequence and there's sort of a thing where in order to make these sparse attention things work you have to go pretty high in the number of things you", "start_timestamp": "00:23:52", "end_timestamp": "00:24:41", "start_second": 1432, "end_second": 1481, "url": "https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1432s", "title": "Big Bird: 
Transformers for Longer Sequences (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/WVPE62Gk3EM/maxresdefault.jpg"} {"video_id": "WVPE62Gk3EM", "text": "attend to you can leave away some but it's not like you can scale up your input sequence length by orders of magnitude so this promise of linear attention is kind of fulfilled but not there yet the second thing i would like to point out is that in a lot of cases the number of random tokens is actually set to zero so they're really making use i believe of the global tokens which seems a bit strange in that they continuously refer to their random", "start_timestamp": "00:24:41", "end_timestamp": "00:25:19", "start_second": 1481, "end_second": 1519, "url": "https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1481s", "title": "Big Bird: Transformers for Longer Sequences (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/WVPE62Gk3EM/maxresdefault.jpg"} {"video_id": "WVPE62Gk3EM", "text": "attention mechanism but then in a lot of experiments they don't actually have a random attention mechanism i believe they have to do that because that's kind of what makes them different from the longformer in principle but still um yeah so the last novelty let's say is an engineering novelty in that they don't consider single random attention tokens they always consider these in blocks and that's because our current hardware is really bad at sparse stuff really bad at", "start_timestamp": "00:25:19", "end_timestamp": "00:25:56", "start_second": 1519, "end_second": 1556, "url": "https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1519s", "title": "Big Bird: Transformers for Longer Sequences (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/WVPE62Gk3EM/maxresdefault.jpg"} {"video_id": "WVPE62Gk3EM", "text": "single indexing gathering single things so if you can do everything in blocks you basically get these 
blocks almost for free so it takes only marginally longer to retrieve this full two by two block right here than it would to retrieve the single instance right here of course that means you still use four times more memory but it is not four times slower than the original thing so you can use these blocks right here you can do it for the random attention you can do it for the window attention", "start_timestamp": "00:25:56", "end_timestamp": "00:26:32", "start_second": 1556, "end_second": 1592, "url": "https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1556s", "title": "Big Bird: Transformers for Longer Sequences (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/WVPE62Gk3EM/maxresdefault.jpg"} {"video_id": "WVPE62Gk3EM", "text": "as you can see here so you break this window pattern a little bit into blocks and that makes it a lot faster you get the speedup almost for free and then they make another approximation in the way they do this windowing and now let's just go through this really briefly so you can see right here that it would be very cumbersome to gather what we need we're just going to focus here this dotted thing right here is a bit confusing so you want to attend to these things and these you can just get out with a matrix", "start_timestamp": "00:26:32", "end_timestamp": "00:27:17", "start_second": 1592, "end_second": 1637, "url": "https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1592s", "title": "Big Bird: Transformers for Longer Sequences (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/WVPE62Gk3EM/maxresdefault.jpg"} {"video_id": "WVPE62Gk3EM", "text": "slice really easily but then you want to attend to this kind of blocky thing right here from the window attention like this thing and this is hard to get out because you'd have to index each row individually and that's very slow so what they do there is this matrix roll operation where you can sort of roll the axis around so what 
you'll do is you'll take this thing right here and you put it to the left right here and you'll take for example this thing right here and you'll put it to the right or no it's up and down but in", "start_timestamp": "00:27:17", "end_timestamp": "00:27:55", "start_second": 1637, "end_second": 1675, "url": "https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1637s", "title": "Big Bird: Transformers for Longer Sequences (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/WVPE62Gk3EM/maxresdefault.jpg"} {"video_id": "WVPE62Gk3EM", "text": "essence that's what you do and you can fold all of this blue stuff into a rectangular matrix as you can see right here so you kind of roll this back roll this back roll this forward and you replace whatever is missing by these now this again gives you some inaccuracies because this block right here was never intended to be attended to and all of a sudden you see you have the k6 in here so it gives you a bit of inaccuracy at the edges of the sequence but you can take that hit for the", "start_timestamp": "00:27:55", "end_timestamp": "00:28:37", "start_second": 1675, "end_second": 1717, "url": "https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1675s", "title": "Big Bird: Transformers for Longer Sequences (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/WVPE62Gk3EM/maxresdefault.jpg"} {"video_id": "WVPE62Gk3EM", "text": "increased performance that you gain by now having a rectangular matrix tpus are really efficient at this not as efficient as this and then the only thing that's really slow is gathering these random blocks right here but by having the same amount of random blocks per input token you'll end up with just one of these columns right here or r of these columns and that again gives you a rectangular matrix so this thing right here you can process very efficiently using a tpu and the mistakes you make", 
"start_timestamp": "00:28:37", "end_timestamp": "00:29:15", "start_second": 1717, "end_second": 1755, "url": "https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1717s", "title": "Big Bird: Transformers for Longer Sequences (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/WVPE62Gk3EM/maxresdefault.jpg"} {"video_id": "WVPE62Gk3EM", "text": "are basically this thing right here and this thing right here because those weren't intended and are at the edges of the sequence so these were the the tricks of big bird to quickly uh summarize uh big bird is basically taking a transformer saying well why do we need all of this attention all of this full attention maybe we only need some of that and can already do a big job a good job especially you know considering the attention mechanism goes over multiple layers so we don't need a routing from each token to each token", "start_timestamp": "00:29:15", "end_timestamp": "00:29:55", "start_second": 1755, "end_second": 1795, "url": "https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1755s", "title": "Big Bird: Transformers for Longer Sequences (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/WVPE62Gk3EM/maxresdefault.jpg"} {"video_id": "WVPE62Gk3EM", "text": "we we can make up for not having a fully connected graph by simply running multiple layers so their sparsity is first of all you have this random attention which i believe changes from sequence to sequence but stays within or among the layers of the same sequence then you have the window attention with the reasoning so the reasoning behind the random attention is that if you have a randomly connected graph the path lengths are on average logarithmic so you can route information efficiently the reasoning behind the window", "start_timestamp": "00:29:55", "end_timestamp": "00:30:29", "start_second": 1795, "end_second": 1829, "url": "https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1795s", "title": "Big Bird: Transformers for Longer Sequences (Paper Explained)", "thumbnail": 
"https://i.ytimg.com/vi/WVPE62Gk3EM/maxresdefault.jpg"} {"video_id": "WVPE62Gk3EM", "text": "attention is that probably neighbor information is very important and that has been shown empirically and then the global attention the reasoning behind this is that some of the tokens that are fixed by the developers are so important that it it's very beneficial that each other node is connected to them and that they are connected to each other node the result of that is the big bird attention mechanism which is basically long former which already had these two plus the random attention this achieves a linear", "start_timestamp": "00:30:29", "end_timestamp": "00:31:07", "start_second": 1829, "end_second": 1867, "url": "https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1829s", "title": "Big Bird: Transformers for Longer Sequences (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/WVPE62Gk3EM/maxresdefault.jpg"} {"video_id": "WVPE62Gk3EM", "text": "linear complexity in terms of of memory and compute though linear has to be qualified a bit because it's modified by the window size by the number of random attention tokens by the number of global tokens and in practice often ends up being you know fairly large-ish and also the the theoretical guarantees now come with the fact that you need multiple layers in the worst case you need sequence length amount of layers which you know in the worst case would result right back into a quadratic requirement for memory", "start_timestamp": "00:31:07", "end_timestamp": "00:31:46", "start_second": 1867, "end_second": 1906, "url": "https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1867s", "title": "Big Bird: Transformers for Longer Sequences (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/WVPE62Gk3EM/maxresdefault.jpg"} {"video_id": "WVPE62Gk3EM", "text": "and compute they do some engineering some engineering tricks right here and their results are pretty good so the results in various tasks and we'll we'll look at some of the tasks 
right here so these are dev set results using base size models for example where you can see they do outperform basic roberta models they outperform longformer which may mean that the random attention is useful but in these things it also always may just mean that you've thrown more compute at it at least i'm not really looking at whether", "start_timestamp": "00:31:46", "end_timestamp": "00:32:28", "start_second": 1906, "end_second": 1948, "url": "https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1906s", "title": "Big Bird: Transformers for Longer Sequences (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/WVPE62Gk3EM/maxresdefault.jpg"} {"video_id": "WVPE62Gk3EM", "text": "they outperform the models because as you can see right here if they compare to state of the art and granted these are models that have been trained specifically for these tasks and are crafted and engineered big bird manages to hold its own against them in a lot of tasks and even gets state of the art on some what i'm more interested in is that it can reach good numbers it doesn't necessarily have to be state of the art but it can reach good numbers which tells me that", "start_timestamp": "00:32:28", "end_timestamp": "00:33:02", "start_second": 1948, "end_second": 1982, "url": "https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1948s", "title": "Big Bird: Transformers for Longer Sequences (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/WVPE62Gk3EM/maxresdefault.jpg"} {"video_id": "WVPE62Gk3EM", "text": "okay probably the empirical hit that i take by not having the full attention is justifiable by the speedup and memory savings i do get especially when you see results mixed like this sometimes the other model is good and sometimes big bird is good on different variations and so on i would not make a big deal out of the fact that it is state of the art i get 
that the authors have to do that i would do so as well but you know", "start_timestamp": "00:33:02", "end_timestamp": "00:33:39", "start_second": 1982, "end_second": 2019, "url": "https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1982s", "title": "Big Bird: Transformers for Longer Sequences (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/WVPE62Gk3EM/maxresdefault.jpg"} {"video_id": "WVPE62Gk3EM", "text": "don't think that this is like the best thing now it's very probable they also just threw a lot of compute at it what is cool is they do some genomics experiments so not only do they have nlp state of the art but they also go into genomics and experiment with data there i don't want to go into that because ultimately it's another task and this paper is really about the architecture all right so that was big bird i hope you enjoyed this video and learned something i certainly did if you want to check out the proofs", "start_timestamp": "00:33:39", "end_timestamp": "00:34:19", "start_second": 2019, "end_second": 2059, "url": "https://www.youtube.com/watch?v=WVPE62Gk3EM&t=2019s", "title": "Big Bird: Transformers for Longer Sequences (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/WVPE62Gk3EM/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "okay it's absolutely great to be here I am the official Kaggle representative the Kaggle company representative for this presentation or for this event we have a major Kaggle team meeting going on next week so very few people were able to attend and so I'm happy to be here the event organizers actually contacted the Kaggle leadership and they asked could we get the smartest Kaggle employee to come give this presentation but unfortunately that person said no so then the organizers asked the", "start_timestamp": "00:00:00", "end_timestamp": "00:00:44", "start_second": 0, "end_second": 44, "url": 
"https://www.youtube.com/watch?v=etsayyDGiO0&t=0s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "leadership well maybe we can get the wisest Kaggle employee to come speak and unfortunately that person said no also so the organizers they got a little bit nervous and they thought oh maybe we can get the best-looking kaggle employee to come speak at this and at that point I was very uncomfortable turning down the organizers three times in a row so here I am actually the truth is none of that is true I work with an amazing team one of the things that kaggle values as a company is low ego and I think that is", "start_timestamp": "00:00:44", "end_timestamp": "00:01:18", "start_second": 44, "end_second": 78, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=44s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "really easy to do in this field because no matter how good we are there's always these super grandmasters who are so smart and they're so clever and they do so well and it's very easy for us to you know keep our egos in check so I'm going to talk about a topic that I think many of us probably struggle with and that is how do you keep up in a field that is constantly changing all the time right we know we need to keep our skills fresh but how do we do it and before I start I don't believe there's a right answer to", "start_timestamp": "00:01:18", "end_timestamp": "00:01:54", "start_second": 78, "end_second": 114, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=78s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": 
"https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "this so I'm gonna share what I believe is right and the kind of the strategy and methodology that I use and hopefully you'll be able to find a few things that are right for you so during the recent Kaggle de San Francisco event we had Francois Holly so he is the creator of the Charis deep learning framework and he spoke and he told an interesting story he joined Kaggle and entered his first competition and was competing and he wasn't doing very well so he thought well maybe kegels not for me he forgot about Kegel just didn't do anything for", "start_timestamp": "00:01:54", "end_timestamp": "00:02:33", "start_second": 114, "end_second": 153, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=114s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "a few weeks and then all of a sudden he got an email from Kaggle saying congratulations you've won most of us do not win the first Kaggle competition that we enter in fact most of us don't win any cattle competitions so it's very abnormal what are you shared most of us I think when we joined Kegel right some of us are very very skilled they come maybe with an advanced degree in data science or machine learning but a lot of us like myself come knowing almost nothing so so I'm gonna talk about some of the principles that will will help us", "start_timestamp": "00:02:33", "end_timestamp": "00:03:16", "start_second": 153, "end_second": 196, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=153s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "and again they're very applicable to me I really 
hope they transfer to many of you so I'm gonna start by talking about my education and my career and not because either one of those is particularly important but I really want you to understand where I was as an individual when I joined Kaggle in my first Kaggle competition so that's me a long time ago I got my first computer in 1983 it was a Timex Sinclair 1000 you've never heard of it because they went out of business in like a year it was the company Timex that makes the watches and", "start_timestamp": "00:03:16", "end_timestamp": "00:03:54", "start_second": 196, "end_second": 234, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=196s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "they wanted to get into computers it was an absolute disaster for them and by the way that's not a Timex Sinclair that's an Apple II that was my neighbor's that I used to go visit and use their computer but my first computer had 2 kilobytes of RAM 2 kilobytes of RAM just to put things in perspective and so I taught myself BASIC I really enjoyed it right I wasn't necessarily very good but I really enjoyed it my high school taught a class in Pascal programming probably maybe not super familiar to a", "start_timestamp": "00:03:54", "end_timestamp": "00:04:33", "start_second": 234, "end_second": 273, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=234s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "lot of you I enjoyed the class so then I thought well what do I want to do for a career and it turned out that everyone I knew was going into computers and I was worried by the time I graduated from 
college there'd be a flood of talent and all of the jobs would be taken right of course that was foolish because the industry took off faster than the talent pool and still to this day there are people going into computer science and there's demand for the talent so I thought well what should I study I chose Chemical Engineering and I chose it not", "start_timestamp": "00:04:33", "end_timestamp": "00:05:12", "start_second": 273, "end_second": 312, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=273s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "because I knew anything about Chemical Engineering I chose it because it sounded kind of cool Chemical Engineering well at least it sounded cool in 1987 so please don't judge me for that so when I was an undergrad I took a class in Fortran 77 I'm sure most of us have heard of Fortran and again I really liked it I went on to get a master's and for part of my master's I used Fortran to write a program to model the combustion of a coal particle as it burns and I really enjoyed it so I went on to get a PhD in chemical engineering and this", "start_timestamp": "00:05:12", "end_timestamp": "00:05:51", "start_second": 312, "end_second": 351, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=312s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "time a hundred percent of my work was computational so I did direct numerical simulations of three dimensional turbulent flow and the particle movement within that flow now interestingly enough I don't know if many of you know this but in Fortran originally you had to start in the seventh column right so space space space space 
space space and then you would start your statement and things like line numbers and whatnot could go in the first six but that was a throwback to the old punch card days and", "start_timestamp": "00:05:51", "end_timestamp": "00:06:27", "start_second": 351, "end_second": 387, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=351s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "with this was in the mid-90s they came out with Fortran 90 and Fortran 90 no longer needed you to have to worry about indentation which was really great right you could just write your program how you wanted and I thought it was fantastic how it unshackled me from having to worry about indentation now I also find it very ironic that in today's stage guido van rossum who invented python has reshackle me to a programming language that cares about indentation so one other point about this again just to give you an idea of how fast things", "start_timestamp": "00:06:27", "end_timestamp": "00:07:02", "start_second": 387, "end_second": 422, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=387s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "change so now we're a decade later we're in the mid 90s and I had an IBM workstation and I had received a $10,000 grant for equipment for my research and I spent that entire $10,000 to get a memory module for this workstation that was 256 megabytes of RAM which was you know a very large Ram back then but $10,000 for that so things change so I got my first job out of school and so everything up to them was great loving life loving programming loving science loving Chemical Engineering and then I got a job in an 
industry that uses", "start_timestamp": "00:07:02", "end_timestamp": "00:07:45", "start_second": 422, "end_second": 465, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=422s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "chemical engineering it was the paper industry and my job was to optimize air and drying systems for drying cardboard right we use cardboard you know they make it wet it's got to be dried and I can't say I really enjoyed it you can see this is a person you'd have to go and stand up here and you'd have this long temperature measurement device that you'd stick in there and measure the temperature of these dryer cans then you'd do the same thing with the humidity and you would do it on every single one of", "start_timestamp": "00:07:45", "end_timestamp": "00:08:19", "start_second": 465, "end_second": 499, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=465s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "those on the back and the front it was hot it was sweaty it was not what I went to college for nine years to do so I decided to upgrade my career and then I went into consumer products making diapers or nappies as they call them in some parts of the world slightly more glamorous I suppose but still not necessarily giving me the career fulfillment that I had always dreamed of and by the way the technology that goes into making diapers is actually pretty amazing it's just not glamorous work so now it's the early", "start_timestamp": "00:08:19", "end_timestamp": "00:08:56", "start_second": 499, "end_second": 536, "url": 
"https://www.youtube.com/watch?v=etsayyDGiO0&t=499s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "2000s and I am absolutely bored out of my mind with my work it's just not exciting I I don't enjoy going to work and my company offered a distance education catalog and I was looking through there and they had a course on computational intelligence and that was three things it was genetic algorithms that was considered computational intelligence back then fuzzy logic which was super hyped up at about that time that I don't know does anyone use fuzzy logic anymore probably not and neural networks so I took this", "start_timestamp": "00:08:56", "end_timestamp": "00:09:32", "start_second": 536, "end_second": 572, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=536s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "course now this was pre Coursera days they actually physically mailed you DVDs and you would watch the lectures you know on your television you would do your homework you would email the instructor two weeks later he'd email you back your grades so that was the distance education that was back then I really found it interesting this idea so they taught a very early version of convolutional neural networks they didn't work well they were very narrow you know they're very shallow and we actually didn't program the so we're", "start_timestamp": "00:09:32", "end_timestamp": "00:10:10", "start_second": 572, "end_second": 610, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=572s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": 
"https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "still on windows so the instructor would he emailed us a precompiled binary we would run it on our Windows machine which was probably a single-core CPU or back then but even if it had more than one cards we're only using a single core but we would try to train this thing and and but the the problem was these little pop-can images I should have put a picture and I was blown away that a computer could recognize was it a diet coke was it a dr. pepper was it a sprite I was really fascinated by that the problem was is once I tried to apply", "start_timestamp": "00:10:10", "end_timestamp": "00:10:45", "start_second": 610, "end_second": 645, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=610s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "this technology to anything in the real world like so in in in consumer product manufacturing they're taking pictures of every that's being made to do some simple measurements and what and I thought well maybe we can use Ural networks for this could never get it to work so I said well neural networks interesting but not useful so so then fast forward another ten years I'm still bored out of my mind at work right but hey families got to eat and I stumble across this challenge it was the Merck molecular activity", "start_timestamp": "00:10:45", "end_timestamp": "00:11:22", "start_second": 645, "end_second": 682, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=645s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "challenge on a platform called kaggle it was fairly simple all you had to do is 
predict the molecular activity of these molecules based on tens of thousands of molecular descriptors and I've got a PhD in chemical engineering I'm really good at data analysis top prize was $25,000 so I thought to myself this is gonna be the easiest $25,000 I've ever made in my life right I was literally picking out the car I was gonna buy so the person who won that competition does anyone know who that is it's Geoff Hinton so his research lab won", "start_timestamp": "00:11:22", "end_timestamp": "00:12:04", "start_second": 682, "end_second": 724, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=682s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "the competition using deep neural networks and as Maria said there were two hundred and thirty six teams and I came in 23rd but it was from the bottom I came in twenty-third from the bottom and it wasn't because I wasn't trying I was literally trying my hardest and it was very demoralizing but it was also a wake-up call because I realized there were all of these things that I had no idea that I didn't know so it was like a window opened into", "start_timestamp": "00:12:04", "end_timestamp": "00:12:40", "start_second": 724, "end_second": 760, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=724s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "
learning and took that course and again I was blown away that here we have a course that would literally have cost thousands of dollars for me to take it's online it's for free so things were good for a while things were good because you could take that course there of course you could", "start_timestamp": "00:12:40", "end_timestamp": "00:13:17", "start_second": 760, "end_second": 797, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=760s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "read the few papers that came out Geoff Hinton came out with a few papers but suddenly there is a problem this is just a screenshot of Coursera today and there are more data science and machine learning classes than any person here can take even if it's full-time if you look at the number of machine learning arXiv papers that are published every year you can't keep up so on the one hand we live in an amazing time where you can learn the things you need to - like in my case change my career or become good at Kaggle on the other hand", "start_timestamp": "00:13:17", "end_timestamp": "00:13:56", "start_second": 797, "end_second": 836, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=797s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "it's too much and you can't keep up and it's information overload so my question is given the fact that everyone here must keep up their data science skills and their machine learning skills we have to keep up on the one hand on the other hand there's too much to learn so what do you do so the question is how do we keep up this is probably the most important
point to understand is you don't you can't keep up with everything new in machine learning and data science there", "start_timestamp": "00:13:56", "end_timestamp": "00:14:33", "start_second": 836, "end_second": 873, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=836s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "are people who are deep learning experts who can't even keep up in the deep learning field and once you recognize that it kind of then reframes your mind to say well if I'm not going to learn everything what can I learn and how do I learn it so I'm gonna give three guidelines okay on how you can continue learning - like if you're brand new how you start your learning journey if you're experienced how you continue and the three guidelines are be sensible be smart and be systematic now I was thinking about branding these the three", "start_timestamp": "00:14:33", "end_timestamp": "00:15:08", "start_second": 873, "end_second": 908, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=873s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "BSs of learning data science but I'm not sure if that's the best branding you can give me feedback if that's catchy enough so be sensible okay all right I crashed and burned on my first Kaggle competition and like I said there was an enormous amount of things I had to learn number one my computer was terrible it was four years old had eight gigabytes of RAM and I realized if I want to compete so I started upgrading my computer more RAM a better CPU more disk space even more RAM a new motherboard right so I started that race", "start_timestamp":
"00:15:08", "end_timestamp": "00:15:41", "start_second": 908, "end_second": 941, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=908s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "that many of us do okay that got me kind of far I was only using MATLAB and some proprietary software neither of which you can actually use to win a kaggle competition and at least at the time MATLAB wasn't the best tool so I realized I've got to learn Python so that was another big major learning step the second competition I entered I actually did okay I came in 34th out of 950 34th from the top this time and I used logistic regression not because it was the right tool it was literally the only algorithm I knew in machine learning so", "start_timestamp": "00:15:41", "end_timestamp": "00:16:22", "start_second": 941, "end_second": 982, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=941s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "I used it right there may have been veterans uh let's see I know who won like there's some people here I know one that I can't remember I know there's some grandmasters here that actually won that competition so then I learned you know you start seeing XGBoost XGBoost is winning everything so then you learn XGBoost now uh deep learning I was sure that deep learning was hype just like fuzzy logic and I avoided learning deep learning over time you know a year later I realized okay this is going to take over the world so I started", "start_timestamp": "00:16:22", "end_timestamp": "00:16:55", "start_second": 982, "end_second": 1015, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=982s", "title":
"Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "learning it well that required hardware a GPU it required you know I was using Windows so I better move over to Linux every time I made this shift I felt dumb again I felt stupid I felt like I don't know what I'm doing because it was a huge learning curve so when I say be sensible I want you to recognize that if you start feeling good about yourself and how good you are it's time to jump into something else and to kind of stretch yourself and feel a little bit intimidated again and that's perfectly normal and that's", "start_timestamp": "00:16:55", "end_timestamp": "00:17:31", "start_second": 1015, "end_second": 1051, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=1015s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "gonna continue to happen I would say anyone who wants to be successful in this area has to get used to feeling insecure about what they know and what they don't know happens to me all the time I don't like when I say something and it's a mistake or I argue a position and it's wrong on the one hand on the other hand I'm actually glad because right it improves me and I get better so be sensible set your expectations to something that's reasonable so the next thing is be smart and what this means is you have to build", "start_timestamp": "00:17:31", "end_timestamp": "00:18:06", "start_second": 1051, "end_second": 1086, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=1051s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"}
{"video_id": "etsayyDGiO0", "text": "a learning plan for you if you go to github all these people say hey a step-by-step learning plan that might be okay it's probably not for you right and the reason is we're all in different places if you go through somebody's learning plan that is too advanced or not advanced enough you will not do well so be smart so here's what I recommend when you set a learning plan excuse me so say a learning plan say you have five hours a week to learn all right you say all right every week I'm gonna learn and spend five hours", "start_timestamp": "00:18:06", "end_timestamp": "00:18:43", "start_second": 1086, "end_second": 1123, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=1086s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "this is how I would recommend doing it number one and I was talking to Pavel earlier he's very big on this strengthen your fundamentals so that could be the fundamentals of linear algebra the fundamentals of statistics those things don't change for me most of the time I spend in there is learning Python deeper right learning the language my favorite book to read and reread is called Learning Python it's this thick it takes me forever to get through every time they come out with a new edition I start over again to refresh", "start_timestamp": "00:18:43", "end_timestamp": "00:19:15", "start_second": 1123, "end_second": 1155, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=1123s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "those fundamentals the other piece of that is I spend a lot of time learning and relearning pandas every time there's a
new edition I learn what's new and I also scan the API for things that oh maybe I didn't realize there was something cool there so strengthen the fundamentals of whatever you need to learn the second big chunk is your core work and your core tools so this is whatever you're doing for your project your problem you know the work you do at work for me as a Kaggle data scientist a huge amount of that is domain knowledge so", "start_timestamp": "00:19:15", "end_timestamp": "00:19:53", "start_second": 1155, "end_second": 1193, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=1155s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "somebody comes to us and says we want to host the competition on earthquake signals I've got to spend time learning about that so I can understand I can ask the right questions when Kaggle was acquired by Google all of a sudden we went from using our local machines to now we're using GCP VMs right so learning kind of that system stuff I hate to bring this up but we constantly also try to learn the new ways that leakage can enter the data because there's so many thousands and thousands of ways insidious ways for leakage to go", "start_timestamp": "00:19:53", "end_timestamp": "00:20:30", "start_second": 1193, "end_second": 1230, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=1193s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "and trying to build our systems around how to minimize that that will be different for every one of you and then the last is spending time on the cutting edge now this is where most people make the biggest mistake when they try to keep up with the industry
they spend way too much time because there's all this cool stuff there's capsule networks for neural networks or you know the ordinary differential equation neural network stuff that was from the NeurIPS conference so some really cool things", "start_timestamp": "00:20:30", "end_timestamp": "00:21:06", "start_second": 1230, "end_second": 1266, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=1230s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "and I am NOT dissuading anyone from keeping up on those but if you spend the majority of your time learning those and trying to keep up with the speed you will actually be sub-optimizing what's critical for you there are many tools that are going to come out that you'll be able to use right out of the box like BERT the pre-trained NLP model that Google released right that's something that is cutting-edge and you should be able to pull it right back into your you know your tool set so that's approximate you know I would", "start_timestamp": "00:21:06", "end_timestamp": "00:21:42", "start_second": 1266, "end_second": 1302, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=1266s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "say you don't necessarily need to balance that on a week-to-week basis but over the long term try to balance that one other point when it comes to learning new things I strongly believe that if you're not writing code and trying it out you're not actually learning it so I fall into this trap I watch a lot of YouTube videos like when PyCon comes out I like to watch as many of the Python videos as possible and unfortunately most of
the time I'm not actually implementing anything I watch it and I go yeah that's great", "start_timestamp": "00:21:42", "end_timestamp": "00:22:19", "start_second": 1302, "end_second": 1339, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=1302s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "that's a great idea and then I promptly forget it so if you see something that's important make it a project and get it into your repertoire so the last BS is be systematic and this is the secret to sustainability this is how you can do this in the long term and not two years from now say oh I haven't kept up so what do I mean by be systematic I am a big fan of simple checklists and some of you may have read it many years ago I wrote a post it was my most popular forum post on Kaggle that said", "start_timestamp": "00:22:19", "end_timestamp": "00:22:57", "start_second": 1339, "end_second": 1377, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=1339s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "here's my standard work that I do for every competition and it literally is I update my conda environment I you know whatever so I do any environment updates I create a new repo I download the data I you know so it's just a very simple checklist of things I do so I use multiple machines so I make sure everything syncs out all those machines it wasn't anything that was complicated but the reason I use checklists is because it reduces cognitive friction when you're like oh especially with Kaggle we're like oh it's really late I", "start_timestamp": "00:22:57", "end_timestamp": "00:23:34", "start_second": 1377,
"end_second": 1414, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=1377s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "really want to start this competition you don't want to have to think about it so having a checklist allows you to go boom boom boom boom boom and do it without having to think pilots use checklists pilots are some of the most skilled practitioners about what they do and the amount of training they have to go through they use checklists because it reduces cognitive friction if they have a headache if they're tired it makes sure they do the appropriate things so how do you use a checklist or a system when you're learning for me I", "start_timestamp": "00:23:34", "end_timestamp": "00:24:08", "start_second": 1414, "end_second": 1448, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=1414s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "have a list of things that I do every day to keep my learning up to date the first thing is super simple I have a bookmark bar for five different arXiv listings so arXiv right the paper repository arXiv topics fluid dynamics data and statistics vision machine vision and machine learning and earth sciences I open those up every morning literally before I do anything before I check email I check what those are so that's one of the things I do it's a habit it's sustainable the other thing I do is right any time that any of the PyData or", "start_timestamp": "00:24:08", "end_timestamp": "00:24:46", "start_second": 1448, "end_second": 1486, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=1448s", "title": "Keeping Your Skills Fresh When Everything is
Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "the PyCon videos come out I make a list of all the ones I want to watch and I don't get to more than ten percent of them but I make the list and then I work through them systematically till the next ones come out all right so as I mentioned there are so many courses it's actually intimidating where do you start what's the right course I have become a big fan of Kaggle Learn not just because it's put out by Kaggle but because I really actually agree with the philosophy so if you think of a MOOC the time to make the", "start_timestamp": "00:24:46", "end_timestamp": "00:25:30", "start_second": 1486, "end_second": 1530, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=1486s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "presentation to record the presentation to transcribe the presentation to host the presentation it's a very very very very long time scale what Kaggle Learn has done is say we're gonna take some very key important topics and we are going to let you do it in four hours and we're gonna focus on those key important things and get you pushing the buttons and learning these things what's nice about this there's no video actually one of the courses had videos but they're gonna get rid of them is they can freshen them as much as once", "start_timestamp": "00:25:30", "end_timestamp": "00:26:02", "start_second": 1530, "end_second": 1562, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=1530s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id":
"etsayyDGiO0", "text": "a week somebody will ask you a question and they'll realize oh we can make this better so they'll refresh the content so for example if you look at this it's kind of small but here's the current Kaggle Learn everything from python machine learning data visualization SQL micro-challenges and the new one machine learning explainability maybe you know those things but maybe you want a refresher it's really really fun and easy to go through so I'd recommend checking out one so here's like this one on", "start_timestamp": "00:26:02", "end_timestamp": "00:26:34", "start_second": 1562, "end_second": 1594, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=1562s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "embeddings I had never gotten to it before I went through it yesterday and I thought oh yeah it makes a lot of sense so Kaggle Learn is a great place if you ever feel lost go back to Kaggle Learn obviously there's tons of great stuff out there but if you're ever paralyzed about where you should go go to Kaggle Learn this is an example right so they walk through with words they give you some code you run it it kind of lets you know if you were wrong it gives you hints on the very common errors and it's", "start_timestamp": "00:26:34", "end_timestamp": "00:27:07", "start_second": 1594, "end_second": 1627, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=1594s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "fun it's interactive it's in kaggle kernels so you can access it anywhere and now it actually keeps track of your progress so you know if you
stopped one halfway through it'll tell you which modules you've done all right the other thing and this kind of surprised me if you go to youtube.com forward slash Kaggle there is an amazing amount of content and it's added to weekly everything from coffee chats to interviews with for example practitioners at Google Brain all sorts of presentations live coding gets recorded", "start_timestamp": "00:27:07", "end_timestamp": "00:27:42", "start_second": 1627, "end_second": 1662, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=1627s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "there so this is another great example of a place you can start there's probably too much content there for you to keep up on and that's okay right again give yourself the permission to skip things and focus along those lines there's a point I wanted to make every Kaggle competition that I joined my primary goal was to learn something new so embeddings for example I'm not an NLP expert so on Kaggle I did a playground competition Spooky Author so it was like three authors given a sentence predict which one so I did that with the", "start_timestamp": "00:27:42", "end_timestamp": "00:28:19", "start_second": 1662, "end_second": 1699, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=1662s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "sole goal to learn NLP you may want to learn okay finally I'm gonna learn convolutional neural networks or LSTMs or some sort of you know whatever you need so if you do that no matter where you place you will have won you will have had time well-spent so I want to close up a little bit with this
quote it is by far my favorite quote it's a little bit intense so I want to explain it says the illiterate of the 21st century will not be those who cannot read and write but those who cannot learn unlearn and", "start_timestamp": "00:28:19", "end_timestamp": "00:28:54", "start_second": 1699, "end_second": 1734, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=1699s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "relearn a hundred years ago so many people didn't know how to read or write and that prevented them from being in the workplace in today's world not just in data science and machine learning but almost any vocation you have to learn you have to be able to learn to unlearn and to relearn so that to me is the most important reason for having yourself a plan that's sensible that's smart and that's sustainable so that you can learn whatever skills and do that five 10 20 30 years from now to keep up and have", "start_timestamp": "00:28:54", "end_timestamp": "00:29:31", "start_second": 1734, "end_second": 1771, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=1734s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "whatever professional success that you want and with that thank you and I'll open it up to questions one second Ronnie Heflin let's see who has the mic thank you oh that's a great question I'm sure there are but I can't recommend any and the reason being is that's just not something I've looked at does anyone here know of good data science for kids I would say I don't know for sure but Khan Academy is one that has a lot of
great content", "start_timestamp": "00:29:31", "end_timestamp": "00:30:39", "start_second": 1771, "end_second": 1839, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=1771s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "but I don't know if they have data science no okay so sorry great question I'm gonna write that down and we'll follow up yes sir [Music] yeah it's a really good question and I actually don't believe it's the type of data I believe it's the volume of data and what I mean by that is if your company wants to do a very large data project and you have data in multiple legacy databases that sit in multiple organizations that can be extremely complex and hard to get my recommendation is always to start with the smallest viable scope so try to get", "start_timestamp": "00:30:39", "end_timestamp": "00:32:18", "start_second": 1839, "end_second": 1938, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=1839s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "a little bit of all the data if you can or just get the data that seems the most reasonable and start doing model building on that the biggest mistake companies make and I've seen this many times is they say well we're gonna have a two-year project to get all of the data collected and then they get all the data collected and they don't have the signal that they need or only five percent of that data was important so I would say just like it was mentioned by grandmasters when you're model building you go quick you go small you go fast", "start_timestamp": "00:32:18", "end_timestamp": "00:32:49", "start_second": 1938, "end_second": 1969, "url":
"https://www.youtube.com/watch?v=etsayyDGiO0&t=1938s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "and then go from there I'd say the same with data so then you may say there's some very expensive data here that's worth the time cleaning up it's worth the expense but then you'll find data over here that's not worth the time okay the biggest mistake is companies say well let's clean up all of our data and that's just a waste of time thank you question over here a question to take you back to your fluid dynamics Navier-Stokes yes and the reason is because the Navier-Stokes is a you know", "start_timestamp": "00:32:49", "end_timestamp": "00:33:43", "start_second": 1969, "end_second": 2023, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=1969s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "a three dimensional coupled nonlinear equation well neural networks theoretically can approximate any function so neural networks can approximate the function so I just read a paper last week it was sent to me by a professor in Germany and so there's a type of flow where you have a temperature differential and then you get these circulation patterns and if you've ever seen a close-up of the surface of the Sun you see these like pockets that's what causes that and what they did is they did very very", "start_timestamp": "00:33:43", "end_timestamp": "00:34:21", "start_second": 2023, "end_second": 2061, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=2023s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle",
"thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "high-resolution simulations of this and then they trained neural networks to identify to segment the boundaries of this now they get the same results as these expensive simulations but it's orders of magnitude faster and there's a paper that came out a couple of years ago where I think Google was a co-author where they simulate like a smoke plume but then use neural networks to approximate that so now you can model those dynamics so you will see more and more of that but the answer is absolutely yes it is exciting work I'll", "start_timestamp": "00:34:21", "end_timestamp": "00:35:00", "start_second": 2061, "end_second": 2100, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=2061s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "say another thing you're seeing more of this in chemical kinetics so chemical kinetics when you have multiple chemicals you get stiff nonlinear differential equations because you have very fast reactions and very slow ones and people are now using neural networks to model the system of chemicals so you will see more of that yes I am NOT a deep learning expert but I crush because our neighbors are noisy and we do do-dah like 1 million picture is just fine but do you never you are dead picture so great great", "start_timestamp": "00:35:00", "end_timestamp": "00:36:09", "start_second": 2100, "end_second": 2169, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=2100s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "question so I'll answer that generally
first so in general you want to pick the tool that's appropriate there's no reason to do a very fancy deep net if you have a small amount of data and you can do logistic regression then it works fine so I'm a big fan of using the simplest tool now with neural networks we have the advantage of transfer learning where Google has spent a month training a bank of GPUs on these large networks and so you can use those and actually probably not a hundred but maybe a thousand images because you", "start_timestamp": "00:36:09", "end_timestamp": "00:36:41", "start_second": 2169, "end_second": 2201, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=2169s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "have all of these pre-trained models so you know something like that I always say try it right if you have a thousand images kind of fine-tune a pre-trained model on 500 see how well it does on the other 500 it's going to be very context dependent I think you know there's some image tasks that are extremely complicated that wouldn't work but there's some simple ones yeah that's a great question and you know to me to be very frank I am very bad at predicting what's next I've always seen what's happening and then acted upon that you know there's a", "start_timestamp": "00:36:41", "end_timestamp": "00:38:00", "start_second": 2201, "end_second": 2280, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=2201s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "lot of people that talk about these things and you know I don't pay a lot of attention to them to be fair I just watch for tools that I can use in my work I will say this
you know data science machine learning AI it's not going away in ten years it's not going away in 20 years I think it's a fantastic field in the end so again it's the people that are willing to keep up with whatever happens to be important and learn that who I think are gonna do very well for themselves but again if I knew the answer to", "start_timestamp": "00:38:00", "end_timestamp": "00:38:36", "start_second": 2280, "end_second": 2316, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=2280s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "that I'd be on the main stage I think we have maybe time for one more question oh yes yes then we'll start our afternoon break so we have like three or four minutes do you have any other questions [Music] yeah yeah so if I understand, your question is how do we help those who maybe are discouraged along the path yeah that's a hard question you know so I came to Kaggle with a PhD in chemical engineering and a good career and I remember the sense of being very intimidated to ask anything on the", "start_timestamp": "00:38:36", "end_timestamp": "00:39:43", "start_second": 2316, "end_second": 2383, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=2316s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "forums right I felt I wasn't worthy I felt it was a stupid question that everyone knows but me I think the more that we tell those stories that that's natural that's okay everyone goes through that the better right that's why I'm happy to tell my story how poorly I did I tell my story how frightening it was to ask the question
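The "fine-tune on 500 images, see how it does on the other 500" advice above is just a train/holdout split. As a minimal self-contained sketch of that idea, here is a hand-rolled logistic regression (the simplest tool the speaker recommends) trained on one half of a synthetic 1-D dataset and scored on the other half; the data and numbers are purely illustrative:

```python
# Train on one half, validate on the other: a tiny logistic-regression sketch
# in pure Python (no external libraries). The synthetic dataset stands in for
# "a thousand images"; in practice you'd fine-tune a pre-trained model instead.
import math
import random

random.seed(0)

# Toy 1-D data: class 1 clusters around +2.0, class 0 around -2.0.
data = [(random.gauss(2.0, 1.0), 1) for _ in range(100)] + \
       [(random.gauss(-2.0, 1.0), 0) for _ in range(100)]
random.shuffle(data)

train, holdout = data[:100], data[100:]   # "fine-tune" on one half ...

w, b, lr = 0.0, 0.0, 0.1
for _ in range(200):                      # plain stochastic gradient descent
    for x, y in train:
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # predicted probability
        w -= lr * (p - y) * x
        b -= lr * (p - y)

# ... and see how well it does on the other half.
correct = sum((1.0 / (1.0 + math.exp(-(w * x + b))) > 0.5) == (y == 1)
              for x, y in holdout)
accuracy = correct / len(holdout)
print(f"holdout accuracy: {accuracy:.2f}")
```

The point is the split, not the model: whatever classifier you use, holding out data you never trained on is what tells you whether the approach works in your context.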
on Kaggle but we can also do a good job I think so at Kaggle we try to monitor the forums and I can't say that I do this all the time but if I see somebody that you know they're new and", "start_timestamp": "00:39:43", "end_timestamp": "00:40:19", "start_second": 2383, "end_second": 2419, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=2383s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "they're trying to say hey I'm new to this you know as much as possible I try to point them to hey just keep at it don't get discouraged so you know every one of you here can do that you don't have to feel like you're a grandmaster to give people encouragement so I really appreciated that question and regardless of where your level is I think you have the opportunity to do that and again that's what makes Kaggle a wonderful community I love the Kaggle forums there's so much useful content people encourage each other a month ago is actually when I met", "start_timestamp": "00:40:19", "end_timestamp": "00:40:54", "start_second": 2419, "end_second": 2454, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=2419s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "my first Kaggler I had never met somebody from Kaggle but I feel like I know them just by the online interaction so it's a wonderful place and we just have to convince people that they can do that at home themselves thank you for that hey do you want to ask any more questions okay Pavel I can see your hand thank you but it will be the last question so your question is my favorite grandmaster so probably he's maybe not as well known but a Kaggler from Germany Mathias Mueller I
liked him because I ended", "start_timestamp": "00:40:54", "end_timestamp": "00:42:04", "start_second": 2454, "end_second": 2524, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=2454s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "etsayyDGiO0", "text": "up with him he was always very helpful he was always very friendly he explained things in an easy way for me to understand but you know to be fair there's so many Kaggle grandmasters that I appreciate for all different reasons like Abhishek I mean he's just such an interesting character a funny guy and he has a lot to teach people yeah and I can go through the list but I'd say Mathias Mueller from Germany now in the Bay Area but actually maybe back in Europe he is my favorite", "start_timestamp": "00:42:04", "end_timestamp": "00:42:39", "start_second": 2524, "end_second": 2559, "url": "https://www.youtube.com/watch?v=etsayyDGiO0&t=2524s", "title": "Keeping Your Skills Fresh When Everything is Changing | by Walter Reade | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/etsayyDGiO0/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "hi there today we're looking at fast reinforcement learning with generalized policy updates by André Barreto, Shaobo Hou, Diana Borsa, David Silver and Doina Precup so on a high level this paper proposes a framework for reinforcement learning where you have many tasks at the same time and they propose a framework where they learn many policies at the same time that can or cannot correspond to these tasks and then their argument is that if you now have a new task that you haven't seen before you can easily construct a solution to", "start_timestamp": "00:00:00", "end_timestamp": "00:00:37", "start_second": 0, "end_second": 37, "url":
"https://www.youtube.com/watch?v=9-o2aAoN0rY&t=0s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "that task from your old policies basically mixing what you learned about your old tasks and it's a pretty general framework and we're going to look at it in my opinion it's it's pretty cool for certain settings however i think it kind of breaks down the the more general you go which i guess is expected um of such a framework but uh it's as you can see it's kind of math heavy but we'll get into the examples and um what it's potentially useful for all right so that was it on a high level if you like content like this don't hesitate to", "start_timestamp": "00:00:37", "end_timestamp": "00:01:15", "start_second": 37, "end_second": 75, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=37s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "subscribe to the channel and share it out leave a like and tell me in the comments what you think i'm still reading all of them uh so i will see it cool let's dive in so they say the combination of reinforcement learning with deep learning is a promising approach to tackle important sequential decision making problems that are currently intractable well they're taking they're talking about you know things like um mostly these game playing ais like go and things like this so we're this combination of deep learning with", "start_timestamp": "00:01:15", "end_timestamp": "00:01:53", "start_second": 75, "end_second": 113, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=75s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": 
"9-o2aAoN0rY", "text": "reinforcement learning has really shined or shun whatever one obstacle to overcome is the amount of data needed by learning systems of this type so again if you look at these systems like alphago they need a simulator and they need to collect enormous amounts of data um even more so with systems like the dota ai the openai5 dota or starcraft playing alpha star i think it's alpha star they need so many simulations in order to learn about the tasks because they always start from scratch in this article they say", "start_timestamp": "00:01:53", "end_timestamp": "00:02:35", "start_second": 113, "end_second": 155, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=113s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "we propose to address this issue through a divide and conquer approach we argue that complex decision problems can be naturally decomposed into multiple tasks that unfold in sequence or in parallel by associating each task with a reward function this problem decomposition can be seamlessly accommodated within the standard reinforcement learning formalism okay so what are they saying right here they are basically saying that if you have a task let's say you want to get whoopsie from here to here and that's very complicated", "start_timestamp": "00:02:35", "end_timestamp": "00:03:11", "start_second": 155, "end_second": 191, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=155s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "let's make it complicated super duper complicated you can basically subdivide that task into multiple subtasks right so here it's like left turn right turn go straight left turn go straight right turn and so on and 
each of these subtasks you can see the two right turns here might share a lot of common information there could also be tasks that are at the same time like you need to go forward and jump can be decomposed into going forward and to jump now they're saying is if each of these tasks now has its separate reward function in", "start_timestamp": "00:03:11", "end_timestamp": "00:03:48", "start_second": 191, "end_second": 228, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=191s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "the environment like for some reason the environment tells you this by the way is task task one and you're gonna get a positive reward if you do a right turn and this down here is task two the the left turn task and you're gonna get a positive reward if for that task so the entire task state can be decomposed into a vector so in our case here we have maybe a vector with three elements okay the three elements correspond to turn right go straight and turn left and now your this this right here is your reward vector so we're no longer talk in this", "start_timestamp": "00:03:48", "end_timestamp": "00:04:33", "start_second": 228, "end_second": 273, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=228s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "framework we're no longer talking about just a reward we're talking about a reward vector now each of these tasks is going to give you its own individual reward so let's say you're here and you're actually turning right this is going to give you a reward of one for this task but reward of zero for the other task okay so the environment will somehow tell you which tasks you you get reward for now there is 
a notion where you can map this back to a single number and that is the second thing they introduce here so the second thing they", "start_timestamp": "00:04:33", "end_timestamp": "00:05:11", "start_second": 273, "end_second": 311, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=273s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "introduce here is this thing they call w so w is going to be a mixing vector w is going to be a vector i will call w right here this is the reward vector w is going to be the vector that tells you your final reward so here we're going to do an inner product so we're going to transpose this and multiply by w and w mixes these rewards and comes up with your final reward right here so this this is maybe the reward vector this is the reward number how we're going to call this reward number so in this case w would have to look", "start_timestamp": "00:05:11", "end_timestamp": "00:05:55", "start_second": 311, "end_second": 355, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=311s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "something like this let's say this is an example so the task right here would be to only do right turns now this is not a really nice example we're going to see some nicer examples later on but you can see that now the environment is specified as a vector of rewards and you can create the specific tasks like turning right simply by adjusting how you mix these different things by this vector w and this is going to be the key ingredient here so they discuss your general reinforcement learning the reinforcement learning lingo and i", "start_timestamp": "00:05:55", "end_timestamp": "00:06:34", "start_second": 355, 
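The mixing step described here, scalar reward equals reward vector times w, is just an inner product. A tiny sketch, using the three-entry (turn right, go straight, turn left) layout from the example; the numbers are illustrative:

```python
# Reward mixing as described in the video: the environment emits a reward
# *vector* (one entry per sub-task), a task is specified by a mixing vector w,
# and the scalar reward is their inner product r = <reward_vec, w>.

def mix_reward(reward_vec, w):
    """Scalar reward r = <reward_vec, w>."""
    return sum(r * wi for r, wi in zip(reward_vec, w))

# Entries: (turn right, go straight, turn left)
w_right_only = [1.0, 0.0, 0.0]   # the "only do right turns" task

print(mix_reward([1, 0, 0], w_right_only))  # right turn -> 1.0
print(mix_reward([0, 0, 1], w_right_only))  # left turn  -> 0.0
```

Changing the task means changing only `w`; the reward vector the environment emits stays the same.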
"end_second": 394, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=355s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "think we've gone through this a number of times just very very quickly uh in reinforcement learning you're given these transitions you are in a state you take an action and that leads you to get a reward or prime and you get into a state s prime in the next state they say the reward is given by the reward function so the reward is purely a function of where you are and what you do and where you get to now most reinforcement learning problems you can actually kind of forget about this part right here because well it isn't it is kind of", "start_timestamp": "00:06:34", "end_timestamp": "00:07:15", "start_second": 394, "end_second": 435, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=394s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "important but you could um most reinforcement learning problems the reward is simply a matter of where you are and what you do and this can be a random variable there can be randomness but maybe it's easier if you for now think about the reward simply as a function of these two things so what you want to discover is a policy pi where you input you input where you are and the output is going to be what should you do in that situation okay uh that is a policy and associated with each policy is this thing called a q function", "start_timestamp": "00:07:15", "end_timestamp": "00:07:52", "start_second": 435, "end_second": 472, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=435s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": 
"https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "so you can see right here the q function of a policy um is going to be a function of where you are and what you do and this is a bit confusing but it basically means that you are in state s so you are here and you have let's say three options action one action two action three to do now the q function tells you the q function this is s and the a's are the numbers okay so let's say we plug in the state s and for a we plug in number two what it will tell you is if i am in state s and i perform action number two", "start_timestamp": "00:07:52", "end_timestamp": "00:08:34", "start_second": 472, "end_second": 514, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=472s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "then how valuable is that for me and value is defined by all the reward that i'm going to pick up from now until the end of time or the end of the episode it depends um but let's say until the end of time well how much how much reward am i going to pick up from now until the end of time is a bit of a vague not a vague question but a difficult question i can tell you how much i could estimate how much reward i'm going to pick up in the next step because i know what action i'm doing i'm performing action number", "start_timestamp": "00:08:34", "end_timestamp": "00:09:09", "start_second": 514, "end_second": 549, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=514s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "two but what happens after that who knows so that's where this policy right here comes in this policy right here says so the full definition of the q function 
is if i'm in state s and i perform action a right now and after that i follow policy pi what is my reward going to be well now it's well defined so right now you do action a and after that you do whatever action the policy tells you in that specific situation okay so that's the q function and you can pretty easily see that if you have a q function right if", "start_timestamp": "00:09:09", "end_timestamp": "00:09:48", "start_second": 549, "end_second": 588, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=549s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "you have an accurate q function you can get a good policy by simply always going with the action that gives you the highest q value because um it's because of a recurrence relationship called the the bellman equation uh this thing right here so your q function basically decomposes into the reward in the next step as we said plus whatever happens after that and whatever happens after that is just by the nature of how the things are defined is going to be the q function of whatever the policy is telling you so you can get a pretty good policy by", "start_timestamp": "00:09:48", "end_timestamp": "00:10:26", "start_second": 588, "end_second": 626, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=588s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "always doing whatever action your q function tells you is best this step of calculating the q function is called a policy evaluation and this paper here is going to generalize these notions um sorry so this is a policy evaluation and then the act of selecting an action is going to be a policy improvement these are just names okay but we need to know them because the 
paper introduces two new things i'm going to where do i highlight policy evaluation i don't know but here they say this is the policy improvement", "start_timestamp": "00:10:26", "end_timestamp": "00:11:13", "start_second": 626, "end_second": 673, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=626s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "okay ah here policy evaluation policy improvement these are the two steps so the first step is calculate the queue function the second step is to select an action and you can see how these things interlock namely we can calculate the q function of a given policy and we can improve that policy by selecting whatever action is best for the q function this paper generalizes this and you can see that there is a little a little r right here so the r is just a specific way to reference the reward function used right here okay and", "start_timestamp": "00:11:13", "end_timestamp": "00:11:58", "start_second": 673, "end_second": 718, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=673s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "you can see it here as well now usually we have one policy and one reward right and so what we do is we improve the policy and that leads us to better evaluate the q function for a given reward function and that leads us to improve the policy now this paper is going to transform this into the following we have many policies so we have policy one policy two and so on until policy i don't know p and we also have many reward functions reward 1 reward 2 reward 3 and so on until reward let's call that r so we have", "start_timestamp": "00:11:58", "end_timestamp": "00:12:45", "start_second": 718, 
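The two interlocking steps just described, policy evaluation via the Bellman equation Q(s,a) = r(s,a) + gamma * Q(s', pi(s')) and policy improvement by acting greedily with respect to Q, can be sketched on a toy two-state MDP (the MDP itself is made up for illustration, not taken from the paper):

```python
# Tabular policy iteration on a tiny deterministic MDP:
#   policy evaluation  = iterate the Bellman equation to a fixpoint,
#   policy improvement = act greedily with respect to the resulting Q.
gamma = 0.9
# states: 0, 1; actions: "stay", "go"; (state, action) -> (reward, next state)
step = {
    (0, "stay"): (0.0, 0), (0, "go"): (1.0, 1),
    (1, "stay"): (2.0, 1), (1, "go"): (0.0, 0),
}
actions = ["stay", "go"]

pi = {0: "stay", 1: "go"}            # start from a deliberately bad policy
for _ in range(10):
    # policy evaluation: Q(s,a) = r + gamma * Q(s', pi(s')), iterated
    Q = {sa: 0.0 for sa in step}
    for _ in range(200):
        Q = {(s, a): r + gamma * Q[(s2, pi[s2])]
             for (s, a), (r, s2) in step.items()}
    # policy improvement: greedy action in each state
    pi = {s: max(actions, key=lambda a: Q[(s, a)]) for s in (0, 1)}

print(pi)  # improved policy: move to state 1, then keep collecting the +2
```

The loop converges after one improvement step here: going to state 1 and staying there yields Q(1, stay) = 2 / (1 - gamma) = 20.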
"end_second": 765, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=718s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "many different tasks right here and we have many policies now in essence they don't need to have some anything to do with each other for the theory of this paper but i can simplify this a bit of how they see the world so let's say you have an agent and the agent has been trained on simply that first task right here and has been trained using classic q learning reinforcement learning what not and that results in this particular policy and then the agent just from scratch you restarted again you run reinforcement learning just", "start_timestamp": "00:12:45", "end_timestamp": "00:13:28", "start_second": 765, "end_second": 808, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=765s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "on reward number two and obtained policy number two and so on so you do this for all these rewards individually okay so you give the agent a new task and you ask it to learn a policy for that task now you're in a situation where if you are have a new task so are new the question is do you again need to train a new policy and the answer for this paper is no because we have all these policies we don't need to train a new we can simply mix and match these policies that we already know to obtain a good solution for the new", "start_timestamp": "00:13:28", "end_timestamp": "00:14:10", "start_second": 808, "end_second": 850, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=808s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": 
"https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "task so how does the paper do it it does it yeah it does it in the following it defines the successor features okay maybe it since maybe it's better if we first go to an example so the example they give here is the following otherwise this i guess this might sound just a bit too abstract okay so you have this world here the agent is the thing here in yellow and it can just move so its actions are move left up right down this this is one step okay in the environment there are two different objects one object is a triangle and one", "start_timestamp": "00:14:10", "end_timestamp": "00:14:56", "start_second": 850, "end_second": 896, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=850s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "object is a square okay so um there are a number of tasks we can define right now in this thing so we define tasks according to a reward function so the reward let's say the reward one is going to be um one if if it picks up a square sorry the square and zero else just if it picks up a square on any given step we give it a reward of one it we don't care about the blue triangles okay and then reward two is going to be the opposite it's going to be one not the opposite but one if it picks up a triangle and zero else so you can see the um", "start_timestamp": "00:14:56", "end_timestamp": "00:15:50", "start_second": 896, "end_second": 950, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=896s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "good policies right here so pi one is a is a good policy for reward one because it just goes and and collects 
these red things doesn't care about the blue things just goes and collects them pi two it goes and collects the blue things doesn't care about the red things okay so let's imagine that you have run reinforcement learning twice once for reward one and once for reward two and now you have two policies okay so you have two policies this will lead to pi one this will lead to pi two and now i give you the third task now the third task is a", "start_timestamp": "00:15:50", "end_timestamp": "00:16:31", "start_second": 950, "end_second": 991, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=950s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "bit special it's one if you pick up a square and it's um it's zero else except it's negative one if you pick up a blue thing well the order of these is kind of wrong but it just for visual representation okay so now you're asked to um pick up the red things but avoid the blue things okay pick up as many red things as you can avoid the blue things and again as we said the question is do you now have to run reinforcement learning again in this agent with your simulator using like q learning or something like this from", "start_timestamp": "00:16:31", "end_timestamp": "00:17:19", "start_second": 991, "end_second": 1039, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=991s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "the start or can you come up with a solution just given these two policies that will perform well on the on this new task okay and we're going to see how they do it so what they do is they use successor features so these successor features um i've done a video about successor features and i'll link to that you can 
look at that but essentially essentially the successor features are defined like this and for that we need to know what this thing is right here they simply call this a feature function okay it's very it's very um", "start_timestamp": "00:17:19", "end_timestamp": "00:18:08", "start_second": 1039, "end_second": 1088, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=1039s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "ambiguous term a feature function is a function that takes in a transition so state action next state and maps it to a high dimensional vector note this is almost the same as a reward function except the reward function simply maps it to a number now this is mapped to a higher dimensional thing again i wanna i kind of wanna leave out the next state right here just to make things easier on you so a feature here can be many many things but the structure of the features is going to be such that the reward function is going to be", "start_timestamp": "00:18:08", "end_timestamp": "00:18:58", "start_second": 1088, "end_second": 1138, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=1088s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "this feature times this w vector so it was a bit a bit not correct before when i said the reward is now a vector the reward of a particular task w can be seen as the inner product between the features and the task vector so w specifies the task and the features well they specify the features in our case it can be it can be fairly simple namely yes i was i was definitely wrong at the beginning so the feature functions right here is which object do you pick up okay so we define the feature function as 1 0 if you pick up a 
square and we define", "start_timestamp": "00:18:58", "end_timestamp": "00:19:49", "start_second": 1138, "end_second": 1189, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=1138s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "the feature function as 0 1 if you pick up a triangle and now you can and we define it as we define it as 0 0 if you pick up nothing and now you can fairly easily see that the reward of each task can be simply calculated by mixing the features accordingly okay so reward one is going to be simply the feature a 1 0 which is the w vector so i can specify a task by giving the appropriate w vector and now you can see that if this is my reward function my agent can go out into the world if it collects a square it is going to", "start_timestamp": "00:19:49", "end_timestamp": "00:20:36", "start_second": 1189, "end_second": 1236, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=1189s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "be rewarded right here if it collects a triangle even though the features indicate that it collected a triangle it doesn't care about it because the w is 0 right here if i now want to give it the new tag the same is true for r2 if and i want to give it the new task r3 right and you remember the reward function right there i can achieve that reward function by simply multiplying the same features the exact same feature functions by this vector right here okay remember there is a slight difference between the reward function and the feature function", "start_timestamp": "00:20:36", "end_timestamp": "00:21:17", "start_second": 1236, "end_second": 1277, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=1236s", "title": 
"Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "in this particular example the idea of the paper is that the feature function can be rich in in expressivity and you know tell you all sorts of things about your current state and the reward function is just a number right and then the the reward is specified by simply linearly mixing these features so the structure imposed by the paper here is that there are such a thing as a feature and any task can be described by mixing these same features okay that's that's the issue right here so the features are going to be constant", "start_timestamp": "00:21:17", "end_timestamp": "00:22:00", "start_second": 1277, "end_second": 1320, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=1277s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "across tasks whereas the w defines the task all right so the the goal here is that if you have learned many many things um during your tasks what you want to do is you want to learn this feature representation that is the same across all tasks and then you want to simply have the w specify how to mix these features to get the reward now of course this is a very strict very very definition not not a lot of things will fall into this unless you make the features like exponentially big of course um however they do", "start_timestamp": "00:22:00", "end_timestamp": "00:22:49", "start_second": 1320, "end_second": 1369, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=1320s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "discuss whenever a task doesn't fall into 
that so i hope you're with me so far this is the first kind of restriction we impose on our worlds that we can tackle with this framework namely that all of our worlds have all of our tasks in this world have to be a linear mix of the same features if that's given then our um then we can derive policies for tasks that we have never seen we can derive good policies by doing zero learning simply by specifying the task we can have a good policy for that task from the policies we've already learned", "start_timestamp": "00:22:49", "end_timestamp": "00:23:31", "start_second": 1369, "end_second": 1411, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=1369s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "for the other tasks okay so the reward three is now simply this and yeah notice it's not the same as the reward function because the reward function had one if you pick up the square negative one if you pick up the triangle and zero else so the zero we don't have to specify here because it's not part of our features right so you can see that the reward function is given simply by that and we can now as i said derive a good policy for this reward by looking at the other policies even though none of these policies has ever learned to avoid", "start_timestamp": "00:23:31", "end_timestamp": "00:24:13", "start_second": 1411, "end_second": 1453, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=1411s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "anything so it makes it defines these successor features right here so the successor features is much like the q function you can see the signature is almost the same so as a q function tells you um how much reward you're going to 
get if you do the action a and then follow policy pi the successor features almost the same thing however it doesn't tell you what rewards you're going to get it tells you which features you're going to get and which features by that we mean the sum of future features now you can see", "start_timestamp": "00:24:13", "end_timestamp": "00:24:56", "start_second": 1453, "end_second": 1496, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=1453s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "this some this a little bit this uh it of course it comes from the fact of the linearity up here so it's not really an additional restriction but simply to clarify what this means for your environment your environment has to be able to be looked at in terms of these features and these features they need to be cumulative again that comes from the fact that it's linear but to see so a feature like i want an an even number of steps or something like this would be terrible uh because and they're going into things", "start_timestamp": "00:24:56", "end_timestamp": "00:25:36", "start_second": 1496, "end_second": 1536, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=1496s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "like this later but it would be terrible because here we have the sum and um as soon as you if you have a feature that is very high if you have an even number of steps then um or if you have a feature that counts the steps you will never be able to to do well because if you have a feature that counts the steps it simply counts up and up and up and up depending on how many steps you do and your reward can never be specified in terms of a mix of these features and 
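The successor features described here, which behave like a Q function but accumulate feature vectors instead of scalar rewards, can be sketched as a tiny Monte-Carlo estimate over one trajectory; the trajectory, feature layout, and discount below are made up purely for illustration, this is not the paper's code:

```python
import numpy as np

def successor_features(phi_trajectory, gamma=0.95):
    """Discounted sum of future feature vectors along one trajectory:
    psi = sum_t gamma^t * phi(s_t, a_t)."""
    psi = np.zeros_like(np.asarray(phi_trajectory[0], dtype=float))
    for t, phi_t in enumerate(phi_trajectory):
        psi += (gamma ** t) * np.asarray(phi_t, dtype=float)
    return psi

# Hypothetical features per step: [picked up a square, picked up a triangle]
traj = [[0, 1], [0, 0], [1, 0], [0, 1]]
psi = successor_features(traj, gamma=1.0)  # undiscounted: just counts pickups
# psi == [1., 2.]: one square and two triangles collected in the future
```

With a discount below 1 the same code gives the usual geometrically weighted sum rather than a raw count.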
therefore your successor features are going to be useless", "start_timestamp": "00:25:36", "end_timestamp": "00:26:14", "start_second": 1536, "end_second": 1574, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=1536s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "but in our case to rephrase our feature one is whether or not you pick up a square in a particular step therefore if we sum it up our successor feature one is going to be the number (this is a pound sign) of squares that you pick up okay similarly our feature two is whether or not you pick up a triangle in a particular step so our successor feature number two is going to be the number of triangles that you pick up over time you can see that the successor features", "start_timestamp": "00:26:14", "end_timestamp": "00:27:02", "start_second": 1574, "end_second": 1622, "url": 
"https://www.youtube.com/watch?v=9-o2aAoN0rY&t=1622s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "just going to frame everything as being given right so we're given this w we're defining everything from our god-like perspective for now so don't think all of this is learned by now um yeah all right so how can you now derive this magical new policy okay so let's say we have policy one and we have policy two and you have these features that are constant over both tasks in fact it's given right this phi function we impose that feature one is", "start_timestamp": "00:27:43", "end_timestamp": "00:28:29", "start_second": 1663, "end_second": 1709, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=1663s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "whether you pick up a red square feature two is whether you pick up a blue square then we know that the reward functions can be achieved by doing the w so this here your w is going to be one zero and your w here is going to be zero one and now we want a good policy for task three and we know we can achieve this by the one negative one w how can we derive a good policy and this is this algorithm this generalized policy evaluation and generalized policy improvement so it assumes that as we said you have many many different", "start_timestamp": "00:28:29", "end_timestamp": "00:29:13", "start_second": 1709, "end_second": 1753, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=1709s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} 
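The mechanics just set up, Q functions obtained by mixing successor features with a task vector w, then taking the best action over all old policies, can be sketched like this; all numbers are made up and only illustrate generalized policy evaluation and improvement at a single state:

```python
import numpy as np

def gpi_action(psi, w):
    """Generalized policy evaluation + improvement at one state (a sketch).
    psi: [n_policies, n_actions, n_features] successor features at state s,
         one slice per previously learned policy (assumed already known).
    w:   task vector of the new task, so Q_i(s, a) = psi[i, a] . w.
    Returns the action maximizing max_i Q_i(s, a)."""
    q = psi @ w                  # GPE: Q per (policy, action)
    q_best = q.max(axis=0)       # GPI: best old policy, per action
    return int(np.argmax(q_best)), q_best

# Made-up numbers: policy 1 mostly collects squares, policy 2 triangles.
psi = np.array([
    [[2.0, 0.5], [1.5, 1.0]],    # successor features under policy 1
    [[0.5, 2.0], [0.2, 2.5]],    # successor features under policy 2
])
w_new = np.array([1.0, -1.0])    # task three: collect squares, avoid triangles
action, q_best = gpi_action(psi, w_new)  # action 0 wins with Q = 1.5
```

No learning happens here: the new task only enters through the dot product with w, exactly the linearity the framework relies on.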
{"video_id": "9-o2aAoN0rY", "text": "many different policy so here you can see policy one where's policy two here's policy two and so on it assumes that you have many different features and therefore many different successor features in fact you have a vector of them right so here you can see feature one feature two and so on and it also assumes that you're in a current state and you have many actions at your disposal right now action one action two and so on okay so this is all the past you've already defined your features you have learned these policies", "start_timestamp": "00:29:13", "end_timestamp": "00:29:50", "start_second": 1753, "end_second": 1790, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=1753s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "and now you're given a new w w new in our case it's this one negative one and we want the best action so we are in state s we are given this w we want the best action now here is a method where we can simply calculate the best action in terms by by not reinforcement learning at all in this new task so by structuring things like this here so what does it really say here it this thing says we are going to evaluate all of these different cells of this tensor right here so we're going to determine what is the successor feature", "start_timestamp": "00:29:50", "end_timestamp": "00:30:37", "start_second": 1790, "end_second": 1837, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=1790s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "number two for policy pi one um in state s if i right now do a2 this is very abstract so let's say you're here and action action two is actually going to the right okay so you're here 
this is action one this is action two so action two is you go to the right okay you can see that this will let you pick up a triangle now here that's action three and so on okay so what's this number going to be so we are in state s as we said and we do action", "start_timestamp": "00:30:37", "end_timestamp": "00:31:30", "start_second": 1837, "end_second": 1890, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=1837s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "two so action two is going to pick up a triangle picking up a triangle means that our phi for this step is going to be zero one okay so our successor features this is not the features itself this is the successor features the successor features decompose into the next step plus all the next steps that we can follow okay so all the steps that will come so what are these features going to be it's going to be the sum over that plus everything that follows and i can take a little bit of a guess
that every now and then it will kind of step over a triangle but it won't fall we won't you know explicitly go look for them so let's say the episode", "start_timestamp": "00:32:18", "end_timestamp": "00:33:01", "start_second": 1938, "end_second": 1981, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=1938s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "was 10 more steps but the board has like 100 squares so and it has like three triangles on it so let's say that's like three-tenths um in expectation okay so this is going to be this is going to be the number that we're looking for we're doing this for every single one of these cells okay this this thing is going to do for every single one of these cells and this is very similar to evaluating q functions except we're evaluating an entire vector right here that's the difference to simply learning many q functions so if you were to evaluate", "start_timestamp": "00:33:01", "end_timestamp": "00:33:44", "start_second": 1981, "end_second": 2024, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=1981s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "only a q function then you would only have this first matrix this first block right here okay but you have feature one feature two and so on so you calculate everything in terms of these features and then by linearity you can mix it with that vector so in our case this is going to be the one negative one which will give you the q functions right from what we've seen before you obtain a q function by simply mixing your successor features with your um with this task vector and if you have a q function you can pretty easily determine uh which", "start_timestamp": "00:33:44", 
"end_timestamp": "00:34:24", "start_second": 2024, "end_second": 2064, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=2024s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "action you should take now you have here a q function with respect to every single policy but you can simply take the max right so the max across all of this will determine um will determine so you could take the max across all the policies which will give you the q function for a particular action over all policies that you consider and then you can simply take the arg max of that and determine the action you should take okay so it's a pretty big evaluation but if you do this that means you don't have to do reinforcement learning on", "start_timestamp": "00:34:24", "end_timestamp": "00:35:05", "start_second": 2064, "end_second": 2105, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=2064s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "this task it simply determines which action right now is the best given everything that i know from these old policies about the task and that's not going to be like the optimal policy uh per se but it's going to be one policy that's pretty pretty good and you can actually prove some things across that so they do this right here and you can see that here is what q learning does on this new task of picking up the squares and avoiding the triangles q learning takes a while to get there however if you do what they are suggesting", "start_timestamp": "00:35:05", "end_timestamp": "00:35:53", "start_second": 2105, "end_second": 2153, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=2105s", "title": "Fast reinforcement learning with generalized policy updates 
(Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "and you know you give the w you can supply the w almost from the beginning you see right here almost from the beginning it is at a high reward now q learning surpasses it eventually but um it's pretty impressive that without doing any learning you are immediately good right now the caveat here of course is that they already need these policy pi 1 and pi 2 given to the algorithm and that comes from previous reinforcement learning trials and they say that they give these trials as many steps as q learning uses so they give them this", "start_timestamp": "00:35:53", "end_timestamp": "00:36:35", "start_second": 2153, "end_second": 2195, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=2153s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "these amounts of steps on these other tasks so the comparison here is a bit shaky if you ask me but the point made is that if you have a new task right now you can obtain very good solutions uh and you don't have to do anything okay and these solutions can be the basis for new reinforcement learning right you could start q learning off right here and then get here much faster potentially and so on so the next objective right here is that now we have defined the tasks and we had we know what these features are and we know how", "start_timestamp": "00:36:35", "end_timestamp": "00:37:12", "start_second": 2195, "end_second": 2232, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=2195s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "to mix these features as imposers of the task so what happens if we only have the 
reward function we specify the task only in terms of the reward functions but we're kind of looking at the features and we're like agent please figure out yourself how to apply these features in order to make the reward high and that's what this thing is right here this gpe and gpi with regressed w so you no longer tell it what the w is um it needs to infer it through reinforcement learning right and it's not really reinforcement", "start_timestamp": "00:37:12", "end_timestamp": "00:37:51", "start_second": 2232, "end_second": 2271, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=2232s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "learning but simply because all of this is linear and this thing here is given so always remember this thing here is given and these are the rewards that you obtain you can simply do a regression to figure out the w of the task now that's going to take some time but as you can see right here it is going to take um a lot less time than doing q learning from scratch notably because you have good features so this gets closer and closer to transfer", "start_timestamp": "00:37:51", "end_timestamp": "00:38:30", "start_second": 2271, "end_second": 2310, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=2271s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "learning right if you imagine that this right here is your pre-trained neural network and you simply learn the last layer of it you freeze this you do transfer learning fine tune the last layer here we are so um it gets closer and closer and you'll see this trend right here so it's pretty cool what you can do 
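Since the reward is assumed linear in the features, r = phi(s, a) . w, the regression of w mentioned here is just a least-squares fit on observed (feature, reward) pairs; the data below is synthetic and noiseless, purely for illustration:

```python
import numpy as np

# Hypothetical task: +1 per square, -1 per triangle (w_true unknown to the agent).
w_true = np.array([1.0, -1.0])

# Observed per-step features phi(s, a): [picked up square, picked up triangle]
Phi = np.array([[1, 0], [0, 1], [1, 1], [0, 0]] * 50, dtype=float)
r = Phi @ w_true                 # rewards the agent actually observes

# Least-squares regression recovers the task vector from (Phi, r) pairs
w_hat, *_ = np.linalg.lstsq(Phi, r, rcond=None)
# w_hat ~= [1., -1.]
```

With noisy rewards the same call returns the best linear fit, which is exactly the "regress w" variant the video contrasts with being handed the true w.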
but basically i think it's a lot of math around a framework and the more and more you relax the kind of impositions uh that they need for their framework the more it gets back to simply well we do reinforcement learning at", "start_timestamp": "00:38:30", "end_timestamp": "00:39:16", "start_second": 2310, "end_second": 2356, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=2310s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "least in my um estimation so before we look at that this here is a pretty cool experiment where they look at how the different tasks can be achieved if you give different policies so you'll have noticed that we have always given these two tasks one zero and zero one these were our tasks that we trained on and then one negative one is the task we evaluated on okay and you might object and say wait a minute these two tasks you know they're pretty good as let's say pre-training tasks", "start_timestamp": "00:39:16", "end_timestamp": "00:39:59", "start_second": 2356, "end_second": 2399, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=2356s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "because it's basically the standard basis right and any other task can be mixed from those so these are orthogonal vectors in this vector space so you're being pretty generous to this system what happens if we're not as generous so that's what they do here so they have different um policies and they evaluate how much you can learn with these different policies so the way you have to read this diagram is right here is going to be the one zero axis as they label it right here and this is going to be the zero one axis 
and", "start_timestamp": "00:39:59", "end_timestamp": "00:40:40", "start_second": 2399, "end_second": 2440, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=2399s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "this is evaluation so every direction on this circle defines a task for example this task right here as you can see is going to define the task of picking up both the squares and the triangles right whatever you pick up you get a reward however the task down here is going to be please pick up the squares but avoid the triangles at all cost okay and now they're going to look what happens if we supply different policies to choose from remember we're in this situation we're again in this situation where we give", "start_timestamp": "00:40:40", "end_timestamp": "00:41:17", "start_second": 2440, "end_second": 2477, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=2440s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "everything and we give initial policies we give the task vector and now it's about deriving a good policy just from looking at the old policy so no learning as a baseline you have q learning which into a given direction um tells you basically how how long q learning or takes or how far q learning gets with a given amount of steps indicated by this one two three four and so on um yeah you see i think this is this is this in how far q learning gets with these amounts of steps is the dotted lines right here so q learning", "start_timestamp": "00:41:17", "end_timestamp": "00:42:00", "start_second": 2477, "end_second": 2520, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=2477s", "title": "Fast reinforcement learning with generalized policy 
updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "gets this far with 10 to the i don't know 4 and then this far 10 to the 5 and so on so these are comparisons you can see that on the outside q learning is going to beat this these methods but our hope is going to be that of course if we have this zero shot generalization it's much better than running q learning for really long if we get close to it so the green thing is what we've already seen policies one and two will give you a fairly you know good um fairly good extent right here so what does it mean it means it can solve", "start_timestamp": "00:42:00", "end_timestamp": "00:42:43", "start_second": 2520, "end_second": 2563, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=2520s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "it can solve pretty much everything from here here this task this this task this task it kind of falls off once we go down here so once we go to the avoid section it sort of falls off because it has never learned to avoid now still we can of course do the avoidance by simply imposing a negative collection but negative collecting and avoiding aren't exactly the same thing in these um in these environments right because avoiding can also be going really close to something but not hitting it while collecting it's not the inverse of", "start_timestamp": "00:42:43", "end_timestamp": "00:43:21", "start_second": 2563, "end_second": 2601, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=2563s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "collecting the inverse of collecting would be like run away as far as as 
far as possible so we can expect that we've only ever learned to collect we're not going to be super good at avoiding um then the other extreme is when we give policies three and four and i haven't told you but you can see it right here uh policy three is explicitly to collect one and avoid the other while policy four is the opposite right here avoid the squares collect the triangles and now this policy this policy is should be pretty good on", "start_timestamp": "00:43:21", "end_timestamp": "00:44:06", "start_second": 2601, "end_second": 2646, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=2601s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "all of the tasks in between as you can see it has the biggest extent right here and that also makes sense by the way there's nothing down here because the task of avoiding both things doesn't really make sense because you can just stay where you are because there are also these these squares where there's nothing but you can see that the mixture of those is quite potent so already we can see even though these span a bases in fact an orthogonal basis as much as these because of the nature of the features that we define for the", "start_timestamp": "00:44:06", "end_timestamp": "00:44:46", "start_second": 2646, "end_second": 2686, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=2646s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "task they are not equivalent in mixing after so we can be more generous we can also be less generous if we only provide policy five and policy five is simply to pick up to pick up both objects then we're going to have a pretty hard time when it comes to avoiding things so you can see it can do 
fairly well picking up the various things in a positive manner but as soon as we cross this line into the like this horizontal line into where it's about avoiding a particular object um it's not it's not the the choices of actions we have from", "start_timestamp": "00:44:46", "end_timestamp": "00:45:25", "start_second": 2686, "end_second": 2725, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=2686s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "policy five aren't going to be super good at that and um they do another they do another thing right here so that the left thing is where they say it's important which policies we provide and the right thing they want to say something like it's important um so they want to say if we provide more policies that can be advantageous because we basically have more options to choose from okay so now they start off with policy four and policy four is simply avoid these squares collect the triangle you can see it performs", "start_timestamp": "00:45:25", "end_timestamp": "00:46:11", "start_second": 2725, "end_second": 2771, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=2725s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "fairly well over here where it's all about avoiding the uh squares and collecting the triangles as soon as you get into you know collecting or even here the opposite directions it's pretty bad right that's the red thing and now they add policy two to policy four so policy two is going to be also to collect um the the triangles but to just neglect the squares and that will also do a bit better why does it do better because it's better at collecting uh because this policy here also needs to avoid um and this 
policy here doesn't care so", "start_timestamp": "00:46:11", "end_timestamp": "00:46:53", "start_second": 2771, "end_second": 2813, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=2771s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "in the regimes where it's better to not care than to avoid adding this policy adding these options is going to be good and you can see that there's a general expansion here as we add more policies however i want to point out that for example here this black thing which should be technically superior to the blue thing because it contains as you can see here all the policies that the blue thing contains plus another policy um i don't i don't know if my vision but i'm pretty sure here the black thing is inside the blue", "start_timestamp": "00:46:53", "end_timestamp": "00:47:33", "start_second": 2813, "end_second": 2853, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=2813s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "thing uh so that means there can also be a disadvantage to adding more policies right here because maybe you got you have too much to choose from and so right here what we say is we add a policy that is all about collecting the squares and it is performing it is actually decreasing the perform the addition of this is decreasing the performance on tasks where you have to avoid the squares which i'm not sure if if that makes sense again the opposite of collecting isn't avoiding but i'm just pointing this out and this", "start_timestamp": "00:47:33", "end_timestamp": "00:48:17", "start_second": 2853, "end_second": 2897, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=2853s", "title": "Fast reinforcement learning 
with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "isn't really mentioned in the paper the paper simply says see we add policies and therefore we are getting better i don't agree with this given these results or maybe the plotting is bad all right so they say okay more policies better which i disagree with they also say well as much as we can regress the w right we regress w we figure out the task we can even learn not the successor features themselves but the phi functions that lead to the successor features and you
this error right here okay so you're finding the function and the w that matches this error and this now really is like learning a neural network i mean so i get it you have the i here and the w doesn't depend on the i and so on um but you're getting more and more back to actually simply learning non-linear functions mixing them linearly right here and i think that's going to be kind of the crux of this method uh the fact that the more complicated", "start_timestamp": "00:49:43", "end_timestamp": "00:50:26", "start_second": 2983, "end_second": 3026, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=2983s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "your problems are the less you are going to be able to do this kind of stuff and they even go as far as to say well what if like before the reward is actually something like whether or not you have collected an even number of triangles or squares then they say well you can simply not have a single w but you can find a function w and now the policy is a function of the function of w and you can do potentially the same regression problem but as you can see now this right here is going to be a", "start_timestamp": "00:50:26", "end_timestamp": "00:51:07", "start_second": 3026, "end_second": 3067, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=3026s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "function of state and so you can see that more and more it simply goes back to basically q learning again the only difference here is that you have these intermediate features but i think you can simply view this let's say as a hidden layer in a neural network i get it some are 
held constant across sums and so on but i like the method in terms of the analysis so if you are given all this stuff it seems pretty cool that you can derive new policies uh its implication for lifelong", "start_timestamp": "00:51:07", "end_timestamp": "00:51:57", "start_second": 3067, "end_second": 3117, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=3067s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "learning they say look here um you have a bunch of tasks in your database that you've already learned on your agent is going out into the world it faces a new task it can use this thing to obtain a new good policy for that task it can then use reinforcement learning rl to refine that policy and then it can simply save that policy into the database so it keeps expanding and expanding this thing so it keeps adding rows and rows and rows right here of new policies that it's learned over the course of its life so", "start_timestamp": "00:51:57", "end_timestamp": "00:52:41", "start_second": 3117, "end_second": 3161, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=3117s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "once it's facing a new task it can just kind of draw from its experience and derive a good initial solution however uh the actual analysis only works i feel in quite limited circumstances and if you want to relax these limited circumstances then you need to basically regress and regress and regress away from their setup and i'm not sure where this is going to go if this is going to be a general framework for people it seems like it because it's pretty easy but then also 
it seems like most of the world doesn't really fall", "start_timestamp": "00:52:41", "end_timestamp": "00:53:23", "start_second": 3161, "end_second": 3203, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=3161s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "into this category in fact this divide and conquer approach um i'm not sure but from divide and conquer i almost imagine something like you subdivide and subdivide and subdivide until you know you are at some kind of basic task they still only go for you know single tasks like this here the tasks are somehow in sequence and i think we should really think about hierarchical rl now this can be a good first step right here but most hierarchical rl even the ones that specify themselves as fully hierarchical like we can do", "start_timestamp": "00:53:23", "end_timestamp": "00:54:03", "start_second": 3203, "end_second": 3243, "url": "https://www.youtube.com/watch?v=9-o2aAoN0rY&t=3203s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "9-o2aAoN0rY", "text": "many layers they rarely go above two layers or three like one meta layer and one actual layer like this one right here uh they rarely go further maybe they go two layers but that's about it um i've seen very little in actual hierarchical or divide and conquer reinforcement learning just because it's so hard to train yeah all in all cool paper and if you want to get into the math a little bit i think it's pretty easy math uh once you kind of set your goals on what it's actually meant to achieve um if you just read", "start_timestamp": "00:54:03", "end_timestamp": "00:54:42", "start_second": 3243, "end_second": 3282, "url": 
"https://www.youtube.com/watch?v=9-o2aAoN0rY&t=3243s", "title": "Fast reinforcement learning with generalized policy updates (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/9-o2aAoN0rY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": "hi there today we're going to look at End-to-End Object Detection with Transformers by Nicolas Carion, Francisco Massa and others at Facebook AI Research so on a high level this paper does object detection in images using first a CNN and then a transformer to detect objects and it does so via a bipartite matching training objective and this leaves you basically with an architecture that is super simple compared to the previous architectures that had all kinds of engineering hurdles and thresholds and hyper", "start_timestamp": "00:00:00", "end_timestamp": "00:00:38", "start_second": 0, "end_second": 38, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=0s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": "parameters so really excited for this as always if you like content like this consider leaving a like comment or subscribe let's get into it so let's say you have a picture like this here and you're supposed to detect all the objects in it and also where they are and what they are this task is called object detection so a good classifier here would say there's a bird right here and so this is a bird and then this here is also a bird right they can be overlapping these bounding boxes so this is you see the first problem that bird", "start_timestamp": "00:00:38", "end_timestamp": "00:01:20", "start_second": 38, "end_second": 80, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=38s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": 
"why is that green nevermind okay and those are the only two objects so there's a number of very difficult things here first of all you need to sort of detect the objects you need to know how many there are it's not always the same in each image there can be multiple objects of the same class there can be multiple objects of different classes they can be anywhere of any size they can be overlapping in the background small or across the entire image they can include each other partially so the problem is a", "start_timestamp": "00:01:20", "end_timestamp": "00:01:51", "start_second": 80, "end_second": 111, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=80s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": "very very difficult problem and previous work has done a lot of engineering on this like building detectors and then kind of you want to classify every single pixel here and then you get like two detections right here that are very close for the same class that must maybe be the same instance right so there's only one thing here and not two things and so on so there used to be very complicated architectures that solve these problems and this paper here comes up with a super simple architecture and we'll kind of go from", "start_timestamp": "00:01:51", "end_timestamp": "00:02:25", "start_second": 111, "end_second": 145, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=111s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": "the high level to the low level to the implementation of each of the parts so what does this paper propose how do we solve a task like this first of all we put the image here without the labels of course we put it through 
a convolutional neural network encoder since this is an image task it's you know kind of understandable that we do this mostly because CNNs just work so well for images so this gives us this set of image features and I think this vector here is not really representative of what's happening so", "start_timestamp": "00:02:25", "end_timestamp": "00:03:02", "start_second": 145, "end_second": 182, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=145s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": "let's actually take this picture right here and throw it in kind of an angled way and what we'll do with the CNN is we'll simply sort of scale it down so here it's three channels right it's red green and blue like this three channels but we'll scale it down but we make it more channels so yeah so more channels okay but it's still sort of an image right here it still has the image form okay so the CNN basically gives us this thing which is sort of a higher level representation of the image with", "start_timestamp": "00:03:02", "end_timestamp": "00:03:46", "start_second": 182, "end_second": 226, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=182s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": "many more feature channels but still kind of information where in the image those features are and this is going to be important in a second because now this thing which is this set of image features goes into a transformer encoder decoder and this is sort of the magic thing here as a component we'll look into that in a second but what we'll take out right here is this set of box predictions so each of these boxes here is going to be 
consisting of a tuple and the tuple is going to be the class and the bounding", "start_timestamp": "00:03:46", "end_timestamp": "00:04:26", "start_second": 226, "end_second": 266, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=226s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": "box okay so an example for this could be bird at x equals two y equals five okay that's an example another example of this could also be there is nothing at x equals seven y equals nine okay so nothing the nothing class is a valid class right here and that's also important but safe to say there is this set of box predictions and then that is basically your output right these things are your output if you have those things you can draw these bounding boxes you can assign the labels the question is how do you train it now what you're", "start_timestamp": "00:04:26", "end_timestamp": "00:05:11", "start_second": 266, "end_second": 311, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=266s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": "given is a database of images and these images as you see here on the right these images already have by human annotators drawn these bounding boxes in and also labels so this here would be annotated with bird and this here would be annotated with bird but it doesn't have any of these like it doesn't annotate the nothing classes and so on so the question is how do you compare the two can you simply say okay if the first one here is this bird and the second one is this bird then it's good but then you know that the ordering", "start_timestamp": "00:05:11", "end_timestamp": "00:05:52", "start_second": 311, "end_second": 352, "url": 
"https://www.youtube.com/watch?v=T35ba_VXkMY&t=311s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": "shouldn't matter you simply care whether you have the correct bounding boxes you don't care whether you have put them in the correct order and also what if your classifier does something like this it outputs those two boxes we see here but it also outputs this here and says bird or like one that is slightly off and says bird and so on so how do you deal with all of these cases so the way that this paper deals with all of these cases is with their bipartite matching loss this thing right here so how does it work let's say your", "start_timestamp": "00:05:52", "end_timestamp": "00:06:29", "start_second": 352, "end_second": 389, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=352s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": "classifier so here is an image I'll have to wait for this to catch up here is an image and we put it through this entire pipeline and we get a set of predictions right and they're going to be class bounding box class bounding box class bounding box now the first thing you need to know is that there are always the same amount of predictions right this size here is fixed that's large n okay that's kind of a maximum of predictions since you can always", "start_timestamp": "00:06:29", "end_timestamp": "00:07:09", "start_second": 389, "end_second": 429, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=389s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": 
"predict either a class or the nothing class in this case you could predict anywhere from zero to five objects in the scene right okay and then the second thing is from your database you get out an image with its bounding box annotations right that are made by human labelers let's say these two and you also do class bounding box class bounding box but now you see we only have two instances so here we just pad with the nothing class so I don't know what the bounding box should be for the nothing class it doesn't really", "start_timestamp": "00:07:09", "end_timestamp": "00:07:50", "start_second": 429, "end_second": 470, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=429s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": "matter nothing no bounding box nothing no bounding box no bounding box so your ground truth labels if you will are also of size n so you always compare n things here on the left that your classifier output with n things on the right now as we already said the question is how do you deal with this you can't simply compare one by one because the ordering should not be important but also you don't want to encourage your classifier to always kind of if the one bird is very prominent right you don't want to", "start_timestamp": "00:07:50", "end_timestamp": "00:08:34", "start_second": 470, "end_second": 514, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=470s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": "encourage your classifier to say do you say that here's a bird here's a bird there's a bird right here hey hey there's a bird there's a bird there's a bird and basically just because the signal for that bird is stronger and basically 
ignore the other bird so what you want to do is encourage your classifier so that if it has already detected an object it shouldn't detect it again in a slightly different place so the way you do this is with this bipartite matching loss so at the time when you compute the loss you go here", "start_timestamp": "00:08:34", "end_timestamp": "00:09:07", "start_second": 514, "end_second": 547, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=514s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": "and you compute what's called a maximum matching now what you have to provide is a loss function so there's a loss function L and L will take two of these things L will take the predicted thing of your model and L will take one of the true underlying things and L will compute a number that will say how well these two agree so you can say for example if either of them is the nothing class then I have no loss like I don't care about them that gives you no loss but if the two classes agree and the two", "start_timestamp": "00:09:07", "end_timestamp": "00:09:54", "start_second": 547, "end_second": 594, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=547s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": "bounding boxes agree then it's very good right and we maybe even give some negative loss or give loss zero but if the bounding boxes agree but the classes don't agree then you say that's bad or the other way around if the classes agree but the bounding boxes don't or even if everything disagrees it's the worst what you're basically saying is if these two would correspond to each other right if the thing on the left were the 
prediction for the thing on the right which we don't know right it could be that the thing on the right refers to the bird on", "start_timestamp": "00:09:54", "end_timestamp": "00:10:30", "start_second": 594, "end_second": 630, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=594s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": "the right and the thing on the left refers to the bird on the left so it would be natural that the bounding boxes aren't the same but you say if these were corresponding to each other what would the loss be how well would they do and now if you compute this bipartite matching what you want I guess it's a minimum matching in this case what you want is to find an assignment of things on the left to things on the right a one to one assignment this is an example of a one to one assignment everything on the left", "start_timestamp": "00:10:30", "end_timestamp": "00:11:05", "start_second": 630, "end_second": 665, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=630s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": "is assigned exactly one thing on the right such that the total loss is minimized right so you're going to say I'm going to align the things on the left with the things on the right such that it's maximally favorable right I give you the maximum benefit of the doubt by aligning these things so in the best possible case what's the loss okay I hope this is somehow clear so you're trying to find the assignment from the left to the right that basically is the best case for this output right here where", "start_timestamp": "00:11:05", "end_timestamp": "00:11:45", "start_second": 665, "end_second": 705, "url": 
"https://www.youtube.com/watch?v=T35ba_VXkMY&t=665s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": "you really say oh okay here you output a bird very close to the bird here in the ground truth label that's this here so I'm going to connect these two because that gives the model the most benefit of the doubt and the loss that you have at the end of that matching so this loss here would only then count wherever these connections are that loss is going to be your training loss okay so this solves the problems we had before it is not dependent on the order because if you", "start_timestamp": "00:11:45", "end_timestamp": "00:12:24", "start_second": 705, "end_second": 744, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=705s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": "reorder the things your minimum matching will simply swap with it and if you output the same bird multiple times only one of these is going to be assigned so if this here is that bird only this one maybe is going to be assigned to that one and the other ones can't be assigned to that one and are forced to be assigned to a different one let's say this one here and are going to incur a loss so you encourage your model to output let's say diverse bounding boxes different bounding boxes for things okay", "start_timestamp": "00:12:24", "end_timestamp": "00:13:03", "start_second": 744, "end_second": 783, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=744s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": 
"T35ba_VXkMY", "text": "so this solves these problems and it's very clever and there are algorithms to compute these minimum matchings and they use the Hungarian algorithm which will give you exactly such a matching again this is possible because you have n things on each side and the n here is in effect the maximum number of objects that you can detect at once I guess if there are fewer you can simply pad right here and then the model of course is encouraged to come up with the equal number of no class predictions because if it outputs a prediction when it", "start_timestamp": "00:13:03", "end_timestamp": "00:13:42", "start_second": 783, "end_second": 822, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=783s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": "shouldn't right if it already predicts two things and these are assigned to these two things and then it outputs one more thing it is going to be penalized because it should output three things with no class but it has output one too many with a class so it is going to be penalized okay so this is a pretty cool thing again it relies on the fact that you have n on both sides but you can make n so large that basically it covers all of the cases so you can make n like 50 so you can detect up to 50 things in a scene", "start_timestamp": "00:13:42", "end_timestamp": "00:14:23", "start_second": 822, "end_second": 863, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=822s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": "alright that's the algorithm at a high level they do show their loss here you see the loss ultimately is going to be over this matching right here that's the minimum bipartite 
assignment that basically minimizes this total loss over your prediction and label matchings and the loss they come up with here I said you have to give the algorithm a loss is this one and they kind of go into how they do it I don't think it's super important so the loss on the class labels I", "start_timestamp": "00:14:23", "end_timestamp": "00:15:06", "start_second": 863, "end_second": 906, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=863s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": "think it's going to be a softmax or sorry a cross-entropy loss like in usual classification and the loss to say whether two bounding boxes agree is a mixture of the l1 loss that compares two bounding boxes and this iou loss which is not dependent on the scale of the bounding boxes it kind of computes what fraction of the two bounding boxes overlap but in any case the losses basically consist of saying how much do the labels agree and how much do the bounding boxes agree okay again this is only possible because after that you", "start_timestamp": "00:15:06", "end_timestamp": "00:15:45", "start_second": 906, "end_second": 945, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=906s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": "compute this matching otherwise you would have no clue which predictions to compare to which other predictions so let's look at this architecture a bit more in detail as we said you have this what they call the backbone which is a convolutional neural network and with that you put in some positional encodings now I already said you should look at these features right here as just smaller 
feature versions of the image but they still have some image nature then they are flattened so once they are put in", "start_timestamp": "00:15:45", "end_timestamp": "00:16:25", "start_second": 945, "end_second": 985, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=945s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": "the transformer encoder because the transformer is naturally a sequence processing unit okay so it takes in just a sequence of vectors right here and since an image is not a sequence what you'll do is if you have your image features and we said we have a bunch of channels let's say we have four channels and their height and width and C you're going to unroll and flatten that into one sequence so this is height times width you basically unroll across these axes right here into this axis and its channels so", "start_timestamp": "00:16:25", "end_timestamp": "00:17:08", "start_second": 985, "end_second": 1028, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=985s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": "basically you have a sequence here of C dimensional feature vectors that you then put into your encoder okay so your encoder will now transform this sequence into an equally long sequence yet again of features and the good thing about a transformer because why do you use a transformer the good thing about the transformer is that in such a sequence and I've done videos on transformers you can basically look at the video attention is all you need if you want to understand this more fully this thing can", "start_timestamp": "00:17:08", "end_timestamp": "00:17:52", "start_second": 1028, "end_second": 1072, "url": 
"https://www.youtube.com/watch?v=T35ba_VXkMY&t=1028s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": "basically have attention so it has attention layers it can attend from each position to each position in a one-shot manner so as it transforms this representation up the transformer layers at each step it can basically aggregate information from everywhere in the sequence to anywhere else and therefore it's very powerful if you have a sequence and you need sort of global connections across the sequence this is very good for language processing because in a sentence let's look at this sentence the input images are matched", "start_timestamp": "00:17:52", "end_timestamp": "00:18:34", "start_second": 1072, "end_second": 1114, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=1072s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": "together all right applying blah blah blah blah blah blah blah blah blah blah and then they write the word they and you need to know that they refers to the input images okay but you see this is very very far away in the sentence so you need a model that makes use of long range dependencies and they make the case that in such a task right here you also need the long range dependencies because these bounding boxes as you see right here they can be quite large so if you have an image you need that this", "start_timestamp": "00:18:34", "end_timestamp": "00:19:14", "start_second": 1114, "end_second": 1154, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=1114s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": 
"T35ba_VXkMY", "text": "part here communicates with these and this and this and this part basically anywhere in the bounding box and these bounding boxes can be quite large so the transformer architecture actually makes sense here now I want to go a bit later into why I think it actually makes even more sense for bounding box detection but right now I just want to keep going through this architecture right here so if my computer here decides to come back yes we can go on so what we'll get out is yet another so in here we put this thing we put down here", "start_timestamp": "00:19:14", "end_timestamp": "00:19:55", "start_second": 1154, "end_second": 1195, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=1154s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": "we put into the transformer encoder and we get an equally sized equally shaped sequence out of the transformer encoder you see that this thing here goes as a side input into this transformer decoder so the transformer encoder here is just a bit more of a feature mapping technically just for the architecture you could think of just putting this into here but of course it's gonna go better with the transformer encoder the transformer decoder now does something similar but you see it has the encoder as a side input this is very much like", "start_timestamp": "00:19:55", "end_timestamp": "00:20:31", "start_second": 1195, "end_second": 1231, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=1195s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": "this is not like BERT BERT is an encoder-only transformer whereas this is much like the original attention is all you need transformer that has an encoder and then the 
decoder as a side input basically as conditioning information has the encoder output what does the decoder do again since it's a transformer it's going to take a sequence and output a sequence the sequence it takes right here is what they call object queries and this also is different from the attention is all you need paper and they don't do it", "start_timestamp": "00:20:31", "end_timestamp": "00:21:06", "start_second": 1231, "end_second": 1266, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=1231s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": "autoregressively they just do it one shot what does it mean it means that you start with a sequence here of four things and these are this big n and you output a sequence of four things and it's important to see where they're going to end up so these things are then directly going through a classifier that now outputs these class label bounding box outputs okay so each of these things is going to after transformation end up being one of", "start_timestamp": "00:21:06", "end_timestamp": "00:21:46", "start_second": 1266, "end_second": 1306, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=1266s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": "these bounding boxes either defining an object or saying that there isn't an object somewhere okay you see here this bounding box refers to this bird this bounding box refers to this bird so each of these things is going to be one bounding box and with these what they call object queries the question of course is what do you input here right actually I want to transform this image information that comes from the left here
I want to transform that into the bounding boxes what do I input here and the answer is you just", "start_timestamp": "00:21:46", "end_timestamp": "00:22:22", "start_second": 1306, "end_second": 1342, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=1306s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": "input at the start you just input n random vectors because what's that gonna give you is basically n outputs you want n outputs because you want n of these bounding box classifications so you need n things and if I input n things into a transformer it's going to give me n things as an output and then in each step I can simply condition on the information that comes in from the images and it'll give me right then I can incorporate that information it's a very deep learning way of thinking about it actually that you just need the", "start_timestamp": "00:22:22", "end_timestamp": "00:22:57", "start_second": 1342, "end_second": 1377, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=1342s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": "information somewhere in there and I need n things now they go more into detail on this transformer architecture in a helpful fashion in the appendix and we'll go there quickly so this I think here makes more sense so the image features come in here right and you see this is just a transformer stack an encoder stack of multi-head self attention and an instance-wise or like token-wise feed-forward network and then that information is taken and is given as conditioning information over here now in here as I", "start_timestamp": "00:22:57", "end_timestamp": "00:23:42", "start_second": 1377, "end_second": 1422, "url": 
"https://www.youtube.com/watch?v=T35ba_VXkMY&t=1377s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": "said you input these object queries which at the beginning are just n random vectors and what you're going to do you Argos are going to feature and code them and then you combine it with this image information so ultimately if you think of this one of these things one of these things is going to be a vector right and then that vector is going to be transformed and then it will have as it is transformed it will have the opportunity to basically look at features that come from here now the arrow is in the wrong direction so you", "start_timestamp": "00:23:42", "end_timestamp": "00:24:19", "start_second": 1422, "end_second": 1459, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=1422s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": "have already taken the image and you've transformed it into a feature representation which is also a vector right you have the features of the image right here now as you transform this vector this object query queue you have the opportunity to look at the image features right and that's how do you get the image information in there so the image features will come in here transform that through attention so this is an attention mechanism on the image and then what you will output is a bounding box and a little class label it's really", "start_timestamp": "00:24:19", "end_timestamp": "00:25:01", "start_second": 1459, "end_second": 1501, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=1459s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": 
"T35ba_VXkMY", "text": "hard to explain I would guess you need to understand really what attention mechanisms are and of course the crucial part of of course is what what's this what do you input at the beginning and these object queries aren't actually random as I said they are learned so what you're going to do is you're going to learn independent of the input image you're going to learn n different object queries and these object queries now it's very it's very interesting because these object queries are sort of going to be different it's like you have", "start_timestamp": "00:25:01", "end_timestamp": "00:25:42", "start_second": 1501, "end_second": 1542, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=1501s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": "different people that can ask the input image different questions right and this they have so there n is 100 but they show 20 of these object queries that they learn and so did they have visualization of all bounding box predictions on all images so it's it's sort of like you have n different people at your disposal and you train these n different people to kind of ask different questions of the input image ok you say this person up here will always ask irrespective of what the input image is will always ask sort of", "start_timestamp": "00:25:42", "end_timestamp": "00:26:24", "start_second": 1542, "end_second": 1584, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=1542s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": "hey input image what's what's on your bottom left right that's I'm really interested what's on your bottom left and sometimes I'm a bit interested in what's here but I'm mainly interested what's on the 
bottom left of the image whereas this person right here sorry this person right here is more interested in what's in the center the different colors here refer to different sizes of bounding boxes so this person is also interested so the person on the top-left is interested mainly in I think small bounding boxes that are on the bottom", "start_timestamp": "00:26:24", "end_timestamp": "00:27:02", "start_second": 1584, "end_second": 1622, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=1584s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": "left and the person here is mostly interested in what's in the center I'm really interested in what's large in the center I want give me large things that are in the center right and then this person right here is really interested in stuff that's on the right side of the image so you see in order to get different sort of a difference in bounding box predictions you train n different people to ask different questions of the input image and this asking of questions is exactly what an attention mechanism is so this person", "start_timestamp": "00:27:02", "end_timestamp": "00:27:41", "start_second": 1622, "end_second": 1661, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=1622s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": "right here let's take this person and I'm saying person these are vectors these are learned object queries but this person will first simply ask the question what's on the right side and then the image features right I'm poorly drawing the image features it will have an attention mechanism to this part of the image features and then it will get back some signal right and then
it will transform that with its own signal up and then it will ask maybe again okay now that I know more because you see that person is", "start_timestamp": "00:27:41", "end_timestamp": "00:28:26", "start_second": 1661, "end_second": 1706, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=1661s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": "interested in multiple things it's interested in those things and those things so at first it will focus on these things but then it says oh now I know more right I see there is actually something on the right side so in the higher layers it can then go back and ask the image more questions by sending these Q vectors of the attention mechanism and it will get back the V vectors from the image features that correspond to these Q things so up and up the layers this person can ask more refined", "start_timestamp": "00:28:26", "end_timestamp": "00:29:00", "start_second": 1706, "end_second": 1740, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=1706s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": "questions about what that particular person is interested in okay and since you have the different people here that ask different questions you basically learn the people in a way such that across the data set all together they cover every possible image pretty well again these people what they're interested in initially is not dependent on the picture you simply learn this in a global manner all right this is the best way I have of describing it you basically learn n people where each one is interested in different things", "start_timestamp": "00:29:00", "end_timestamp": "00:29:39", "start_second": 1740, 
"end_second": 1779, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=1740s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": "different classes and different regions in the image and each one of these people is going to output their best guess of what is where based on what they're interested in so that person might say I'm you know I'm the person that's interested kind of in the left side of things so I am going to output that there is a bird right here now these people if this is a transformer right and everything can attend to everything they can actually communicate with each other as they incorporate information from the image so in each", "start_timestamp": "00:29:39", "end_timestamp": "00:30:15", "start_second": 1779, "end_second": 1815, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=1779s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": "layer they can do both they can incorporate information from the image and they can communicate with each other and then in the next layer that can do it again and again and again and thereby they can sort of they can sort of say well you already got the left side I will take the right side you already got the bird class I will take the elephant class and so on so you see here how the the architecture of the transformer actually is also very conducive to doing this bounding box prediction in that these different things can sort of", "start_timestamp": "00:30:15", "end_timestamp": "00:30:49", "start_second": 1815, "end_second": 1849, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=1815s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": 
"https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": "attend to each other and therefore communicate with each other all right I hope that sort of makes sense now before we get into the experiments I want to list a third reason of why the transformer especially the encoders might actually also make a giant amount of sense here since you on the image into height and width and you have to imagine what does the transformer do the transformer as we said here has this notion of a tension where from any point in the sequence it can gather information from any other point in the sequence and this that's", "start_timestamp": "00:30:49", "end_timestamp": "00:31:30", "start_second": 1849, "end_second": 1890, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=1849s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": "usually one of the downsides of the Transformers is done via a quadratic attention mechanism so if I just list one feature channel go over here if I just list one feature Channel right here this is height times width of the image right this is this is the entire image unrolled in one vector height times width and here I unroll it again height times width then this this matrix that I can build right here which is called the attention matrix right here it will tell me which parts of the sequence attends to which other parts okay so if you have", "start_timestamp": "00:31:30", "end_timestamp": "00:32:14", "start_second": 1890, "end_second": 1934, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=1890s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": "an image that has the let's say the number three and you really want to figure out whether or not 
this is a three then the bow up here must communicate with the bow down here right they need to share information you say oh there's a bow here there's a bow here and there is a spiky thing here that must be a three so you want something this is rather at the beginning of the sequence you want this to attend first of all it will attend to itself so you get fairly high values along the diagonal maybe 10 10 10 11 11 12 and this is all", "start_timestamp": "00:32:14", "end_timestamp": "00:32:49", "start_second": 1934, "end_second": 1969, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=1934s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": "exaggerated a hundred a million ninety-nine but also this part here at the beginning of the sequence let's say it's here because this is unrolled right needs to attend to the end so this needs to attend to the end which we will put an 11 here and the other way around it doesn't always need to be symmetrical by the way okay but in any case this is going to be an H times W squared matrix because everything can attend to everything and that's the attention mechanism why do I think this is so good for bounding boxes because let's", "start_timestamp": "00:32:49", "end_timestamp": "00:33:29", "start_second": 1969, "end_second": 2009, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=1969s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": "imagine you actually have a matrix that is like this okay height times width times height times width every single point in here actually defines a bounding box because this point right here in this dimension corresponds to one location in the image and on this axis it corresponds to another location now
in the attention matrix simply means these two points need to communicate but if you have two pixels you actually have defined a bounding box right here right you're actually defining a bounding", "start_timestamp": "00:33:29", "end_timestamp": "00:34:05", "start_second": 2009, "end_second": 2045, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=2009s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": "box and the fact that this is happening in the exact same matrices could mean that the Transformers across sequences of these height times width unrolled images are uniquely well conducive to these bounding box prediction tasks I'm actually a bit astounded because when I first just read the title this immediately popped to my mind I'm like oh yes of course they're going to predict the bounding boxes by simply training so what I thought this was gonna be is that you", "start_timestamp": "00:34:05", "end_timestamp": "00:34:42", "start_second": 2045, "end_second": 2082, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=2045s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": "output an actual matrix like this and then for each point you can simply classify right so you can classify here whether or not in this direction there is a bird right and then if you have two points like this for example you also classify whether in this direction there is a bird right and this naturally defines a bounding box or you could like take this matrix and actually just classify individual points in this matrix to be the bounding boxes because they already define bounding boxes so I", "start_timestamp": 
"00:34:42", "end_timestamp": "00:35:20", "start_second": 2082, "end_second": 2120, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=2082s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": "just I think these these quadratic things are are uniquely I mean someone must have thought of this or if not like the YouTube channel it would be funny first paper ever to actually have to cite the YouTube channel but again yeah so transformers seem to be a good idea for these kinds of things so how do they do of course they do well they are on par where with these other much much much more complex architectures these faster our CNN models they are apparently much more complex but they are on par with this they do however train forever", "start_timestamp": "00:35:20", "end_timestamp": "00:36:01", "start_second": 2120, "end_second": 2161, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=2120s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": "I think they train for like six days on eight GPUs is not that much if you compare to like language models on hundreds of TP use but still okay I don't want to go into the numbers of experiments but what is really cool is that they can now visualize this sort of attention and you can see right here that if they look at a particular point in the image and visualize the attention it will actually attend to the instance itself so it will like these are usually the problems for these detection algorithms when things overlap and are", "start_timestamp": "00:36:01", "end_timestamp": "00:36:38", "start_second": 2161, "end_second": 2198, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=2161s", "title": "DETR: End-to-End Object Detection with Transformers (Paper 
Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": "partially occluded but you can see right here that the attention is on the part of the image that makes the instance in the back and the attention here is on the part of this and it doesn't sort of overlap into the others so that is one thing that's pretty impressive about these architectures the other thing they show is for example it can generalize to many many instances so here it has never seen 24 giraffes in one image but yet it can absolutely do that and giraffe giraffe to rupture after of and the one of the coolest images I find are these", "start_timestamp": "00:36:38", "end_timestamp": "00:37:19", "start_second": 2198, "end_second": 2239, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=2198s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": "here where you can see right here again attention visualisation and you see that even within the bounding box of the front elephant here you see that the attention on this foot of the back elephant is is is assigned to this blue bounding box so this is the blue basically the blue bounding box person that is attending to that back foot that means they they these things really sort of understand or they learn these things like occlusion and you know just hard I have a hard time describing it but you can see it visually here right like how", "start_timestamp": "00:37:19", "end_timestamp": "00:38:07", "start_second": 2239, "end_second": 2287, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=2239s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": "it clearly learns that these are two instances that are sort of occluding 
each other but this instance can actually appear within the bounding box of the other instance and the same goes for the zebras here that are partially occluding each other and you can see that the attention is correctly like even here this back foot of this zebra is correctly labeled so all in all that is pretty cool and they take it a step further and they say well with this architecture we can actually pretty easily do pixel wise classification so", "start_timestamp": "00:38:07", "end_timestamp": "00:38:46", "start_second": 2287, "end_second": 2326, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=2287s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": "this is this COCO stuff and things data set where I don't know which one is the stuff and which one is the things I think things is the objects and stuff is like sky and mountains and so on and so this is a classification task where you actually have to label every single pixel so what they do is they simply input this through their detector and they detect the instances they take the attention maps of the instances and then they scale it up this right here is just a CNN sort of in reverse that scales up the image because they have scaled it", "start_timestamp": "00:38:46", "end_timestamp": "00:39:25", "start_second": 2326, "end_second": 2365, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=2326s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": "down as we said they scale it up again and then they can simply classify each pixel where each of these you remember we had these different people here that cared about different things in the image each of these people will classify their respective pixels the pixels 
they feel responsible for and then you simply merge all of these people's predictions together into this prediction and again this gives pretty impressive results I mean this is fun this looks like it sort of works I haven't", "start_timestamp": "00:39:25", "end_timestamp": "00:40:05", "start_second": 2365, "end_second": 2405, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=2365s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "T35ba_VXkMY", "text": "they do quantitative analysis of course but I'm just impressed by the examples right here alright that was sort of it I really enjoyed reading this paper the simplicity is pretty cool not only do they have code in the paper to show how ridiculously easy it is to get this to run this is all you need in PyTorch but they do actually have code and as I understand they also have pre-trained models so they have this model zoo right here where they give you the pre-trained models so you can play with", "start_timestamp": "00:40:05", "end_timestamp": "00:40:40", "start_second": 2405, "end_second": 2440, "url": "https://www.youtube.com/watch?v=T35ba_VXkMY&t=2405s", "title": "DETR: End-to-End Object Detection with Transformers (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/T35ba_VXkMY/maxresdefault.jpg"} {"video_id": "xtOg44r6dsE", "text": "[Music] hi guys this is Laker from Edureka the evolution of AI has changed the entire 21st century in terms of technology AI has stolen the spotlight and its advancements are quicker than we predicted with such an exponential growth in AI machine learning is becoming the most trending field of the 21st century it is starting to redefine the way we live and it's time we understood what it is and why it matters in this session we'll be discussing the different types of machine learning and we'll compare them to each other so 
let", "start_timestamp": "00:00:00", "end_timestamp": "00:00:43", "start_second": 0, "end_second": 43, "url": "https://www.youtube.com/watch?v=xtOg44r6dsE&t=0s", "title": "Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka", "thumbnail": "https://i.ytimg.com/vi/xtOg44r6dsE/maxresdefault.jpg"} {"video_id": "xtOg44r6dsE", "text": "me run you through today's agenda we're going to begin the session with an introduction to machine learning next we will discuss the types of machine learning after that we'll compare supervised unsupervised and reinforcement learning based on a few key parameters we'll finally end the session by discussing a few example problems that can be solved using supervised unsupervised and reinforcement learning algorithms so without any further delay let's get started so guys machine learning is the science of getting computers to act by", "start_timestamp": "00:00:43", "end_timestamp": "00:01:14", "start_second": 43, "end_second": 74, "url": "https://www.youtube.com/watch?v=xtOg44r6dsE&t=43s", "title": "Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka", "thumbnail": "https://i.ytimg.com/vi/xtOg44r6dsE/maxresdefault.jpg"} {"video_id": "xtOg44r6dsE", "text": "feeding them data and letting them learn a few tricks on their own without being explicitly programmed now this sounds awfully a lot like a human child so let's consider a small scenario to understand machine learning now as a child if you had to distinguish between fruits such as cherries apples and oranges you wouldn't even know where to start because you're not familiar with how the fruits look now as we grow up we collect more information and start developing the capability to distinguish between various fruits the only reason", "start_timestamp": "00:01:14", "end_timestamp": "00:01:46", "start_second": 74, "end_second": 106, "url": "https://www.youtube.com/watch?v=xtOg44r6dsE&t=74s", 
"title": "Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka", "thumbnail": "https://i.ytimg.com/vi/xtOg44r6dsE/maxresdefault.jpg"} {"video_id": "xtOg44r6dsE", "text": "why we are able to make this distinction is because we absorb our surroundings we gathered more data and we learn from our past experiences it's because our brain is capable enough to think and make decisions since we have been feeding it a lot of data and this is exactly how machine learning works it involves continuously feeding data to a machine so that it can interpret this data understand the useful insides detect patterns and ident my key features to solve problems this is very similar to how our brain works", "start_timestamp": "00:01:46", "end_timestamp": "00:02:17", "start_second": 106, "end_second": 137, "url": "https://www.youtube.com/watch?v=xtOg44r6dsE&t=106s", "title": "Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka", "thumbnail": "https://i.ytimg.com/vi/xtOg44r6dsE/maxresdefault.jpg"} {"video_id": "xtOg44r6dsE", "text": "now let's move ahead and take a look at the different types of machine learning so first of all we have supervised learning now guys supervised means to oversee or direct a certain activity and make sure it's done correctly in this type of learning the machine learns under guidance so at school or teachers guided us and taught us similarly in supervised learning machines learn by feeding them label data and explicitly telling them hey this is the input and this is exactly how the output must look okay so the", "start_timestamp": "00:02:17", "end_timestamp": "00:02:50", "start_second": 137, "end_second": 170, "url": "https://www.youtube.com/watch?v=xtOg44r6dsE&t=137s", "title": "Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka", "thumbnail": "https://i.ytimg.com/vi/xtOg44r6dsE/maxresdefault.jpg"} {"video_id": 
"xtOg44r6dsE", "text": "teacher in this case is the training data next we have unsupervised learning unsupervised means to act without anyone's supervision or without anybody's direction now here the data is not labeled there is no guide and the machine has to figure out the data set given and it has to find hidden patterns in order to make predictions about the output an example of unsupervised learning is an adult like you and me we don't need a guide to help us with our daily activities we can figure things out on our own without any supervision", "start_timestamp": "00:02:50", "end_timestamp": "00:03:23", "start_second": 170, "end_second": 203, "url": "https://www.youtube.com/watch?v=xtOg44r6dsE&t=170s", "title": "Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka", "thumbnail": "https://i.ytimg.com/vi/xtOg44r6dsE/maxresdefault.jpg"} {"video_id": "xtOg44r6dsE", "text": "finally we have reinforcement learning now guys reinforcement means to establish or encourage a pattern of behavior let's say that you were dropped off at an isolated island what would you do now initially you'd panic and you'd be unsure of what to do where to get food from how to live and so on but after a while you will have to adapt you must learn how to live in the island adapt to the changing climates learn more to eat and what not to eat so here you're basically following the hit and trial concept because you new to the", "start_timestamp": "00:03:23", "end_timestamp": "00:03:57", "start_second": 203, "end_second": 237, "url": "https://www.youtube.com/watch?v=xtOg44r6dsE&t=203s", "title": "Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka", "thumbnail": "https://i.ytimg.com/vi/xtOg44r6dsE/maxresdefault.jpg"} {"video_id": "xtOg44r6dsE", "text": "surrounding and the only way to learn is experience and then learn from your experience this is what reinforcement learning is it is a learning 
method wherein an agent which is basically you stuck on the island interacts with its environment which is the island by producing actions and discovers errors or rewards and once the agent gets trained it gets ready to predict the new data presented to it now let's move ahead and look at the differences between supervised, unsupervised and reinforcement learning so let's begin by looking at their definitions", "start_timestamp": "00:03:57", "end_timestamp": "00:04:32", "start_second": 237, "end_second": 272, "url": "https://www.youtube.com/watch?v=xtOg44r6dsE&t=237s", "title": "Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka", "thumbnail": "https://i.ytimg.com/vi/xtOg44r6dsE/maxresdefault.jpg"} {"video_id": "xtOg44r6dsE", "text": "now like I mentioned earlier supervised learning is a type of machine learning wherein we teach the machine using labeled data so the input and the output are labeled next we have unsupervised learning over here the data provided to the machine is not labeled and the machine has to learn without any supervision so it has to discover hidden patterns and trends in the data finally we have reinforcement learning now the basic concept behind reinforcement learning is that there is an agent now this agent is put in an", "start_timestamp": "00:04:32", "end_timestamp": "00:05:04", "start_second": 272, "end_second": 304, "url": "https://www.youtube.com/watch?v=xtOg44r6dsE&t=272s", "title": "Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka", "thumbnail": "https://i.ytimg.com/vi/xtOg44r6dsE/maxresdefault.jpg"} {"video_id": "xtOg44r6dsE", "text": "unknown environment so the agent has to explore the environment by taking actions and transitioning from one state to the other so that it can get maximum rewards now the next parameter to consider is the type of problems that are solved using supervised unsupervised and reinforcement learning so under
supervised learning we have two main categories of problems we have regression problems and we have classification problems now guys there is an important difference between classification and regression basically", "start_timestamp": "00:05:04", "end_timestamp": "00:05:35", "start_second": 304, "end_second": 335, "url": "https://www.youtube.com/watch?v=xtOg44r6dsE&t=304s", "title": "Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka", "thumbnail": "https://i.ytimg.com/vi/xtOg44r6dsE/maxresdefault.jpg"} {"video_id": "xtOg44r6dsE", "text": "classification is about predicting a label or a class whereas regression is about predicting a continuous quantity now let's say that you have to classify your emails into two different groups so here basically we'll be labeling our emails as spam and non-spam mails for this kind of problem where we have to assign our input data into different classes we make use of classification algorithms on the other hand regression is used to predict a continuous quantity now a continuous variable is a variable that has an infinite number of", "start_timestamp": "00:05:35", "end_timestamp": "00:06:08", "start_second": 335, "end_second": 368, "url": "https://www.youtube.com/watch?v=xtOg44r6dsE&t=335s", "title": "Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka", "thumbnail": "https://i.ytimg.com/vi/xtOg44r6dsE/maxresdefault.jpg"} {"video_id": "xtOg44r6dsE", "text": "possibilities for example a person's weight so someone could be 180 pounds or they could be 180 point 10 pounds or 180 point 1 1 0 pounds now the number of possibilities for weight are limitless and this is exactly what a continuous variable is so regression is a predictive analysis used to predict continuous variables here you don't have to label data into different classes instead you have to predict a final outcome like let's say that you want to predict the price of a stock
over a period for such problems you can make use of", "start_timestamp": "00:06:08", "end_timestamp": "00:06:42", "start_second": 368, "end_second": 402, "url": "https://www.youtube.com/watch?v=xtOg44r6dsE&t=368s", "title": "Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka", "thumbnail": "https://i.ytimg.com/vi/xtOg44r6dsE/maxresdefault.jpg"} {"video_id": "xtOg44r6dsE", "text": "regression algorithms coming to unsupervised learning this type of learning can be used to solve association problems and clustering problems association problems basically involve discovering patterns in data finding co-occurrences and so on a classic example of Association rule mining is a relationship between bread and jam so people who tend to buy bread also tend to buy jam over here it's all about finding associations between items that frequently co-occur or items are similar to each other apart from Association problems unsupervised", "start_timestamp": "00:06:42", "end_timestamp": "00:07:18", "start_second": 402, "end_second": 438, "url": "https://www.youtube.com/watch?v=xtOg44r6dsE&t=402s", "title": "Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka", "thumbnail": "https://i.ytimg.com/vi/xtOg44r6dsE/maxresdefault.jpg"} {"video_id": "xtOg44r6dsE", "text": "learning also deals with clustering and anomaly detection problems clustering is used for cases that involve targeted marketing wherein you are given a list of customers and some information about them and what you have to do is you have to cluster these customers based on their similarity now guys Digital AdWords use a clustering technique to cluster potential buyers into different categories based on their interests and their intent anomaly detection on the other hand is used for tracking unusual activities an example of this is credit", "start_timestamp": "00:07:18", "end_timestamp": "00:07:51", "start_second": 438, 
"end_second": 471, "url": "https://www.youtube.com/watch?v=xtOg44r6dsE&t=438s", "title": "Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka", "thumbnail": "https://i.ytimg.com/vi/xtOg44r6dsE/maxresdefault.jpg"} {"video_id": "xtOg44r6dsE", "text": "card fraud where in various unsupervised algorithms are used to detect suspicious activities then there is reinforcement learning now this type of learning is comparatively different in reinforcement learning the key difference is that the input itself depends on the actions we take for example in robotics we might start in a situation where the robot does not know anything above the surrounding it is in so after it performs certain actions it finds out more about the world but the world it sees depends on whether it chooses to", "start_timestamp": "00:07:51", "end_timestamp": "00:08:25", "start_second": 471, "end_second": 505, "url": "https://www.youtube.com/watch?v=xtOg44r6dsE&t=471s", "title": "Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka", "thumbnail": "https://i.ytimg.com/vi/xtOg44r6dsE/maxresdefault.jpg"} {"video_id": "xtOg44r6dsE", "text": "move right or whether it shows to move forward or backward in this case the robot is known as the agent and its surrounding is the environment so for each action it takes it can receive a reward or it might receive a punishment now the next parameter is the type of data used to train a machine when it comes to supervised learning it's quite clear and simple the machine will be provided with a label set of input and output data in the training phase itself so basically you feed the output of your algorithm into the system this means", "start_timestamp": "00:08:25", "end_timestamp": "00:08:58", "start_second": 505, "end_second": 538, "url": "https://www.youtube.com/watch?v=xtOg44r6dsE&t=505s", "title": "Supervised vs Unsupervised vs Reinforcement Learning | Data Science 
Certification Training | Edureka", "thumbnail": "https://i.ytimg.com/vi/xtOg44r6dsE/maxresdefault.jpg"} {"video_id": "xtOg44r6dsE", "text": "that in supervised learning the machine already knows the output of the algorithm before it starts working on it now an example is classifying a data set into either cats or dogs alright so if the algorithm is fed an image of a cat the image is labeled as a cat similarly for a dog so guys this is how the model is taught it's told that this is a cat by labeling it after the algorithm is taught it is then tested using a new data set but a point to remember here is that in the training phase for a supervised learning algorithm the beta", "start_timestamp": "00:08:58", "end_timestamp": "00:09:33", "start_second": 538, "end_second": 573, "url": "https://www.youtube.com/watch?v=xtOg44r6dsE&t=538s", "title": "Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka", "thumbnail": "https://i.ytimg.com/vi/xtOg44r6dsE/maxresdefault.jpg"} {"video_id": "xtOg44r6dsE", "text": "is labeled alright the input is also labeled and the output is also labeled in unsupervised learning the machine is only given the input data so here we don't tell the system where to go the system has to understand itself from the input data that we give to it so it does this by finding patterns in the data so if we try to classify images into cats and dogs in unsupervised learning the machine will be fed images of cats and dogs and at the end it will form two groups one containing cats and the other containing dogs now the only difference", "start_timestamp": "00:09:33", "end_timestamp": "00:10:06", "start_second": 573, "end_second": 606, "url": "https://www.youtube.com/watch?v=xtOg44r6dsE&t=573s", "title": "Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka", "thumbnail": "https://i.ytimg.com/vi/xtOg44r6dsE/maxresdefault.jpg"} {"video_id": "xtOg44r6dsE", "text": "here is 
that it won't add labels to the output okay it will just understand how cats look and cluster them into one group and similarly for dogs coming to reinforcement learning there is no predefined data the input depends on the actions taken by the agent now these actions are then recorded in the form of matrices so that it can serve as a memory to the agent so basically as the agent explores the environment it will collect data which is then used to get the output so guys in reinforcement learning there is", "start_timestamp": "00:10:06", "end_timestamp": "00:10:38", "start_second": 606, "end_second": 638, "url": "https://www.youtube.com/watch?v=xtOg44r6dsE&t=606s", "title": "Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka", "thumbnail": "https://i.ytimg.com/vi/xtOg44r6dsE/maxresdefault.jpg"} {"video_id": "xtOg44r6dsE", "text": "no predefined data set given to the machine the agent does all the work from scratch the next parameter to consider is training in supervised learning the training phase is well defined and very explicit the machine is fed training data where both the input and output are labeled and the only thing the algorithm has to do is map the input to the output so the training data acts like a teacher or a guide over here now once the algorithm is well trained it is tested using the new data when it comes to unsupervised learning the training phase", "start_timestamp": "00:10:38", "end_timestamp": "00:11:11", "start_second": 638, "end_second": 671, "url": "https://www.youtube.com/watch?v=xtOg44r6dsE&t=638s", "title": "Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka", "thumbnail": "https://i.ytimg.com/vi/xtOg44r6dsE/maxresdefault.jpg"} {"video_id": "xtOg44r6dsE", "text": "is vague because the machine is only given the input and it has to figure out the output on its own so there's no supervisor here or there's no mentor over here in reinforcement
learning there is no predefined data and the whole reinforcement learning process itself is a training and testing phase since there is no predefined data given to the machine it has to learn everything on its own and it starts by exploring and collecting data the next parameter we're going to discuss is the aim of each of these machine learning", "start_timestamp": "00:11:11", "end_timestamp": "00:11:41", "start_second": 671, "end_second": 701, "url": "https://www.youtube.com/watch?v=xtOg44r6dsE&t=671s", "title": "Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka", "thumbnail": "https://i.ytimg.com/vi/xtOg44r6dsE/maxresdefault.jpg"} {"video_id": "xtOg44r6dsE", "text": "types the main aim or the end goal of a supervised learning algorithm is to forecast an outcome now obviously that is the basic aim of all these machine learning types but the whole supervised learning process is built in such a way that it can directly give you a predicted outcome because supervised learning algorithms have a very well-defined training phase unsupervised learning is all about discovering patterns and extracting useful insights now since the algorithm is only fed the input it has to find a way to get to the", "start_timestamp": "00:11:41", "end_timestamp": "00:12:13", "start_second": 701, "end_second": 733, "url": "https://www.youtube.com/watch?v=xtOg44r6dsE&t=701s", "title": "Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka", "thumbnail": "https://i.ytimg.com/vi/xtOg44r6dsE/maxresdefault.jpg"} {"video_id": "xtOg44r6dsE", "text": "output by finding trends and associations in the data coming to reinforcement learning the agent here is a lot like a human child just like how a baby is clueless about the world initially the agent also has no idea about its environment but as it explores the environment it starts learning it learns from the mistakes it makes and it basically learns from 
its experience now let's look at the approach followed when it comes to supervised learning it's quite simple like I mentioned earlier all that the algorithm has to do is map", "start_timestamp": "00:12:13", "end_timestamp": "00:12:45", "start_second": 733, "end_second": 765, "url": "https://www.youtube.com/watch?v=xtOg44r6dsE&t=733s", "title": "Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka", "thumbnail": "https://i.ytimg.com/vi/xtOg44r6dsE/maxresdefault.jpg"} {"video_id": "xtOg44r6dsE", "text": "the known input to the known output in unsupervised learning the algorithm has to find patterns in data trends in data and keep exploring the data until it reaches the output the approach followed by reinforcement learning is a trial and error method the trial and error method best explains reinforcement learning because the agent has to try out all possible actions to learn about its environment and to get maximum rewards the next parameter is feedback now in supervised learning there is a direct feedback mechanism since the machine is", "start_timestamp": "00:12:45", "end_timestamp": "00:13:19", "start_second": 765, "end_second": 799, "url": "https://www.youtube.com/watch?v=xtOg44r6dsE&t=765s", "title": "Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka", "thumbnail": "https://i.ytimg.com/vi/xtOg44r6dsE/maxresdefault.jpg"} {"video_id": "xtOg44r6dsE", "text": "trained with both the input and output for unsupervised learning there is no feedback mechanism because the machine is unaware of the output during the training phase now in reinforcement learning the feedback is in the form of rewards or punishments from the environment so when an agent takes a suitable action it will get a corresponding reward for that action but if the action is wrong then it gets a punishment so rewards and punishments can be thought of with respect to a game now in a game when you win a stage
you", "start_timestamp": "00:13:19", "end_timestamp": "00:13:48", "start_second": 799, "end_second": 828, "url": "https://www.youtube.com/watch?v=xtOg44r6dsE&t=799s", "title": "Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka", "thumbnail": "https://i.ytimg.com/vi/xtOg44r6dsE/maxresdefault.jpg"} {"video_id": "xtOg44r6dsE", "text": "get extra coins but when you fail you have to go back to the same state and try again now let's look at some of the popular algorithms supervised learning has algorithms like linear regression which is mainly used for regression problems it also has algorithms like support vector machines decision trees and so on and these can also be used for classification problems coming to unsupervised learning we have algorithms like key means C means for clustering analysis and algorithms like a priori and Association rule mining to deal with", "start_timestamp": "00:13:48", "end_timestamp": "00:14:21", "start_second": 828, "end_second": 861, "url": "https://www.youtube.com/watch?v=xtOg44r6dsE&t=828s", "title": "Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka", "thumbnail": "https://i.ytimg.com/vi/xtOg44r6dsE/maxresdefault.jpg"} {"video_id": "xtOg44r6dsE", "text": "Association problems now reinforcement learning is just being explored recently a few algorithms include Q learning and the state action reward state action algorithm next up we have applications so guys supervised learning is widely used in the business sector for forecasting risks risk analysis predicting sales profit and so on coming to unsupervised learning so guys the recommendations you see when you shop online like for example if you buy a book on Amazon right you get a list of recommendations now these are all done", "start_timestamp": "00:14:21", "end_timestamp": "00:14:56", "start_second": 861, "end_second": 896, "url": "https://www.youtube.com/watch?v=xtOg44r6dsE&t=861s", 
"title": "Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka", "thumbnail": "https://i.ytimg.com/vi/xtOg44r6dsE/maxresdefault.jpg"} {"video_id": "xtOg44r6dsE", "text": "by unsupervised learning algorithms other applications include anomaly detection credit card fraud detection and so on now reinforcement learning is used in self-driving cars in building games and all of that one famous example is the alphago game I'm sure all if you have heard of that so guys those were the major differences between supervised unsupervised and reinforcement learning so now let me give you a few examples of problems that can be solved using supervised unsupervised and reinforcement learning algorithms all", "start_timestamp": "00:14:56", "end_timestamp": "00:15:28", "start_second": 896, "end_second": 928, "url": "https://www.youtube.com/watch?v=xtOg44r6dsE&t=896s", "title": "Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka", "thumbnail": "https://i.ytimg.com/vi/xtOg44r6dsE/maxresdefault.jpg"} {"video_id": "xtOg44r6dsE", "text": "right so our first use case is to study a bank credit data set and make a decision about whether to approve the loan of an applicant based on his profile so here we are going to be given a bank credit data set now the information that you see over here is for each of the customers so every customer's account balance purpose credit amount value savings everything is given in the data set and you have to predict whether you can approve the loan of an applicant based on his bank account balance based on his purpose his", "start_timestamp": "00:15:28", "end_timestamp": "00:16:00", "start_second": 928, "end_second": 960, "url": "https://www.youtube.com/watch?v=xtOg44r6dsE&t=928s", "title": "Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka", "thumbnail": "https://i.ytimg.com/vi/xtOg44r6dsE/maxresdefault.jpg"} 
{"video_id": "xtOg44r6dsE", "text": "credit amount and his savings so for this problem you can make use of the supervised learning algorithm known as key and an algorithm or key in your is neighbor algorithm now let's look at our next use case now here we have to establish a mathematical equation for distance as a function of speed so basically over here you're going to predict the distance that a car can travel based on its speed so guys the best algorithm to use for such a problem is the linear regression algorithm so the linear regression algorithm is", "start_timestamp": "00:16:00", "end_timestamp": "00:16:31", "start_second": 960, "end_second": 991, "url": "https://www.youtube.com/watch?v=xtOg44r6dsE&t=960s", "title": "Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka", "thumbnail": "https://i.ytimg.com/vi/xtOg44r6dsE/maxresdefault.jpg"} {"video_id": "xtOg44r6dsE", "text": "basically used to predict continuous quantities and in this case we have to predict the distance which is a continuous quantity and like I mentioned earlier a linear regression is a type of supervised learning algorithm okay moving on to our next few skills now the problem here is to cluster a set of movies as either good or average based on a social media outreach all right now if you read the problem statement properly you can see the word cluster alright this clearly means that this is a clustering problem and clustering", "start_timestamp": "00:16:31", "end_timestamp": "00:17:03", "start_second": 991, "end_second": 1023, "url": "https://www.youtube.com/watch?v=xtOg44r6dsE&t=991s", "title": "Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka", "thumbnail": "https://i.ytimg.com/vi/xtOg44r6dsE/maxresdefault.jpg"} {"video_id": "xtOg44r6dsE", "text": "problems fall under unsupervised learning so here we're going to make use of a algorithm known as k-means algorithm to form two clusters okay 
one cluster is going to contain popular movies and the other is going to contain non-popular movies based on their likes on social media now moving ahead our next problem statement is to perform Market Basket analysis by finding association between items bought at the grocery store again over here you can see the keyword association this means that this is an association problem now", "start_timestamp": "00:17:03", "end_timestamp": "00:17:36", "start_second": 1023, "end_second": 1056, "url": "https://www.youtube.com/watch?v=xtOg44r6dsE&t=1023s", "title": "Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka", "thumbnail": "https://i.ytimg.com/vi/xtOg44r6dsE/maxresdefault.jpg"} {"video_id": "xtOg44r6dsE", "text": "Association problems fall under the unsupervised learning algorithms and here we can make use of the Apriori algorithm to do this so here what you have to do is basically find the association between different items so if a person bought bread and butter together it means that there is an association between these two items so in this problem you're just going to find the association between different items and you're going to make use of the unsupervised learning algorithm called the Apriori algorithm so guys", "start_timestamp": "00:17:36", "end_timestamp": "00:18:05", "start_second": 1056, "end_second": 1085, "url": "https://www.youtube.com/watch?v=xtOg44r6dsE&t=1056s", "title": "Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka", "thumbnail": "https://i.ytimg.com/vi/xtOg44r6dsE/maxresdefault.jpg"} {"video_id": "xtOg44r6dsE", "text": "this is the last use case and over here the problem statement says that you're going to place an agent in any one of the rooms and basically the rooms are represented as 0 1 2 3 4 & 5 and the goal here is to reach the outside of the building now this is clearly a reinforcement learning problem all right to solve this you
can make use of the Q-learning algorithm and your end goal is to reach room number 5 so guys here you can see that there is no data set because the data set is going to be developed by the agent", "start_timestamp": "00:18:05", "end_timestamp": "00:18:35", "start_second": 1085, "end_second": 1115, "url": "https://www.youtube.com/watch?v=xtOg44r6dsE&t=1085s", "title": "Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka", "thumbnail": "https://i.ytimg.com/vi/xtOg44r6dsE/maxresdefault.jpg"} {"video_id": "xtOg44r6dsE", "text": "itself so guys over here the agent is responsible for collecting the data all right he's going to explore the environment collect useful information and then he's going to use this information to get to room number 5 so guys that was it for our use cases and with this we come to the end of today's video I hope all of you enjoyed it if you have any doubts or any queries regarding the session please leave them in the comment section and we'll get back to you at the earliest so guys thank you so much for watching this", "start_timestamp": "00:18:35", "end_timestamp": "00:19:03", "start_second": 1115, "end_second": 1143, "url": "https://www.youtube.com/watch?v=xtOg44r6dsE&t=1115s", "title": "Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka", "thumbnail": "https://i.ytimg.com/vi/xtOg44r6dsE/maxresdefault.jpg"} {"video_id": "Y6RRHw9uN9o", "text": "hey guys and welcome to another fun and easy machine learning video on support vector machines so the other day I was walking through the park where I saw a lot of people with their pets dogs as well as cats and then I came across this strange creature and it was really challenging for me to tell whether it was a dog or a cat but I eventually figured it out that it was a cat groomed like a dog now if it was challenging for me to figure out imagine how difficult and challenging it would be for a computer to precisely
classify between a dog and a cat a", "start_timestamp": "00:00:00", "end_timestamp": "00:00:34", "start_second": 0, "end_second": 34, "url": "https://www.youtube.com/watch?v=Y6RRHw9uN9o&t=0s", "title": "Support Vector Machine (SVM) in 7 minutes - Fun Machine Learning", "thumbnail": "https://i.ytimg.com/vi/Y6RRHw9uN9o/maxresdefault.jpg"} {"video_id": "Y6RRHw9uN9o", "text": "really great algorithm for these types of applications is the support vector machine algorithm or SVM it looks at the extremes of the data sets and draws a decision boundary also known as a hyperplane near the extreme points in the data set so essentially the support vector machine algorithm is a frontier which best segregates the two classes so how does it work to understand SVMs better let's first take a look at why they're called support vector machines so say we got some sample data over here of features that classify whether an", "start_timestamp": "00:00:34", "end_timestamp": "00:01:08", "start_second": 34, "end_second": 68, "url": "https://www.youtube.com/watch?v=Y6RRHw9uN9o&t=34s", "title": "Support Vector Machine (SVM) in 7 minutes - Fun Machine Learning", "thumbnail": "https://i.ytimg.com/vi/Y6RRHw9uN9o/maxresdefault.jpg"} {"video_id": "Y6RRHw9uN9o", "text": "observed picture is a dog or cat so we can for example look at the snout length or the ear geometry if we assume that dogs generally have longer snouts and cats have much more pointy ear shapes so how would we decide where to draw our decision boundary well we can draw it over here or here or like this any of these would be fine but which would be the best if you do not have the optimal decision boundary we could incorrectly classify a dog as a cat so if we draw an arbitrary separation line and we use intuition to draw it somewhere between", "start_timestamp": "00:01:08", "end_timestamp": "00:01:48", "start_second": 68, "end_second": 108, "url": "https://www.youtube.com/watch?v=Y6RRHw9uN9o&t=68s", "title": "Support Vector Machine
(SVM) in 7 minutes - Fun Machine Learning", "thumbnail": "https://i.ytimg.com/vi/Y6RRHw9uN9o/maxresdefault.jpg"} {"video_id": "Y6RRHw9uN9o", "text": "this data point for the dog class and this point for the cat class these points are also known as support vectors which are defined as data points that the margin pushes up against all points that are close to the opposing class so the algorithm basically implies that only support vectors are important whereas other training examples are ignored an example of this is that if we have in our case a dog that looks like a cat or a cat that is groomed like a dog we want our classifier to look at the extremes and set our margins based on these support", "start_timestamp": "00:01:48", "end_timestamp": "00:02:23", "start_second": 108, "end_second": 143, "url": "https://www.youtube.com/watch?v=Y6RRHw9uN9o&t=108s", "title": "Support Vector Machine (SVM) in 7 minutes - Fun Machine Learning", "thumbnail": "https://i.ytimg.com/vi/Y6RRHw9uN9o/maxresdefault.jpg"} {"video_id": "Y6RRHw9uN9o", "text": "vectors so we have D plus which is the shortest distance to the closest positive point and D minus which is the shortest distance to the closest negative point and then we have the margin of a separating hyperplane which is D plus plus D minus the line or decision boundary that segregates the two classes is commonly referred to as a hyperplane because SVMs can be used in multi dimensional data sets and the data points are referred to as vectors as they have coordinates within the space of data so", "start_timestamp": "00:02:23", "end_timestamp": "00:02:56", "start_second": 143, "end_second": 176, "url": "https://www.youtube.com/watch?v=Y6RRHw9uN9o&t=143s", "title": "Support Vector Machine (SVM) in 7 minutes - Fun Machine Learning", "thumbnail": "https://i.ytimg.com/vi/Y6RRHw9uN9o/maxresdefault.jpg"} {"video_id": "Y6RRHw9uN9o", "text": "what we discussed so far is also known as linear support vector machines or LSVM because the classes are
linearly separable but what happens if we have a dataset that is not linearly separable so say we are presented with data that looks like this where it looks almost impossible to use a single line to separate the two classes we can use a function to transform our data into high dimensional space so you can see over here we go from one dimensional to two dimensional space we can apply a simple polynomial function to get a parabola and now you", "start_timestamp": "00:02:56", "end_timestamp": "00:03:32", "start_second": 176, "end_second": 212, "url": "https://www.youtube.com/watch?v=Y6RRHw9uN9o&t=176s", "title": "Support Vector Machine (SVM) in 7 minutes - Fun Machine Learning", "thumbnail": "https://i.ytimg.com/vi/Y6RRHw9uN9o/maxresdefault.jpg"} {"video_id": "Y6RRHw9uN9o", "text": "can easily see how we can draw our hyperplane we can do the same for this data set where it's easy to draw the hyperplane or line but for a machine we'll use a function to transform our data from two-dimensional to three dimensional feature space now the only problem with transformation into higher dimensional feature space is that it's computationally expensive we can use a kernel trick to reduce the computational costs a function that takes as inputs vectors in the original space and returns the dot product of the vectors", "start_timestamp": "00:03:32", "end_timestamp": "00:04:03", "start_second": 212, "end_second": 243, "url": "https://www.youtube.com/watch?v=Y6RRHw9uN9o&t=212s", "title": "Support Vector Machine (SVM) in 7 minutes - Fun Machine Learning", "thumbnail": "https://i.ytimg.com/vi/Y6RRHw9uN9o/maxresdefault.jpg"} {"video_id": "Y6RRHw9uN9o", "text": "in the feature space is called a kernel function also referred to as the kernel trick using a kernel function we can apply the dot product between two vectors so that every point is mapped into a high dimensional space via some transformation so essentially we use it to transform a non linear space into a linear space if you look at
some popular kernel types here are some popular kernel types that you can use to transform our data into high dimensional feature space there are the polynomial kernel the radial basis function or RBF", "start_timestamp": "00:04:03", "end_timestamp": "00:04:38", "start_second": 243, "end_second": 278, "url": "https://www.youtube.com/watch?v=Y6RRHw9uN9o&t=243s", "title": "Support Vector Machine (SVM) in 7 minutes - Fun Machine Learning", "thumbnail": "https://i.ytimg.com/vi/Y6RRHw9uN9o/maxresdefault.jpg"} {"video_id": "Y6RRHw9uN9o", "text": "kernel the sigmoid kernel amongst others unfortunately choosing the correct kernel is a non-trivial task and may depend on the specific task at hand no matter which kernel you choose you need to tune the kernel parameters to get good performance from a classifier a popular parameter tuning technique includes k-fold cross-validation you'll tune some of these parameters in our Python labs so the advantages of support vector machines are that they are effective in high dimensional spaces they are also effective in cases where the", "start_timestamp": "00:04:38", "end_timestamp": "00:05:10", "start_second": 278, "end_second": 310, "url": "https://www.youtube.com/watch?v=Y6RRHw9uN9o&t=278s", "title": "Support Vector Machine (SVM) in 7 minutes - Fun Machine Learning", "thumbnail": "https://i.ytimg.com/vi/Y6RRHw9uN9o/maxresdefault.jpg"} {"video_id": "Y6RRHw9uN9o", "text": "number of dimensions is greater than the number of samples they use a subset of training points in the decision function or support vectors so it's also memory efficient they are also versatile so different kernels can be specified for the decision function common kernels are provided but it's also possible to specify custom kernels we can add kernel functions together to achieve even more complex hyperplanes the disadvantages however of support vector machines include if the number of features is greater than the number of samples the", "start_timestamp": "00:05:10",
"end_timestamp": "00:05:43", "start_second": 310, "end_second": 343, "url": "https://www.youtube.com/watch?v=Y6RRHw9uN9o&t=310s", "title": "Support Vector Machine (SVM) in 7 minutes - Fun Machine Learning", "thumbnail": "https://i.ytimg.com/vi/Y6RRHw9uN9o/maxresdefault.jpg"} {"video_id": "Y6RRHw9uN9o", "text": "method is likely to give poor performance support vector machines do not directly provide probability estimates these are calculated using an expensive five-fold cross-validation if you take a look at the applications of support vector machines the support vector machine algorithm has numerous applications and can be quite a popular alternative to artificial neural networks or ANNs here are some applications from published journal papers so we can use support vector machines in medical imaging there's one application for SVM based regression", "start_timestamp": "00:05:43", "end_timestamp": "00:06:17", "start_second": 343, "end_second": 377, "url": "https://www.youtube.com/watch?v=Y6RRHw9uN9o&t=343s", "title": "Support Vector Machine (SVM) in 7 minutes - Fun Machine Learning", "thumbnail": "https://i.ytimg.com/vi/Y6RRHw9uN9o/maxresdefault.jpg"} {"video_id": "Y6RRHw9uN9o", "text": "models to study the air quality in urban areas in the city of Oviedo in Spain support vector machines are also used for image interpolation as well as medical classification tasks in the financial industry support vector machines are used for time series predictions and there's one paper on the application of neural networks mixed with support vector machines in coding theory and practice there's also one for pattern recognition for machine fault diagnosis which also uses support vector machines as well as a page ranking algorithm and", "start_timestamp": "00:06:17", "end_timestamp": "00:06:50", "start_second": 377, "end_second": 410, "url": "https://www.youtube.com/watch?v=Y6RRHw9uN9o&t=377s", "title": "Support Vector Machine (SVM) in 7 minutes - Fun Machine Learning",
"thumbnail": "https://i.ytimg.com/vi/Y6RRHw9uN9o/maxresdefault.jpg"} {"video_id": "yexR53My2O4", "text": "hi there today we'll look at distributed representations of words and phrases and their compositionality by Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado and Jeffrey Dean this is another historical paper it's one of three papers it's the middle one that introduces the original word2vec algorithm and as you might know word2vec was extremely influential in NLP since this paper basically until recently where it's sort of gone out of fashion a bit in research with the rise of things like ELMo and BERT but it's", "start_timestamp": "00:00:00", "end_timestamp": "00:00:37", "start_second": 0, "end_second": 37, "url": "https://www.youtube.com/watch?v=yexR53My2O4&t=0s", "title": "[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality", "thumbnail": "https://i.ytimg.com/vi/yexR53My2O4/maxresdefault.jpg"} {"video_id": "yexR53My2O4", "text": "still very very relevant so we'll look at this historical paper today with kind of the hindsight of being a couple years into the future in fact as you see right here this was released in 2013 so it's seven years later now and we'll look back and we'll see what they said back then about the system this is not going to be like a very you know well enhanced PowerPoint presentation of how word2vec works we're going to look at the paper and read it together if you like you know content like this if you like historical paper
it and of course subscribe because these kinds of historical papers I enjoy them but you know many people might already know what these things are so yeah ok let's you know go through the paper and pick up their ideas and kind of put them in context they say the recently introduced continuous skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic", "start_timestamp": "00:01:14", "end_timestamp": "00:01:49", "start_second": 74, "end_second": 109, "url": "https://www.youtube.com/watch?v=yexR53My2O4&t=74s", "title": "[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality", "thumbnail": "https://i.ytimg.com/vi/yexR53My2O4/maxresdefault.jpg"} {"video_id": "yexR53My2O4", "text": "word relationships so the skip-gram model was already introduced by Mikolov in an earlier paper that came out I believe not like one or two months prior to this one as I said word2vec is a series of papers I don't think there is a paper called word2vec they released the code along with the paper and the code was called word2vec so the skip-gram model was introduced previously but it is replicated right here so in the skip-gram model what you're trying to do is you're trying to get a distributed", "start_timestamp": "00:01:49", "end_timestamp": "00:02:25", "start_second": 109, "end_second": 145, "url": "https://www.youtube.com/watch?v=yexR53My2O4&t=109s", "title": "[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality", "thumbnail": "https://i.ytimg.com/vi/yexR53My2O4/maxresdefault.jpg"} {"video_id": "yexR53My2O4", "text": "word representation so what does that mean that means that for each word in your language let's take these words right here for each word in the language you want to come up with a vector that somehow describes that word in a continuous fashion so
that a word like 'the' might be mapped to I don't know 0.1 0.9 and 0.3 'learn' might be mapped to negative 0.5 and so on so each word gets assigned a vector in the same dimensional space and what the previous paper kind of discovered is that if you do this correctly then these vectors", "start_timestamp": "00:02:25", "end_timestamp": "00:03:05", "start_second": 145, "end_second": 185, "url": "https://www.youtube.com/watch?v=yexR53My2O4&t=145s", "title": "[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality", "thumbnail": "https://i.ytimg.com/vi/yexR53My2O4/maxresdefault.jpg"} {"video_id": "yexR53My2O4", "text": "they have some kind of properties and we can already kind of jump ahead because this was already a bit researched in the last paper the semantics of these vectors will be something like this so here they have a two dimensional PCA so these are the first two dimensions of the one thousand dimensional skip-gram vector so the vectors they obtain they can do things like this where they can show that in these spaces for example there appears to be a vector direction that characterizes the capital of a country", "start_timestamp": "00:03:05", "end_timestamp": "00:03:41", "start_second": 185, "end_second": 221, "url": "https://www.youtube.com/watch?v=yexR53My2O4&t=185s", "title": "[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality", "thumbnail": "https://i.ytimg.com/vi/yexR53My2O4/maxresdefault.jpg"} {"video_id": "yexR53My2O4", "text": "so if you take a few countries and their capitals and you average that vector you get a kind of a direction for capital-ness of a city given a country you can see that there is a pretty clear relation here now some of these things have later been revised such that they ultimately ended up being not that impressive for example there was always this kind of math with vectors and I believe this 
might not be in this this is in the last paper where they discovered that if you take the vector for King and you", "start_timestamp": "00:03:41", "end_timestamp": "00:04:22", "start_second": 221, "end_second": 262, "url": "https://www.youtube.com/watch?v=yexR53My2O4&t=221s", "title": "[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality", "thumbnail": "https://i.ytimg.com/vi/yexR53My2O4/maxresdefault.jpg"} {"video_id": "yexR53My2O4", "text": "subtract the vector for man and you add the vector for a woman then that would result in the vector for Queen so the way they did it was basically they did this calculation right here and then they searched in the point they ended up they searched for the nearest neighbor in their vocabulary and that turned out to be Queen but in order to make it Queen actually you have to exclude the original word King and people quickly discovered that if you don't exclude the original word it you know the result of this kind of", "start_timestamp": "00:04:22", "end_timestamp": "00:05:01", "start_second": 262, "end_second": 301, "url": "https://www.youtube.com/watch?v=yexR53My2O4&t=262s", "title": "[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality", "thumbnail": "https://i.ytimg.com/vi/yexR53My2O4/maxresdefault.jpg"} {"video_id": "yexR53My2O4", "text": "arithmetic will almost always lead back to the original word and then a lot of these analogy tasks are simply the result of you then discarding that word during the nearest neighbor search and then Queen just happens to be one of the closest words and it's it sort of much less dependent on which exact calculation you do here so there's been a lot of follow-up work kind of analyzing criticizing these vector maths but definitely we know that these word vectors turned out to be extremely extremely helpful and syntactically and", "start_timestamp": "00:05:01", "end_timestamp": "00:05:36", "start_second": 
301, "end_second": 336, "url": "https://www.youtube.com/watch?v=yexR53My2O4&t=301s", "title": "[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality", "thumbnail": "https://i.ytimg.com/vi/yexR53My2O4/maxresdefault.jpg"} {"video_id": "yexR53My2O4", "text": "semantically relevant in downstream tasks because they have performed very very well so how does the skip-gram model work how does it assign vectors to each word so first of all it has a dictionary so there is a word an input word and for each word you have a big dictionary and the dictionary basically says that you know the word 'to' is going to be mapped to this vector 0.1 da-da-da and so on the word 'learn' is going to be mapped to that vector and then you also have these output vectors right here and", "start_timestamp": "00:05:36", "end_timestamp": "00:06:22", "start_second": 336, "end_second": 382, "url": "https://www.youtube.com/watch?v=yexR53My2O4&t=336s", "title": "[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality", "thumbnail": "https://i.ytimg.com/vi/yexR53My2O4/maxresdefault.jpg"} {"video_id": "yexR53My2O4", "text": "what you're trying to do is you're trying to take a phrase from the data set like this one right here and you take out one word like this word 'vector' right here and you're trying to frame this as a prediction task so you're trying to frame this as in this case four different prediction tasks so you're telling your machine I give you the word 'vector' and which other words are around the word 'vector' you just tell it that you don't tell it anything else you just say which other words are around the word 'vector' and the correct", "start_timestamp": "00:06:22", "end_timestamp": "00:07:06", "start_second": 382, "end_second": 426, "url": "https://www.youtube.com/watch?v=yexR53My2O4&t=382s", "title": "[Classic] Word2Vec: Distributed Representations of Words and Phrases and 
their Compositionality", "thumbnail": "https://i.ytimg.com/vi/yexR53My2O4/maxresdefault.jpg"} {"video_id": "yexR53My2O4", "text": "answers in this case would be 'to', 'learn', 'word' and 'representations' so from these you construct four different training examples where you have an x and a y so the X is always 'vector' and the Y is 'to' and then the next training sample the X is 'vector' and the Y is 'learn' and so on ok so each training sample is a classification task right and the classification task is as you can see no you can't see right here but the classification task is you have the input word and you classify it into one of many many many many many many classes", "start_timestamp": "00:07:06", "end_timestamp": "00:07:57", "start_second": 426, "end_second": 477, "url": "https://www.youtube.com/watch?v=yexR53My2O4&t=426s", "title": "[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality", "thumbnail": "https://i.ytimg.com/vi/yexR53My2O4/maxresdefault.jpg"} {"video_id": "yexR53My2O4", "text": "namely there are as many classes as you have words in the dictionary so each word in the dictionary will have a class associated with it right so in ImageNet you have like a thousand classes and that's already a lot but in these tasks you're gonna have a hundred thousand classes because there are a hundred thousand words in the English language that you want to treat and there are many more but in this case they leave away all the words that appear less than five times in their corpus that's still a lot of words", "start_timestamp": "00:07:57", "end_timestamp": "00:08:30", "start_second": 477, "end_second": 510, "url": "https://www.youtube.com/watch?v=yexR53My2O4&t=477s", "title": "[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality", "thumbnail": "https://i.ytimg.com/vi/yexR53My2O4/maxresdefault.jpg"} {"video_id": "yexR53My2O4", "text": "so it's like a super duper 
duper lot of a classification task but ultimately if you do something like this then the representation that you end up with is going to be very very good at doing these kinds of downstream tasks and that's what they discovered so their skip-gram model is nothing else than taking a word and predicting the surrounding words from that word and this is what it means this is the formal statement of the skip-gram objective what you want to do is the objective of the skip-gram model is to maximize the", "start_timestamp": "00:08:30", "end_timestamp": "00:09:10", "start_second": 510, "end_second": 550, "url": "https://www.youtube.com/watch?v=yexR53My2O4&t=510s", "title": "[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality", "thumbnail": "https://i.ytimg.com/vi/yexR53My2O4/maxresdefault.jpg"} {"video_id": "yexR53My2O4", "text": "average log probability this one so for the word we're considering the word t we want to maximize the log probability of each word w that is around the word t in a context window of c that's exactly what we did before we take a word like this 'model' right here and from it we predict all of the words around it in a given window right that's all that's the entire objective and that will give you very good representations and this is how you would implement that so what you'll have is you'll have these vector", "start_timestamp": "00:09:10", "end_timestamp": "00:09:58", "start_second": 550, "end_second": 598, "url": "https://www.youtube.com/watch?v=yexR53My2O4&t=550s", "title": "[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality", "thumbnail": "https://i.ytimg.com/vi/yexR53My2O4/maxresdefault.jpg"} {"video_id": "yexR53My2O4", "text": "representation V that comes from your original dictionary those are the things you learn and then because you have like a 30,000-way classifier you know that a classification 
layer is nothing else than a linear layer followed by a softmax operation and that linear layer also has parameters these are the primed vectors okay so first you have the lookup in the dictionary for the word vector right here and this is the vector of the classification layer now there are modifications where you can use like the same vectors and so on or you can also", "start_timestamp": "00:09:58", "end_timestamp": "00:10:35", "start_second": 598, "end_second": 635, "url": "https://www.youtube.com/watch?v=yexR53My2O4&t=598s", "title": "[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality", "thumbnail": "https://i.ytimg.com/vi/yexR53My2O4/maxresdefault.jpg"} {"video_id": "yexR53My2O4", "text": "make use of these vectors but ultimately you care about these vectors right here and the vectors here are simply the classification layer's weights so here you can see that what you're trying to maximize is the inner product between the word that you're considering and the words around that word and you're trying to do a classification task so you need to normalize now this is the normalization constant and it goes over all of your vocabulary so that's what they tackle here they say W is the number of words", "start_timestamp": "00:10:35", "end_timestamp": "00:11:21", "start_second": 635, "end_second": 681, "url": "https://www.youtube.com/watch?v=yexR53My2O4&t=635s", "title": "[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality", "thumbnail": "https://i.ytimg.com/vi/yexR53My2O4/maxresdefault.jpg"} {"video_id": "yexR53My2O4", "text": "in the vocabulary this formulation is impractical because the cost of computing the gradient is proportional to W which is often large and that's 10 to the 5 to 10 to the 7 terms so like tens of millions of terms in your vocabulary that's just not feasible right so people have been you know sort of trying different ways to get around very 
very large number of classes and here it seems that that is really the bottleneck in the previous paper they've already shown that this objective can give you very good word representations", "start_timestamp": "00:11:21", "end_timestamp": "00:11:57", "start_second": 681, "end_second": 717, "url": "https://www.youtube.com/watch?v=yexR53My2O4&t=681s", "title": "[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality", "thumbnail": "https://i.ytimg.com/vi/yexR53My2O4/maxresdefault.jpg"} {"video_id": "yexR53My2O4", "text": "but now we need to get around the fact that we have such large vocabularies so the first idea here is hierarchical softmax and this is kind of a tangent I find this paper by the way sort of hard to read because it's like a half engineering paper but yeah so first they introduce this hierarchical softmax which is kind of a distraction it's kind of a 'here is what we considered first' but then they didn't end up using it really they do compare with it but the flow of text is sort of that you expect this to be part of the final", "start_timestamp": "00:11:57", "end_timestamp": "00:12:33", "start_second": 717, "end_second": 753, "url": "https://www.youtube.com/watch?v=yexR53My2O4&t=717s", "title": "[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality", "thumbnail": "https://i.ytimg.com/vi/yexR53My2O4/maxresdefault.jpg"} {"video_id": "yexR53My2O4", "text": "model which it isn't so in the hierarchical softmax what you do instead of having this giant multi-class classification task right here you take all of these classes right here and you put them in a sort of a tree okay so you take this and you put them into a tree so instead of classifying you know let's say we have a thousand classes instead of classifying a thousand ways we first classify in two ways and then we classify in two ways again from each one and then we classify in two ways again as you know a 
thousand is like two to", "start_timestamp": "00:12:33", "end_timestamp": "00:13:10", "start_second": 753, "end_second": 790, "url": "https://www.youtube.com/watch?v=yexR53My2O4&t=753s", "title": "[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality", "thumbnail": "https://i.ytimg.com/vi/yexR53My2O4/maxresdefault.jpg"} {"video_id": "yexR53My2O4", "text": "the ten so we need approximately ten layers of this before we are actually arriving at a thousand classes but it also means that we only have two-way classifications each time so in the hierarchical softmax we build trees like this and then so we have a word we look up its vector and then we classify it for each of these nodes so your output isn't going to be a thousand log probabilities your output is going to be a binary log probability for each of the nodes right here so you want to know", "start_timestamp": "00:13:10", "end_timestamp": "00:13:54", "start_second": 790, "end_second": 834, "url": "https://www.youtube.com/watch?v=yexR53My2O4&t=790s", "title": "[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality", "thumbnail": "https://i.ytimg.com/vi/yexR53My2O4/maxresdefault.jpg"} {"video_id": "yexR53My2O4", "text": "okay is it in the upper half or the lower half of my classes okay cool it's in the upper half okay is it in the upper half or the lower half and so on and you learn to predict all of these junctions right here and that's going to end up with you having to predict less now of course you are constrained you impose a very big prior on the class distribution classes aren't independent anymore namely if two classes here are in the same subtree that means that they are going to be predicted their predictions are going to", "start_timestamp": "00:13:54", "end_timestamp": "00:14:28", "start_second": 834, "end_second": 868, "url": 
"https://www.youtube.com/watch?v=yexR53My2O4&t=834s", "title": "[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality", "thumbnail": "https://i.ytimg.com/vi/yexR53My2O4/maxresdefault.jpg"} {"video_id": "yexR53My2O4", "text": "be correlated because the path to them is partially the same so how you arrange the classes here is very important and there has been a lot of work in this but as I said this is rather a distraction right here hierarchical softmax is a way to solve this however they went with a different way right here they went with this approach called negative sampling negative sampling has been very influential not only in word2vec negative sampling is one of the cornerstones of the current trend in self supervised learning in contrastive", "start_timestamp": "00:14:28", "end_timestamp": "00:15:13", "start_second": 868, "end_second": 913, "url": "https://www.youtube.com/watch?v=yexR53My2O4&t=868s", "title": "[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality", "thumbnail": "https://i.ytimg.com/vi/yexR53My2O4/maxresdefault.jpg"} {"video_id": "yexR53My2O4", "text": "estimation and so on so all of this you know it pops up in unlikely ways in other fields and I'm not gonna say it originated here but definitely it was introduced into the popular deep learning world right here so they say an alternative to hierarchical softmax is noise contrastive estimation okay so noise contrastive estimation posits that a good model should be able to differentiate data from noise by means of logistic regression you know that seems very reasonable this is similar to the hinge loss and so on yada yada", "start_timestamp": "00:15:13", "end_timestamp": "00:15:55", "start_second": 913, "end_second": 955, "url": "https://www.youtube.com/watch?v=yexR53My2O4&t=913s", "title": "[Classic] Word2Vec: Distributed Representations of Words and Phrases and their 
Compositionality", "thumbnail": "https://i.ytimg.com/vi/yexR53My2O4/maxresdefault.jpg"} {"video_id": "yexR53My2O4", "text": "while NCE can be shown to approximately maximize the log probability of the softmax the skip-gram model is only concerned with learning high-quality vector representations so we are free to simplify noise contrastive estimation as long as the vector representations retain their quality we define negative sampling by the following objective so this is very interesting they say okay noise contrastive estimation you know it approximately maximizes the log probability so noise contrastive estimation would actually be the correct", "start_timestamp": "00:15:55", "end_timestamp": "00:16:28", "start_second": 955, "end_second": 988, "url": "https://www.youtube.com/watch?v=yexR53My2O4&t=955s", "title": "[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality", "thumbnail": "https://i.ytimg.com/vi/yexR53My2O4/maxresdefault.jpg"} {"video_id": "yexR53My2O4", "text": "way to approximate their problem however they say well as long as something reasonable comes out we're free to change that up a bit so they go with this negative sampling approach right here and you can see that this is almost the same so it's written a bit differently from the original softmax thing because the original softmax thing was written as a fraction and here it's a sum but what you're trying to do in the negative sampling framework is you're trying to maximize the following", "start_timestamp": "00:16:28", "end_timestamp": "00:17:06", "start_second": 988, "end_second": 1026, "url": "https://www.youtube.com/watch?v=yexR53My2O4&t=988s", "title": "[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality", "thumbnail": "https://i.ytimg.com/vi/yexR53My2O4/maxresdefault.jpg"} {"video_id": "yexR53My2O4", "text": "you're trying to maximize the 
inner product of the word you're considering and the words around them okay so you're still trying to predict the words around you but now instead of having this prediction softmax over all of the classes you only have the softmax over a subset of classes so what you'll do is you sample words from your vocabulary at random you sample K of them and you're simply trying to now minimize the inner product between those words and your word okay so what does that ultimately lead to it ultimately leads to the following you", "start_timestamp": "00:17:06", "end_timestamp": "00:17:53", "start_second": 1026, "end_second": 1073, "url": "https://www.youtube.com/watch?v=yexR53My2O4&t=1026s", "title": "[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality", "thumbnail": "https://i.ytimg.com/vi/yexR53My2O4/maxresdefault.jpg"} {"video_id": "yexR53My2O4", "text": "have a word like this word here 'negative' and what you're trying to do is you're not trying that much to predict the word 'sampling' what you're trying to do is you're trying to say that in my space right here I simply want 'sampling' to be closer than any other word that's not in the context window okay so here is my word 'negative' and here is my word 'sampling' and I want these two to be close and if I sample another word like here this is the word 'cake' if I sample that I simply want that to be further away than", "start_timestamp": "00:17:53", "end_timestamp": "00:18:36", "start_second": 1073, "end_second": 1116, "url": "https://www.youtube.com/watch?v=yexR53My2O4&t=1073s", "title": "[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality", "thumbnail": "https://i.ytimg.com/vi/yexR53My2O4/maxresdefault.jpg"} {"video_id": "yexR53My2O4", "text": "the word 'sampling' okay so this is now comparative it's not I classify 'sampling' as the highest class it's simply I want to classify the word 'sampling' against the other classes 
higher all right so this is now much much easier so instead of a thousand or ten thousand or a million-way classification now maybe I have a k plus one way classification right pretty easy right I simply sample K other words and because I have so many words the chances that I actually sample one that's in my context window are very", "start_timestamp": "00:18:36", "end_timestamp": "00:19:17", "start_second": 1116, "end_second": 1157, "url": "https://www.youtube.com/watch?v=yexR53My2O4&t=1116s", "title": "[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality", "thumbnail": "https://i.ytimg.com/vi/yexR53My2O4/maxresdefault.jpg"} {"video_id": "yexR53My2O4", "text": "small right so I simply sample other words and I say well these other words are random they have nothing to do with the current frame that I'm looking at so they can be whatever they want but at least they should be farther away than the words that are actually in my context and that is negative sampling the process of sampling negatives this right here and then making sure that the positives which are these here in this case the words in the context are classified with a higher probability", "start_timestamp": "00:19:17", "end_timestamp": "00:19:55", "start_second": 1157, "end_second": 1195, "url": "https://www.youtube.com/watch?v=yexR53My2O4&t=1157s", "title": "[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality", "thumbnail": "https://i.ytimg.com/vi/yexR53My2O4/maxresdefault.jpg"} {"video_id": "yexR53My2O4", "text": "than the negatives for a given input right this here is the input word that's it that's negative sampling and of course yeah as I said you recognize this from current things like self-supervised learning where you wanna have the same image augmented twice go through the pipeline you know you augment you put a little bit of different noise and then you 
have a different image and at the end you say these two should be close together while this other one should be far apart it's the exact same thing here except that", "start_timestamp": "00:19:55", "end_timestamp": "00:20:34", "start_second": 1195, "end_second": 1234, "url": "https://www.youtube.com/watch?v=yexR53My2O4&t=1195s", "title": "[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality", "thumbnail": "https://i.ytimg.com/vi/yexR53My2O4/maxresdefault.jpg"} {"video_id": "yexR53My2O4", "text": "you have a different way of obtaining the positive and the negative samples in this case positive samples are everything that's in the context negative samples are just randomly sampled from the data set and that you know works of course that works much much much faster and you can see that this turns out to give you vectors that are pretty good and you can train with higher dimensional vectors you can train with bigger vocabularies with this this has turned out to be very very influential", "start_timestamp": "00:20:34", "end_timestamp": "00:21:12", "start_second": 1234, "end_second": 1272, "url": "https://www.youtube.com/watch?v=yexR53My2O4&t=1234s", "title": "[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality", "thumbnail": "https://i.ytimg.com/vi/yexR53My2O4/maxresdefault.jpg"} {"video_id": "yexR53My2O4", "text": "as I said now with the rise of BERT and so on word2vec is kind of getting forgotten but this was a revolution in distributed vectors so it wasn't the thing really it kind of was a thing before that but it wasn't really a thing that people used what people would do is still they would do n-gram models before that so they would sort of chunk up their sentences into n-grams into overlapping n-grams and then have a big giant table where they index their n-grams so the word I don't know so the word", 
"start_timestamp": "00:21:12", "end_timestamp": "00:21:52", "start_second": 1272, "end_second": 1312, "url": "https://www.youtube.com/watch?v=yexR53My2O4&t=1272s", "title": "[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality", "thumbnail": "https://i.ytimg.com/vi/yexR53My2O4/maxresdefault.jpg"} {"video_id": "yexR53My2O4", "text": "hello is ID 1 the word hello there is ID 2 and so on so you have a big table for all the n-grams and then what you would try to do is this kind of bag-of-words estimation where you would take you know whatever n-grams appeared in the sentence and you would have this big you know classification where you'd associate the n-grams with each other and so on so distributed word representations were kind of a revolution at that point especially distributed representations that actually outperformed these old n-gram methods so", "start_timestamp": "00:21:52", "end_timestamp": "00:22:33", "start_second": 1312, "end_second": 1353, "url": "https://www.youtube.com/watch?v=yexR53My2O4&t=1312s", "title": "[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality", "thumbnail": "https://i.ytimg.com/vi/yexR53My2O4/maxresdefault.jpg"} {"video_id": "yexR53My2O4", "text": "there are a number of tricks right here that are I think not understood until this day for example the question is how do you sample these negative samples right here this basically says get K words from your vocabulary at random according to this distribution right here now how are you going to do that basically you have a spectrum of options the one side of the spectrum is going to be completely uniform okay we sample each word with the same probability and the other side of the spectrum is something like sample this according to", "start_timestamp": "00:22:33", "end_timestamp": "00:23:14", "start_second": 1353, "end_second": 1394, "url": 
"https://www.youtube.com/watch?v=yexR53My2O4&t=1353s", "title": "[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality", "thumbnail": "https://i.ytimg.com/vi/yexR53My2O4/maxresdefault.jpg"} {"video_id": "yexR53My2O4", "text": "their unique gram these are two different things they're they're opposites in this in this fashion so here you say hey some words appear way way way more often than other words shouldn't we prefer them when we sample right shouldn't we if we have a corpus and shouldn't we sample from the corpus and if in the corpus one word appears 50 times more than the other word then shouldn't we sample that 50 times more as a negative because it's you know so abundant and it should read get a higher classification accuracy", "start_timestamp": "00:23:14", "end_timestamp": "00:23:50", "start_second": 1394, "end_second": 1430, "url": "https://www.youtube.com/watch?v=yexR53My2O4&t=1394s", "title": "[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality", "thumbnail": "https://i.ytimg.com/vi/yexR53My2O4/maxresdefault.jpg"} {"video_id": "yexR53My2O4", "text": "whereas on the other hand you could say no no we should simply sample every word in our dictionary uniformly they came up with something in between which they say both NC and negative sampling have noise distribution as a free parameter we investigated a number of choices and found that the unigram distribution raised to the 3/4 power ie unigram to death recorder outperformed significantly the unigram and uniform distributions for both NC and neg on every task which including language modeling this I think is a mystery until today and it actually", "start_timestamp": "00:23:50", "end_timestamp": "00:24:32", "start_second": 1430, "end_second": 1472, "url": "https://www.youtube.com/watch?v=yexR53My2O4&t=1430s", "title": "[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality", 
"thumbnail": "https://i.ytimg.com/vi/yexR53My2O4/maxresdefault.jpg"} {"video_id": "yexR53My2O4", "text": "turned out that this exponent right here is magically much better than like the exponent of one or even the exponent of one half like you might be reasonably assumed that the square root you know might be something but the 3/4 I think turned out to be very good and very mystical so what does it what does it mean it means that you have kind of a balance between words that appear often in words that don't appear often usually in these kind of things you have a power law where we have very few words that appear very often and then you have okay", "start_timestamp": "00:24:32", "end_timestamp": "00:25:07", "start_second": 1472, "end_second": 1507, "url": "https://www.youtube.com/watch?v=yexR53My2O4&t=1472s", "title": "[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality", "thumbnail": "https://i.ytimg.com/vi/yexR53My2O4/maxresdefault.jpg"} {"video_id": "yexR53My2O4", "text": "that's the tail shouldn't go up but you have a very long tail of words right and what you want to do is in this case you want to sample these words here more but they are they appear so much more often than if you simply sample according to their unigram distribution you basically not regard these words right here you'll forget about them and your performance will suffer because they do appear every now and then so what you want to do is you want to push the dose down a little bit and the optimal amount for the", "start_timestamp": "00:25:07", "end_timestamp": "00:25:38", "start_second": 1507, "end_second": 1538, "url": "https://www.youtube.com/watch?v=yexR53My2O4&t=1507s", "title": "[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality", "thumbnail": "https://i.ytimg.com/vi/yexR53My2O4/maxresdefault.jpg"} {"video_id": "yexR53My2O4", "text": "little bit turns out to be to raise it the you raise it to the 3/4 
strange but you know turned out to work well the other thing they do is a subsampling of frequent words so again this is a way to kind of push down the often appearing words where they say the most frequent words can easily occur hundreds of millions of times like in, the, and a such words usually provide less information value than the rare words for example while the Skip-gram model benefits from observing the co-occurrences of France and Paris it", "start_timestamp": "00:25:38", "end_timestamp": "00:26:19", "start_second": 1538, "end_second": 1579, "url": "https://www.youtube.com/watch?v=yexR53My2O4&t=1538s", "title": "[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality", "thumbnail": "https://i.ytimg.com/vi/yexR53My2O4/maxresdefault.jpg"} {"video_id": "yexR53My2O4", "text": "benefits much less from observing the frequent co-occurrences of France and the, as nearly every word co-occurs frequently within a sentence with the so they do another trick here to counter this imbalance between rare and frequent words we use a simple subsampling approach each word in the training set is discarded with probability computed by that formula right here and you might be asking again why this formula so this is the sampling probability of a word and it goes with T over F where T is a", "start_timestamp": "00:26:19", "end_timestamp": "00:26:58", "start_second": 1579, "end_second": 1618, "url": "https://www.youtube.com/watch?v=yexR53My2O4&t=1579s", "title": "[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality", "thumbnail": "https://i.ytimg.com/vi/yexR53My2O4/maxresdefault.jpg"} {"video_id": "yexR53My2O4", "text": "temperature parameter and F is the frequency with which the word appears in the corpus so as you can see this is the frequency and as the word appears more in the corpus then this
thing goes up so it's discarded with this probability so it's discarded with a higher probability if it appears more often where F is the frequency of the word and T is a chosen threshold we chose this subsampling formula because it aggressively subsamples words whose", "start_timestamp": "00:26:58", "end_timestamp": "00:27:37", "start_second": 1618, "end_second": 1657, "url": "https://www.youtube.com/watch?v=yexR53My2O4&t=1618s", "title": "[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality", "thumbnail": "https://i.ytimg.com/vi/yexR53My2O4/maxresdefault.jpg"} {"video_id": "yexR53My2O4", "text": "frequency is greater than T while preserving the ranking of the frequencies although this subsampling formula was chosen heuristically we found it to work well in practice it accelerates learning and even significantly improves the accuracy of the learned vectors of the rare words as will be shown in the following sections so again something sort of arbitrary it's more understandable than the 3/4 but still it's sort of arbitrary they experimented around they found this works well and then everybody", "start_timestamp": "00:27:37", "end_timestamp": "00:28:07", "start_second": 1657, "end_second": 1687, "url": "https://www.youtube.com/watch?v=yexR53My2O4&t=1657s", "title": "[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality", "thumbnail": "https://i.ytimg.com/vi/yexR53My2O4/maxresdefault.jpg"} {"video_id": "yexR53My2O4", "text": "ended up you know using that so that's how this kind of stuff happens ok so now we get into the empirical results and the empirical results in this case were already sort of given in the previous paper but here they have the analogical reasoning task where you can see that the negative sampling did outperform the others by quite a bit right here so the negative sampling approaches outperformed the hierarchical softmax and the noise
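The subsampling rule quoted above, P(w) = 1 - sqrt(T / f(w)), is easy to write down; the threshold value used below is the commonly cited 1e-5 and is only illustrative:

```python
import math

def discard_probability(freq, t=1e-5):
    """Probability of dropping one occurrence of a word whose corpus
    frequency is `freq`: 1 - sqrt(t / f), clamped at 0 so words rarer
    than the threshold t are never discarded."""
    return max(0.0, 1.0 - math.sqrt(t / freq))

# Frequent words are aggressively dropped, rare ones are always kept:
p_frequent = discard_probability(0.05)   # a very frequent word like "the"
p_rare = discard_probability(1e-6)       # rarer than the threshold -> 0.0
```

Note how the formula preserves the ranking of frequencies: a more frequent word always has a higher discard probability, which is exactly the "aggressive but order-preserving" behavior the paper describes.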
contrastive estimation and in the previous paper they also compared with other baselines and saw that it", "start_timestamp": "00:28:07", "end_timestamp": "00:28:49", "start_second": 1687, "end_second": 1729, "url": "https://www.youtube.com/watch?v=yexR53My2O4&t=1687s", "title": "[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality", "thumbnail": "https://i.ytimg.com/vi/yexR53My2O4/maxresdefault.jpg"} {"video_id": "yexR53My2O4", "text": "also outperforms those while being time-efficient so you can see that especially with these subsampling approaches the time here there's 36 minutes and again I think they have like a huge corpus that they train on this word2vec code turned out to be really really efficient code and that's why it got so popular as well they did the same thing for phrases right here so for phrases like New York Times and so on but this was more of a side thing the phrase vectors turned out to be you", "start_timestamp": "00:28:49", "end_timestamp": "00:29:38", "start_second": 1729, "end_second": 1778, "url": "https://www.youtube.com/watch?v=yexR53My2O4&t=1729s", "title": "[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality", "thumbnail": "https://i.ytimg.com/vi/yexR53My2O4/maxresdefault.jpg"} {"video_id": "yexR53My2O4", "text": "know rather a side thing from the actual code right here so yeah as I said this paper is very different from other research papers in that it's sort of half an engineering paper and all of these papers are kinda hard to read because they just kind of state some things and the order is kind of weird sometimes why they do things is kind of weird sometimes but you can't deny that it had quite the effect on the community and now this is a very cool paper a very cool series of papers and it's very cool", "start_timestamp": "00:29:38", "end_timestamp":
"00:30:21", "start_second": 1778, "end_second": 1821, "url": "https://www.youtube.com/watch?v=yexR53My2O4&t=1778s", "title": "[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality", "thumbnail": "https://i.ytimg.com/vi/yexR53My2O4/maxresdefault.jpg"} {"video_id": "yexR53My2O4", "text": "that actually they release the code and they made the code such that it is super duper efficient even like on a single machine and that was very cool because you know being Google they could have just released code that is very efficient on a distributed data center and they didn't do that so that this is it's sort of not really like today anymore or like today when they release code it's always you need you need like 50 cloud TP use to do it and it's still cool that they release code but this was this was really a step", "start_timestamp": "00:30:21", "end_timestamp": "00:31:01", "start_second": 1821, "end_second": 1861, "url": "https://www.youtube.com/watch?v=yexR53My2O4&t=1821s", "title": "[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality", "thumbnail": "https://i.ytimg.com/vi/yexR53My2O4/maxresdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "Suat CS 287 fall 2019 sorry to miss you last week you covered on Tuesday contacting variant optimization with Igor and Thursday I believe you covered motion planning with one of the TAS Harry there was a couple of slides that were uncovered at the end of last lecture I'm gonna let you study those on your own essentially there's a topic of lqr trees it's not gonna come in the homework but it's quite interesting it's a way to combine motion planning with lqr to have a more efficient motion planner effectively because motion", "start_timestamp": "00:00:00", "end_timestamp": "00:00:38", "start_second": 0, "end_second": 38, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=0s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- 
CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "planning tends to be very local wind sampling based lqr is a not that local it has a basin of Attraction and so combining both of those into lqr trees is pretty interesting and then the other thing you didn't cover was shortcutting typically when you use a sampling based motion planner you will at the end of a path that's very jagged short cutting is the idea that you just check if any of the steps along the way you can just skip because it's just a little bit of a detour and use a straight line instead of course depends on the dynamics of", "start_timestamp": "00:00:38", "end_timestamp": "00:01:13", "start_second": 38, "end_second": 73, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=38s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "your system if your system has peculiar dynamics you cannot just do straight lines but if your system is able to follow any path except when there are obstacles thing you can do straight lines if there's no obstacle so that's shortcutting is a little more detail in the slides and often actually you also run non-linear optimization for control actually at the end to smooth out your path and get a locally optimal path so those are the highlights of what you didn't cover but we're gonna not cover in more detail but the slides have the", "start_timestamp": "00:01:13", "end_timestamp": "00:01:41", "start_second": 73, "end_second": 101, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=73s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "detail so they will switch to sometimes the second part 
of the course the first part of the course was all about finding optimal actions how do we given a dynamical system some environment really find a sequence of actions or a policy that optimizes expected reward now we're going to look at the complementary part which is trying to make sense of our sensor readings such that we actually even know what the state of the environment might be and then act against that and we'll need a lot more probability here so we'll start with a", "start_timestamp": "00:01:41", "end_timestamp": "00:02:13", "start_second": 101, "end_second": 133, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=101s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "bit of probability review then we'll look at Bayes filters which are going to be the core of what we'll be doing and we'll look at Bayes filters today in the simplest setting which is just discrete state discrete observation and then we'll look at Gaussians which will allow us next lecture to do it in continuous state spaces and in many ways similar to how LQR gives us local solutions with Gaussians for nonlinear systems we'll be able to find local approximations to the probability distributions all right", "start_timestamp": "00:02:13", "end_timestamp": "00:02:49", "start_second": 133, "end_second": 169, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=133s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "any questions all right then let's get started so why might we care about probability in robotics well often the state of the robot and the state of its environment are unknown and only noisy sensors are available so we don't have a sensor that just says here is the state 
instead you measure something else probability provides a framework to fuse that sensor information into a reasonable estimate of what the state might be and so the result of what we would compute would be a distribution over possible states that the environment and robot", "start_timestamp": "00:02:49", "end_timestamp": "00:03:33", "start_second": 169, "end_second": 213, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=169s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "could be in there's another reason we care about probability which is something we've already covered a little bit which is that the dynamics are often stochastic so we can't optimize for a particular outcome but only optimize to obtain a good distribution over outcomes and again probability provides us a framework to deal with that we've done that in simple settings so far but we'll expand that later and we'll actually bring both of those together in a future lecture where we'll look at the notion that the actions you take could", "start_timestamp": "00:03:33", "end_timestamp": "00:04:01", "start_second": 213, "end_second": 241, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=213s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "be taken to reduce uncertainty about the world so you actively go seek out sensor information that could help you understand better what the state of the world is and then collect more reward formally these are called POMDPs partially observable Markov decision processes and that will be somewhat the culmination of bringing together what we're covering in the next few lectures and what we've covered in everything so far let's look at an example a helicopter what 
would be the state position of the helicopter orientation velocity", "start_timestamp": "00:04:01", "end_timestamp": "00:04:33", "start_second": 241, "end_second": 273, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=241s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "angular rate what would be the sensors you have you don't have direct access to position orientation velocity angular rate you have access to some of those in a noisy way and then others you don't have access to at all so GPS gives you a noisy estimate of position sometimes also velocity typically only up to a couple meters accuracy so not super precise but it's a noisy estimate of position typically you put inertial sensing on your robot so for a helicopter maybe you have a three-axis gyro a gyro is an angular rate sensor so it", "start_timestamp": "00:04:33", "end_timestamp": "00:05:03", "start_second": 273, "end_second": 303, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=273s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "gives you three numbers at any given time measuring the angular velocity around the three main axes of the helicopter if you mounted it axis-aligned then a three-axis accelerometer accelerometers measure acceleration but not exact acceleration it's a little trickier than that it actually measures all zeros in freefall so if you're free falling an accelerometer measures all zeros and anything that you do that's not free-falling for example if you're standing on the ground your accelerometer will measure 9.81 m/s^2", "start_timestamp": "00:05:03", "end_timestamp": "00:05:39", "start_second": 303, "end_second": 339, "url": 
"https://www.youtube.com/watch?v=xamzdNUN1o0&t=303s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "opposite to essentially gravity because you're resisting gravity and that's what it's measuring at that time so in some sense accelerometer is not not just an in fact much more accelerometer is about measuring orientation very often than it is about measuring your actual acceleration because any resistance of gravity you'll be able to measure and from that get in some sense especially if we're on the ground understand where gravity is pointing which gives you a lot of information about your orientation then three axis magnetometer", "start_timestamp": "00:05:39", "end_timestamp": "00:06:11", "start_second": 339, "end_second": 371, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=339s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "which well what is it magnetometer it measures the earth magnetic field so earth magnetic field you might think of it as North but it's actually if you measure it in 3d it's not really north it's roughly north but it actually also points into Earth but it's known which direction at point since you are measuring in the frame of your sensor where is the magnetic field pointing and that gives you information about the orientation of the system that you have some tricky things that of course if you have something that generates its own", "start_timestamp": "00:06:11", "end_timestamp": "00:06:41", "start_second": 371, "end_second": 401, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=371s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": 
"https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "magnetic field like maybe have some magnets on your system where you have some high currents that induce magnetic fields then it might perturb those measurements and not really get the earth magnetic field but something else are you flying near a power line or something it might perturb the measurements you're getting dynamics there is noise from the wind there's unball dynamics and engine in the servo motors and in the blades of the helicopter so overall we don't really have access to state directly but we have things that relate pretty", "start_timestamp": "00:06:41", "end_timestamp": "00:07:12", "start_second": 401, "end_second": 432, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=401s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "closely to state and we should be able to distill from that a likely estimate of the state or a distribution around some mode maybe of what the state might be how about a mobile robot inside a building the state could be position and heading is let's say a slow robot is just slowly moving you don't care about velocity just the position in the direction it's facing then sense source odometry which is sensing motion of the actuators for example wheel encoders so on your wheels you might measure how much your wheel has turned if you didn't", "start_timestamp": "00:07:12", "end_timestamp": "00:07:45", "start_second": 432, "end_second": 465, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=432s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "know the diameter of your wheel you know how much you have moved well that's approximately 
true because the wheel might have slipped there might have been bumps in the road where if it's not flat you don't move as far forward as you thought you would have because you really have been going down and back up but it gives you some kind of measurement of how much you've moved then often people put a laser rangefinder on a mobile robot why is that it sends out a laser beam and then it measures how long it takes for that", "start_timestamp": "00:07:45", "end_timestamp": "00:08:11", "start_second": 465, "end_second": 491, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=465s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "beam to come back and that gives you a measurement of twice the distance to the obstacle in that direction and so it's a nice way to directly measure in 3d it's a little bit problematic at times because sometimes you have mirrors and then the beam doesn't come back it reflects off in another way or you might have glass that acts much like a mirror and the same thing will happen you don't get the measurements back but if you have nice matte surfaces it's a really nice way to measure how far away things are dynamics is noisy because of wheel", "start_timestamp": "00:08:11", "end_timestamp": "00:08:40", "start_second": 491, "end_second": 520, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=491s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "slippage you might say my wheel turned this much but what if it slipped then you don't know what your next state is going to be if you don't know how much it slipped and there could be unmodeled variations in the floor that affect essentially where you end up all right so those are two 
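The time-of-flight measurement just described converts to a range with one line; the constant is the speed of light and the example pulse time is made up for illustration:

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def range_from_time_of_flight(round_trip_seconds):
    """The laser pulse travels to the obstacle and back, so the measured
    time corresponds to twice the distance; halve the path length to get
    the one-way range to the obstacle."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse that returns after 200 nanoseconds hit something ~30 m away.
d = range_from_time_of_flight(200e-9)
```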
examples just to motivate why we might want to deal with probability distributions yeah if this is what we're faced with for the helicopter we're not going to have a deterministic notion of this is the state we're just", "start_timestamp": "00:08:40", "end_timestamp": "00:09:08", "start_second": 520, "end_second": 548, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=520s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "going to have a bunch of sensor information and the best result will be a distribution over possible states okay so today what we're going to do is a probability review hopefully that's a review if the probability review section is not review for you you should go study to make sure that it feels like review as soon as possible then we'll do Bayes filters where we will look at the foundation of a lot of what we'll cover in the discrete setting and then we'll start looking at Gaussians which will form the foundation", "start_timestamp": "00:09:08", "end_timestamp": "00:09:42", "start_second": 548, "end_second": 582, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=548s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "for what we'll do next lecture to do it in continuous space all right so probability theory like a lot of math starts with some axioms and everything follows from that so the probability theory axioms here are that you have some outcome A and there could be many possible outcomes for an outcome A the probability of the outcome is assigned a number between 0 and 1 maybe you know I don't know the probability of being in the correct room for lecture has some probability assigned 
to it then the probability of the union of all possible", "start_timestamp": "00:09:42", "end_timestamp": "00:10:17", "start_second": 582, "end_second": 617, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=582s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "outcomes Omega is essentially everything that could possibly happen the probability that something in that set happened should be 1 otherwise that's not the proper Omega that captures everything and that also means nothing can have a number assigned higher than 1 the probabilities don't go above 1 and then the probability of the empty set well the empty set never contains the outcome that happened whatever happened is something it's not in the empty set and so the probability of that is", "start_timestamp": "00:10:17", "end_timestamp": "00:10:47", "start_second": 617, "end_second": 647, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=617s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "0 and then the main thing here is essentially how you kind of keep track of probability the probability of the union of two possible outcomes A and B is the probability of A plus the probability of B but it's possible that A and B happen at the same time and then you're double counting so you subtract back out the probability that A and B happened at the same time which is A intersection B and that gives you the probability of the union pictorially it looks like this you have your Omega space which has all possible events in it then there are", "start_timestamp": "00:10:47", "end_timestamp": "00:11:23", "start_second": 647, "end_second": 683, "url": 
"https://www.youtube.com/watch?v=xamzdNUN1o0&t=647s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "outcomes that make a true there is outcomes that make be true and then there's outcomes to make a and B through the intersection and the probability of a union B is probability of a plus probability of B minus probability of the intersection one abstract or abstract or concrete depending on your mindset way to think of it is that everything that can possibly happen is a point in this rectangle pictorially that's like every possible state the world could be in is a point in that rectangle I mean we might never know the", "start_timestamp": "00:11:23", "end_timestamp": "00:11:56", "start_second": 683, "end_second": 716, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=683s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "exact details of it but every possible state is a point in that rectangle and then a is a property of the state of the world that holds true for all the ones that line that blue circle the bees have property of the state that holds true for all the states of the world in the yellow circle and then a intersection B is a property of the world that holds true for all the points that are in the intersection and so you can think of a and B just as properties or descriptions of you know abstractions of the world of", "start_timestamp": "00:11:56", "end_timestamp": "00:12:29", "start_second": 716, "end_second": 749, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=716s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} 
{"video_id": "xamzdNUN1o0", "text": "the state whereas individual points in that rectangle are in some sense the full state of the world which are probably never observed will never really talk about because we don't care about all the details we'll talk about a and B but in terms of how it's all set up that's an easy way to think of it then when you use these actions you can come to new conclusions I mean that that's kind of how math works you posit some actions in the you do some derivations and you have a new thing that's useful to use so for example", "start_timestamp": "00:12:29", "end_timestamp": "00:12:57", "start_second": 749, "end_second": 777, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=749s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "probability of a union on Vega minus a well on the Left we just look at what's in the parentheses is well a union with omega minus 8 that's omega so the probability of Omega is 1 that's what we derived up to here on the right-hand side we use that thing that's a probability of Union is promotive each individual event- probability of the intersection that's this we've worked that out then we know that this probability is zero and now we have the probability of a plus probability of the complement of a has to be equal to one and so that's the", "start_timestamp": "00:12:57", "end_timestamp": "00:13:35", "start_second": 777, "end_second": 815, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=777s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "kind of properties you would derive all right so there are discrete random variables and continuous random variables so here we have Omega again the rectangle and 
again think of every point in the rectangle as representing one possible state of the entire world and then X is our random variable we work with X can take on four values if the event in indicates the world is in a state that's in that leftmost rectangle we say x equals x1 if it's in that triangle above it's x equals x2 and so forth and so there's only four values X", "start_timestamp": "00:13:35", "end_timestamp": "00:14:15", "start_second": 815, "end_second": 855, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=815s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "can take on the probability associated with X equal X 1 we can think of like essentially the mass associated with that region in the rectangle and we call P a probability mass function a simpler example than the abstract rectangle there would be a coin flip heads and tails probability half for each that's another example of a distribution again you can think of as in the context of that rectangle though you can say I have a rectangle I split it in half there is heads and tails and you know have the states of the entire world my coin came", "start_timestamp": "00:14:15", "end_timestamp": "00:14:49", "start_second": 855, "end_second": 889, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=855s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "out and heads have the states of the entire world my cone came out tails we're ignoring everything else about the world we're just looking at heads versus tails and not the details of everything else in the world continuous random variables X takes on a value and some continuum you cannot now associate the probability with X taking on a specific value 
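The discrete picture described here can be sketched in a few lines of Python; the four values, their masses, and the event are invented numbers for illustration:

```python
# A discrete random variable is described by a probability mass function (pmf):
# each value X can take on gets a mass, and the masses sum to 1.
pmf_x = {"x1": 0.3, "x2": 0.2, "x3": 0.4, "x4": 0.1}  # four regions of the rectangle
assert abs(sum(pmf_x.values()) - 1.0) < 1e-12

# The coin flip is the simplest pmf: the rectangle split in half.
pmf_coin = {"heads": 0.5, "tails": 0.5}

# The probability of an event is the total mass of the regions it covers,
# e.g. the event "X is x1 or x2".
p_event = pmf_x["x1"] + pmf_x["x2"]
assert abs(p_event - 0.5) < 1e-12
```

For a continuous random variable the dictionary would be replaced by a density function and the sums by integrals over intervals, exactly as the lecture goes on to say.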
because if it's infinitely many things it's a continuum if you want to assign finite probability to each it'll sum up to something higher than 1 which we can't have it needs to sum", "start_timestamp": "00:14:49", "end_timestamp": "00:15:19", "start_second": 889, "end_second": 919, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=889s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "to something to 1 so we can't sum anymore you have to integrate now so what we'll do is this a probability density function is defined saying that if you look at mass under the curve so the probability for X being in an interval A to B is the integral from A to B of P of X DX for example if we think from zero here maybe till this point we could take the probability mass and that's then the area under the curve would be denoting how much probability we associate with X landing in that interval so again we can't really talk about X taking on a", "start_timestamp": "00:15:19", "end_timestamp": "00:15:59", "start_second": 919, "end_second": 959, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=919s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "specific value here the best we can do is say probability of X lying in an interval you could say the probability of X taking on a specific value is zero but then to have something meaningful you take intervals and assign probabilities to the intervals any questions so far all right then the most important thing and what we'll be working on is to look at distributions that involve multiple variables so why is that well the reason is as already talked about we'll have things like robot state that", "start_timestamp": "00:15:59", "end_timestamp": "00:16:39", "start_second": 959, "end_second": 999, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=959s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "we care about and sensory measurements so I want to somehow talk about the joint distribution between the two so if we measure one of them we can say something about the other one or we'll deal with dynamics of the world and so we want to relate the distribution now with the distribution at the next time and so both of these involve joint distributions over two random variables rather than just looking at a single variable all right so the joint distribution over x and y is denoted this way then in simple scenarios", "start_timestamp": "00:16:39", "end_timestamp": "00:17:21", "start_second": 999, "end_second": 1041, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=999s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "simple scenarios X and Y might be independent meaning the probability of x and y taking on specific values small x small y is the probability of X taking on value x times the probability of Y taking on value y if that's the case then knowing X does not tell you anything about Y or the other way around so it's actually not such an interesting case mathematically it simplifies things but yeah if you had a sensor where your sensor is independent of the state of your system the sensor cannot inform you about the system sometimes it's a property that holds true", "start_timestamp": "00:17:21", "end_timestamp": "00:17:53", "start_second": 1041, "end_second": 1073, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=1041s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians --
CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "though and then it's good to know and good to take advantage of mathematically but you don't want to build your systems that way necessarily then X given Y is the probability of X taking on value small x given Y has value small y so what we have here is that as a definition we write it as X given Y is the probability of x and y divided by probability of Y so think again about that rectangle there's a region where Y takes on the value small y and that has a certain surface and that's P Y and we look at the region", "start_timestamp": "00:17:53", "end_timestamp": "00:18:32", "start_second": 1073, "end_second": 1112, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=1073s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "where X takes on value small x within that region that's P X comma Y and then the fraction of the surface it takes up that's the conditional we can also rewrite this as the probability of X comma Y is the probability of Y times probability of x given Y this will become very useful in many many situations because let's say we know the distribution for Y and want to know something about X which might be the distribution at the next time slice and if we know something about X given Y this may be the dynamics of the system to go from", "start_timestamp": "00:18:32", "end_timestamp": "00:19:06", "start_second": 1112, "end_second": 1146, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=1112s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "Y to the next time X then this equation will tell something about the joint and hence tell us something about X then if x and y are independent then the conditional of X given Y is the same as the marginal for X so P of X we'll call the marginal for X and so that means that sometimes in our math we'll be able to simplify things if you have an assumption X independent of Y and we see X given Y appear we can just get rid of the Y and simplify to just P X all of this is also true for probability densities it's just in densities often it's a small P but", "start_timestamp": "00:19:06", "end_timestamp": "00:19:58", "start_second": 1146, "end_second": 1198, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=1146s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "other than that you can write the exact same math and it will say something about the densities instead of the actual probability mass now let's work through the equations we'll be using the most all right so there are essentially two equations there are two equations we'll be using a lot and so let's explicitly step through them and see what they are one is called the law of total probability what does it say it says that and we'll write it for the discrete case the probability distribution for X is equal to so probability x equals x is sum over all", "start_timestamp": "00:19:58", "end_timestamp": "00:21:19", "start_second": 1198, "end_second": 1279, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=1198s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "values Y can take on of P X comma Y so the reason that will be so important is because often we'll want to get rid of some variables and move to another variable and the way we're going to be doing that is by constructing a joint distribution over those two variables somehow and then summing out the one we don't want anymore more specifically the way this will typically be used is here we'll say well P X is equal to sum over y and imagine we already had access to the distribution for y and we need to construct a joint because you want to", "start_timestamp": "00:21:19", "end_timestamp": "00:21:59", "start_second": 1279, "end_second": 1319, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=1279s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "bring in X now we might have a model for how X is related to Y so you'll have P X given Y and so this equation here is one that we'll find ourselves using a lot one of the two equations we'll be using over and over same thing can of course be done for densities in densities this summation would become an integral because you have to integrate over all continuous values the variable can take on whereas with discrete variables you just sum otherwise same thing though in general whenever there's a summation you can put an integral same", "start_timestamp": "00:21:59", "end_timestamp": "00:22:33", "start_second": 1319, "end_second": 1353, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=1319s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "math we'll go through so that's one of the things we'll use a lot the other thing we'll use a lot is Bayes rule Bayes rule hopefully sounds familiar how do you derive it we actually derive it from looking at the expression for the joint P X comma Y is P X given Y times P Y which is also because the roles of X and y are arbitrary it doesn't matter which one comes first or second in this equation here
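The recipe just described, build the joint with p(x, y) = p(x | y) p(y) and then sum out y, can be sketched directly; the distribution over y and the conditional table are made-up numbers for illustration:

```python
# Law of total probability: p(x) = sum over y of p(x | y) * p(y).
p_y = {"y1": 0.4, "y2": 0.6}                      # known distribution over y
p_x_given_y = {                                    # model relating x to y
    "y1": {"x1": 0.9, "x2": 0.1},
    "y2": {"x1": 0.2, "x2": 0.8},
}

p_x = {}
for y, py in p_y.items():
    for x, pxy in p_x_given_y[y].items():
        # p(x, y) = p(x | y) p(y), accumulated over y -> summing out y
        p_x[x] = p_x.get(x, 0.0) + pxy * py

# p(x1) = 0.9 * 0.4 + 0.2 * 0.6 = 0.48 and p(x2) = 0.52
assert abs(sum(p_x.values()) - 1.0) < 1e-9
```

The integral version for densities has exactly the same shape, with the loop over y replaced by an integral.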
we can swap the roles and we have P Y given X times P X from that using just this part over here we can write P X given Y as P Y", "start_timestamp": "00:22:33", "end_timestamp": "00:23:32", "start_second": 1353, "end_second": 1412, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=1353s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "given X times P X over P Y let's interpret this equation for a moment so what are we doing here we're interested in finding the distribution over X given Y so we can associate a story with this imagine X is the state of the system and Y is a sensor reading we might obtain and so we want to know what's the distribution over possible states of this system given we have a sensor reading Y well it's not easy to come up with a table for that it's not easy to just say oh you know what I'm just gonna build a table for P X given Y because it's", "start_timestamp": "00:23:32", "end_timestamp": "00:24:14", "start_second": 1412, "end_second": 1454, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=1412s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "actually not really how nature works the way nature works is that if there is a state that state causes the readings so this model here is really the causal model that we have available given a state it will cause a distribution over readings and when you build a sensor and you try to sell it to people this is your sensor model that you would provide saying this is the distribution over outcomes given this situation this is the distribution over outcomes given this other situation and so forth for example for a laser range finder you might say well if", "start_timestamp": "00:24:14", "end_timestamp": "00:24:50", "start_second": 1454, "end_second": 1490, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=1454s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "there's a flat surface at a certain distance the reading you'll get back is within maybe two centimeters like maybe with a deviation of 2 centimeters around the proper distance reading something like that so that thing is your calibrated sensor and that you can get a distribution for but what the actual distribution over state given that reading will be depends on the prior over X what you thought ahead of time might be likely states of your system and so they combine together and then of course there is a normalization to make sure things sum", "start_timestamp": "00:24:50", "end_timestamp": "00:25:26", "start_second": 1490, "end_second": 1526, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=1490s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "to one that's all this is and often we'll write this as just 1 over Z times P Y given X times P X or it turns out in the probabilistic robotics book notation they love to use an eta so it's often written as eta times P Y given X times P X so we don't have to worry too much about the thing at the bottom that's just a normalization we just want to understand the thing at the top which brings together the thing for which we are able to build a model which is the relationship from X to Y and how we can then use it assuming we have a prior over X use it to get the", "start_timestamp": "00:25:26", "end_timestamp": "00:26:06", "start_second": 1526, "end_second": 1566, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=1526s", "title": "Lecture 11 Probability Review, Bayes
Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "distribution we really want which is distribution over state any questions about this these two equations are essentially the ones we'll just use everywhere and what we'll do today next lecture and the lecture after that if you fully understand those you should be in really good shape now we can have a new version of this so law of total probability again but with conditioning what does that mean for both of these imagine for the first one here imagine we had already something to condition on maybe there was a measurement in the past that was", "start_timestamp": "00:26:06", "end_timestamp": "00:27:15", "start_second": 1566, "end_second": 1635, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=1566s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "already there like we're really just not going just from Y to X but there's some Z that's already present somewhere so what can we do well we can say okay P X given Z is equal to we write the same equation sum over Y P Y times P X given Y but since we condition on Z we need to everywhere condition on Z and notation wise conditioning on Y and Z is just Y comma Z you don't need to draw another bar and so what we see happen is that the exact same equation can be written with just additional conditioning on Z or any", "start_timestamp": "00:27:15", "end_timestamp": "00:28:11", "start_second": 1635, "end_second": 1691, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=1635s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "number of variables as long as you consistently put them everywhere so you can always add more conditioning and so often that's the version we'll be using we'll have many things we've already observed in the past we want to condition on and then we'll want to apply the equation the law of total probability and we'll just carry along everything so we do our math as if it doesn't exist we just need to consistently carry it along very straightforward got to be careful you can't drop it anywhere same thing is", "start_timestamp": "00:28:11", "end_timestamp": "00:28:44", "start_second": 1691, "end_second": 1724, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=1691s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "true with Bayes rule what if you have multiple things you are conditioning on you already conditioned on something else no problem Bayes rule with conditioning and the reason this shouldn't really be a surprise is you go back to the original picture of probability right there we have this rectangle and the rectangle is really the foundation of everything with a point in the rectangle corresponding to the full world state and then regions correspond to values the random variable takes on if you condition on something", "start_timestamp": "00:28:44", "end_timestamp": "00:29:20", "start_second": 1724, "end_second": 1760, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=1724s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "essentially we're just redefining the rectangle as being only let's say this part that's our new rectangle and that's the only part of the world we're working with possible states you can take on and once we've done that we put all probability on this thing we kind of renormalize it everything's there that's our new rectangle and the math's not going to change because we're just looking at this sub part of the whole rectangle but of course you need to do it consistently every step along the way we need to consistently look at that sub", "start_timestamp": "00:29:20", "end_timestamp": "00:29:47", "start_second": 1760, "end_second": 1787, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=1760s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "part where we have Z equal to little z we need to keep it consistently to that part we can't sometimes have it and sometimes not when we do this but then same thing with Bayes rule yeah initially it was for the full rectangle now we restricted attention to some sub region no problem we just condition everywhere P X given Y comma Z is equal to well what do we have we had Y given X but of course also Z because Z needs to be in the back everywhere now then P X but we need to carry along the Z and then normalize by", "start_timestamp": "00:29:47", "end_timestamp": "00:30:34", "start_second": 1787, "end_second": 1834, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=1787s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "Y again given Z but again the bottom here doesn't really matter much in Bayes rule we usually know that we just need to compute the top part for every possible value of x and once we computed the top part for every value of x we can see what it sums to and whatever it sums to is what goes in the bottom right and so applying Bayes rule is about doing this for all X and then summing to know what the bottom is and so here we see that we can incorporate new evidence we already had evidence we had already had X given Z we had a past observation", "start_timestamp": "00:30:34", "end_timestamp": "00:31:10", "start_second": 1834, "end_second": 1870, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=1834s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "distribution over x given that observation then a new thing came in for Y and we have a model of how Y acts given X and Z in practice often the Z will disappear here because Y will only depend on X not on Z but in general Y can depend on X and Z and then we can combine that together to know what our new distribution is over state X then another concept that will be very important is conditional independence so conditional independence is a lot like independence but a little less strict independence is a property that sometimes holds and it's nice", "start_timestamp": "00:31:10", "end_timestamp": "00:32:03", "start_second": 1870, "end_second": 1923, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=1870s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "because it simplifies the math whenever you run into it but if your variables are all independent then you can't really infer anything from one variable about the other variable and so ultimately you get no interesting results out you need the variables to interact to get something interesting out now it could be that they interact but in limited ways and that's what conditional independence tries to capture so conditional independence we have X conditionally independent of Y given Z if and only if we have P X comma Y given", "start_timestamp": "00:32:03", "end_timestamp": "00:32:42",
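The conditioned Bayes rule procedure the lecture describes — compute the top part p(y | x, z) p(x | z) for every x, then rescale so the table sums to one — can be sketched like this (the prior and sensor model numbers are invented):

```python
# Conditioned Bayes rule: p(x | y, z) is proportional to p(y | x, z) * p(x | z).
p_x_given_z = {"x1": 0.7, "x2": 0.3}    # prior over x, already conditioned on z
p_y_given_xz = {"x1": 0.8, "x2": 0.4}   # sensor model for the observed reading y

# Top part of the fraction, computed for every value of x.
unnormalized = {x: p_y_given_xz[x] * p_x_given_z[x] for x in p_x_given_z}

# Summing the table gives the bottom part, p(y | z); rescale every entry by it.
normalizer = sum(unnormalized.values())
posterior = {x: v / normalizer for x, v in unnormalized.items()}

assert abs(sum(posterior.values()) - 1.0) < 1e-9
```

Note that the same rescaling is applied to every entry, exactly as stressed later in the lecture: the normalizer does not depend on x, so it can be ignored until the end.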
"start_second": 1923, "end_second": 1962, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=1923s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "Z equals P X given Z times P Y given Z so it's like independence the joint between x and y is the product of marginal for X marginal for y exit that it's only true when you condition on Z once you know z x and y are independent but as long as you don't know z they might have a relationship for example z might be something I don't know like Z could be the weather and X might be is somebody carrying an umbrella and Y might be is the is the pavement wet or something and once you know it's raining you you don't need to you already know", "start_timestamp": "00:32:42", "end_timestamp": "00:33:29", "start_second": 1962, "end_second": 2009, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=1962s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "that the rain will cause x and y independently it's not related directly whether pavement is wet and somebody cares umbrella it's through the fact that it rains which is the common cause for both of them and said I'll make x and y independent given see another example and the one that we'll see most often in this class is if you know the state of the world then every sensor for every sense you're reading might be independent very often because if your sensor just acts on the real state of the world then the reading that happens", "start_timestamp": "00:33:29", "end_timestamp": "00:34:01", "start_second": 2009, "end_second": 2041, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=2009s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 
Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "first very happens next will be unrelated and we'll be able to simplify things okay so this is this is kind of really everything we need so we just need three things we need law of total probability the regular version and the conditioned version we need Bayes rule the regular version the conditioned version and we need a notion of conditional independence which will allow us to simplify equations this is also equivalent to writing and also we'll see it that way equivalent to px given Z and Y but once you know Z X is independent of Y being", "start_timestamp": "00:34:01", "end_timestamp": "00:34:46", "start_second": 2041, "end_second": 2086, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=2041s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "equal to px given Z and it's also equivalent to P Y given Z comma X being equal to Y given Z this once you know Z X does not tell you anything about Y now to be fair this condition the dependence assumptions that will be making sometimes in the real world might not exactly be true ever but they'll often be reasonable approximations and we might be willing to make him because they drastically simplify the math and the competition we need to do to then run the algorithm in practice so often often these things are assumptions we", "start_timestamp": "00:34:46", "end_timestamp": "00:35:27", "start_second": 2086, "end_second": 2127, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=2086s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "make to approximate things that are almost independent in the 
real world questions so far yes mm-hmm so typically the way you would do it is this typically what you'd be given is from a previous calculation maybe or just given would be this thing here which is the prior over X of PI over X given Z you'd be given a model of how X and Z in this case are just X in that case cos Y that's your sensor calibration model for example and that together allows you to compute this product for every possible value of X because we're conditioning on Z a", "start_timestamp": "00:35:27", "end_timestamp": "00:36:22", "start_second": 2127, "end_second": 2182, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=2127s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "favorite possible value of X yeah we're computers for every possible value of X because Y will be measured in this scenario Z will be measured so y and z will be numbers available to us x will not be known can take on many values and so we'll compute this for every value of x now after we've computed this for every value of x we'll have a table of all these four values of X and those entries if we ignore the bottom will not sum to one but we know it's supposed to be a distribution over x given y and z so then we know I also know we we", "start_timestamp": "00:36:22", "end_timestamp": "00:36:55", "start_second": 2182, "end_second": 2215, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=2182s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "ignored the thing at the bottom that does not depend on X that's key here there's no X in here so what we're computing is a table with entries that depend on X every X will have its own entry and we know all of them are missing the same 
rescaling which we've ignored but we can find the rescaling by remembering that actually all the entries in the table have to sum to one and so summing all the entries in that table will actually compute this thing over here mathematically we know that's a way to", "start_timestamp": "00:36:55", "end_timestamp": "00:37:24", "start_second": 2215, "end_second": 2244, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=2215s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "compute this in practice we don't often pay attention to what it really means we just say hey we want this we know this thing here does not depend on X so we can ignore it as long as at the end we do a rescaling of all entries with the same scaling you cannot rescale one entry for x1 one way and the other ones another way you need a fixed rescaling across all entries that you computed so the things written on the board we also have on the slides where often we have a pairing between the discrete version where there's a summation and", "start_timestamp": "00:37:24", "end_timestamp": "00:38:21", "start_second": 2244, "end_second": 2301, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=2244s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "continuous version where you just replace the sum sign with a little squiggly thing and then you put a d-something over the variable instead of putting it under so essentially just always replace sum over the variable with this thing here and then a dy and then you got the same thing just for continuous space Bayes rule the terms we didn't name but essentially often the P of X is called the prior because we have it ahead of time Y given X is the likelihood of a measurement Y given the state is x is often what it", "start_timestamp": "00:38:21", "end_timestamp": "00:38:56", "start_second": 2301, "end_second": 2336, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=2301s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "ends up being so it's called likelihood and evidence is yeah that's just Y what you measure it's the probability of getting that evidence this we just talked about when you do these calculations often effectively algorithmically you compute it for all X's you compute that top part in the fraction and after you've done that you know there has to be a rescaling you just compute the rescaling and multiply with that in the probabilistic robotics book notation it's a little bit funny that sometimes they use the eta as the thing", "start_timestamp": "00:38:56", "end_timestamp": "00:39:33", "start_second": 2336, "end_second": 2373, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=2336s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "you multiply with sometimes I think you divide by you just gotta I guess be okay with that it has different meanings in different contexts but ultimately it's just a rescaling and some people like to divide by it and some people like to multiply with one over that thing all the same law of total probability with integrals we saw that Bayes rule with conditioning we saw that we saw conditional independence what it means again it's an assumption we often make that will hopefully be approximately true or maybe even fully true so we can", "start_timestamp": "00:39:33", "end_timestamp": "00:40:06", "start_second": 2373, "end_second": 2406, "url":
"https://www.youtube.com/watch?v=xamzdNUN1o0&t=2373s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "use it to simplify our math here's a simple example so we have a robot trying to see if the doors open or not okay what's the probability of doors open given some measurement of the sensor of the robot it is to illustrate that kind of reasoning behind why we often use Bayes rule probability of being open given some measurement is a diagnostic distribution it's kind we're trying to diagnose what's the state given a measurement but those distributions often we cannot build we cannot deliver a sensor with that distribution because I mean you can't do", "start_timestamp": "00:40:06", "end_timestamp": "00:40:42", "start_second": 2406, "end_second": 2442, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=2406s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "that imagine you deliver the sensor to somebody and they lock the door and nobody has the key anymore then your sensor can't know that because you didn't know that ahead of time what you really can build is the opposite distribution that causal one given a situation what am I going to measure and so that's what's be available to us and that's what we have to use but it's not what we want the reason about the world so we'll use Bayes rule to get what we want which is the opposite thing going from the distribution that we know how to model", "start_timestamp": "00:40:42", "end_timestamp": "00:41:17", "start_second": 2442, "end_second": 2477, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=2442s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", 
"thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "which is the measurement given the state to the distribution we want to take action, which is to know whether the door is open or not or at least a distribution over that for the causal one it's really just a matter of counting frequencies you might have a physical model that makes it easy but often you just kind of just do a bunch of measurements so you see how often does the sensor read this or that as a function of whether the door is open or not and now I have my model let's do a concrete example we have we need to put", "start_timestamp": "00:41:17", "end_timestamp": "00:41:48", "start_second": 2477, "end_second": 2508, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=2477s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "some numbers into it probability of a measurement Z given the door's open is 0.6, Z given the door is not open is 0.3 so we see that when the door is open we are more likely to measure Z as the outcome than when the door is not open so if we measure Z that should increase our probability that the door is open and our prior is 50% well we can apply Bayes rule we measured Z we can look at Bayes rule fill this in do the math as described to find indeed that now after measuring Z which favors the door being open we have more specifically 0.67", "start_timestamp": "00:41:48", "end_timestamp": "00:42:24", "start_second": 2508, "end_second": 2544, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=2508s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "probability that the door is open now you could have another measurement imagine of
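The door numbers from the lecture drop straight into Bayes rule. A minimal sketch in Python (the function name and structure are mine, not from the lecture):

```python
# Bayes rule for the door example:
# P(open | z) = P(z | open) * P(open) / P(z),
# where P(z) is just the sum of the unnormalized terms (the rescaling).

def bayes_update(prior_open, p_z_given_open, p_z_given_closed):
    """Posterior P(open | z) from the prior and the causal sensor model."""
    num_open = p_z_given_open * prior_open          # top of the fraction, state = open
    num_closed = p_z_given_closed * (1.0 - prior_open)
    return num_open / (num_open + num_closed)        # normalize by the evidence P(z)

# Lecture numbers: P(z|open) = 0.6, P(z|not open) = 0.3, prior 50%.
posterior = bayes_update(0.5, 0.6, 0.3)
print(round(posterior, 2))  # 0.67, i.e. 2/3: measuring z favors "open"
```

Note that the normalizer is computed once at the end from the unnormalized entries, exactly the "compute the top part for all X's, then rescale" procedure described above.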
another sensor. We have another sensor z2; how can we integrate that well we can again apply Bayes rule we already know that we can apply Bayes rule again even with conditioning in the background we already conditioned on one measurement we just then condition on the next measurement we actually do this as many times as we want so to condition on many measurements we can do it one at a time and build it up so here we have the", "start_timestamp": "00:42:24", "end_timestamp": "00:43:05", "start_second": 2544, "end_second": 2585, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=2544s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "often an assumption that is applied to simplify the math is that the sensor measurements are all independent given the state of the world because the state of the world is causing what you measure and so if the state of the world is a known entity then each outcome is independent given that state that will simplify the math because then when we look at Z n, the nth sensory measurement, given the state of the world X and all the other sensory measurements we don't need to build a model for Z n given X and the other measurements we just need", "start_timestamp": "00:43:05", "end_timestamp": "00:43:39", "start_second": 2585, "end_second": 2619, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=2585s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "to build a model for Z n given X the state of the world which is exactly the kind of sensory model that we like to build so we get a simplification happening over here we had ZN conditioned on everything that's the standard Bayes rule with conditioning but then we made
an assumption, so there is some loss of generality, but we made an assumption that Z n is independent of the other Z's if we already know the state X and then this here has simplified; then again the bottom we don't worry about there's no X in it we can compute the top part", "start_timestamp": "00:43:39", "end_timestamp": "00:44:15", "start_second": 2619, "end_second": 2655, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=2619s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "for every value of x once we're done we sum it all together to know what the bottom is that will normalize it we can apply this repeatedly that's the idea we just saw that this becomes Z n given X, p of X given all the previous ones there's nothing specific about Z N and we'll see then we get a product over all Z i's given x times prior over X and so now all we need is that sensor model for each sensor given X we multiply probabilities together and we get our posterior distribution okay for the door example we had in this", "start_timestamp": "00:44:15", "end_timestamp": "00:44:52", "start_second": 2655, "end_second": 2692, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=2655s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "case a new sensor is z2 and when we see z2 it actually is more likely to be seen when the door is not open we observe z2 so what do you expect to happen mathematically we had a 2/3 probability of the door being open ahead of time after sensor measurement 1 we observe z2 that probability should drop but let's check we can do the math we just apply Bayes rule fill in the numbers and indeed the probability drops as we expected all right now there is one thing I want to point out
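Under the conditional-independence assumption the posterior is just the prior times the product of the per-measurement likelihoods, renormalized once at the end. A sketch of that product rule; the z2 likelihoods below are illustrative stand-ins, since the lecture only says z2 is more likely when the door is not open:

```python
# Posterior over a discrete state from several conditionally independent
# measurements: bel(x) is proportional to prior(x) * product_i P(z_i | x).

def posterior_from_measurements(prior, likelihoods):
    """prior: dict state -> P(x); likelihoods: list of dicts state -> P(z_i | x)."""
    unnorm = {}
    for x, p in prior.items():
        for lik in likelihoods:
            p *= lik[x]          # multiply each sensor model in
        unnorm[x] = p
    z = sum(unnorm.values())     # normalizer, computed once at the end
    return {x: p / z for x, p in unnorm.items()}

prior = {"open": 0.5, "closed": 0.5}
z1 = {"open": 0.6, "closed": 0.3}   # the lecture's first sensor
# Illustrative numbers for z2 (not restated in the transcript): z2 favors
# "closed", so incorporating it should pull the open-probability down.
z2 = {"open": 0.5, "closed": 0.6}

after_z1 = posterior_from_measurements(prior, [z1])          # open: 2/3
after_both = posterior_from_measurements(prior, [z1, z2])    # open drops to 0.625
```

Because the updates commute, incorporating z1 then z2 one at a time gives the same answer as multiplying both likelihoods in at once, which is why the one-measurement-at-a-time recursion described above works.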
here is that a typical pitfall is that real-world", "start_timestamp": "00:44:52", "end_timestamp": "00:45:36", "start_second": 2692, "end_second": 2736, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=2692s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "sensors are not always super independent so you might have a sensor and you might say okay maybe I have a laser rangefinder and it reads a reading but then maybe send out two beams next to each other but those two beings might encounter the same kind of noise or the same kind of interference in their paths and so those two measurements might not be independent and the assumption we just went through where there's all simplifies just becomes a product of things is violated even though often in the math will want this and practice be", "start_timestamp": "00:45:36", "end_timestamp": "00:46:16", "start_second": 2736, "end_second": 2776, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=2736s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "careful you need to think carefully are these sensory readings really independent or imagine the laser rangefinders reading something at time T and you run it again at time T plus 1 but nothing has changed in the world then are those really independent readings not totally necessarily because you're measuring the exact same configuration with the exact same sensors so maybe it's not completely independent so be careful what can the effect be imagine you imagine you think they're independent these readings and they're not what do", "start_timestamp": "00:46:16", "end_timestamp": "00:46:46", "start_second": 2776, "end_second": 2806, "url": 
"https://www.youtube.com/watch?v=xamzdNUN1o0&t=2776s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "you think will happen let's think about this imagine we have really one reading and really what we should be having in this equation because there's only one in one measurement happening all the others are just copies that are not independent information they're just copies of that one reading that's how dependent they are extreme case of dependence we're just you know instead of fitting it in once into our Bayes rule thing we say hey why don't we feed it in ten times because that way we can incorporate the information even more all right what", "start_timestamp": "00:46:46", "end_timestamp": "00:47:28", "start_second": 2806, "end_second": 2848, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=2806s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "would happen you feed it in ten times you multiply this thing in ten times so what will happen is that whatever that measurement is favoring you'll become overconfident in it if your favors door open and maybe normally with one time incorporating that measurement you go from 1/2 to 2/3 if you cooperated a hundred times you multiply it in hundred times all of a sudden you'll be at 99% probability instead of where you should be which is 2/3 that's exactly what happens if we look at the graph here so here is a graph showing as a number on", "start_timestamp": "00:47:28", "end_timestamp": "00:48:05", "start_second": 2848, "end_second": 2885, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=2848s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": 
"https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "the horizontal axis number of times the same measurement is incorporated using the math that is the assumption that these measurements are independent and we see that very quickly you can actually flip the probability from being 99% being one observation one state in the world to the other state in the world X 1 versus X 2 so be very careful about that another places might come is like you might maybe have I don't know accelerometer or gyro and the gyro is supposed to measure independently the angular rate of your system but the gyro", "start_timestamp": "00:48:05", "end_timestamp": "00:48:43", "start_second": 2885, "end_second": 2923, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=2885s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "is a physical system under the hood or something physical happening there and so it might take time for that physical thing to settle in and really get that measurement and if you you know read it off I don't know ten thousand times per second that might not be valid that might only be 10 or 100 independent readings really and if you then use the 10,000 readings instead of just the ones that are actually independent you'll become wildly overconfident in whatever it is that you're measuring rather than keeping a reasonable", "start_timestamp": "00:48:43", "end_timestamp": "00:49:12", "start_second": 2923, "end_second": 2952, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=2923s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "distribution of our possible states of the world okay let's take a couple minute break here and then let's 
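The overconfidence pitfall just described is easy to reproduce numerically: feed the same measurement in n times as if each copy were independent. A small sketch with the lecture's door numbers (the function name is mine):

```python
# Wrongly incorporating the same measurement n times as if the copies were
# independent: each update multiplies in the same likelihood ratio, so the
# posterior odds grow geometrically and the belief becomes overconfident.

def repeated_update(prior_open, p_z_open, p_z_closed, n):
    num_open = prior_open * p_z_open ** n
    num_closed = (1.0 - prior_open) * p_z_closed ** n
    return num_open / (num_open + num_closed)

for n in (1, 5, 10):
    print(n, round(repeated_update(0.5, 0.6, 0.3, n), 4))
# n = 1 gives the correct 2/3; by n = 10 the belief has been pushed past
# 0.999, even though only one genuinely independent reading was taken.
```

With a likelihood ratio of 0.6/0.3 = 2, the posterior after n repeats is 2^n / (2^n + 1), which is exactly the runaway curve shown on the slide.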
look at Bayes filters and gaussians actually much more I did it's not as unstable for alright let's restart any questions about the first half yes so I've in a lot of things we assume a first-order Markovian system and kind of touching on the example that you were talking about at the end have you seen in practice whether maybe the first-order Markovian assumption isn't appropriate but a certain like n step Markovian assumption is appropriate", "start_timestamp": "00:49:12", "end_timestamp": "00:52:12", "start_second": 2952, "end_second": 3132, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=2952s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "so that we can find this balance between having something that we can get accurate I guess quantification from but also the joint distributions are not intractable yeah so it's always a trade-off between tractability and match with the real world I would say I think in general first order assumption I mean in principle if you have full state then of course definition of full state means that the first-order assumption will be true otherwise it's not full state but in practice your state will not be maybe the full state of the world to be a", "start_timestamp": "00:52:12", "end_timestamp": "00:52:50", "start_second": 3132, "end_second": 3170, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=3132s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "approximation of the state that you use as your representation for state space and as a consequence first sort of assumptions might not exactly be true I think often it's dealt with in pretty much ad hoc way where people just look at the system and say ok for this 
system it seems like we need to look this far back or that far back to get what is effectively the state of the system by including enough history but I haven't seen necessarily a very clear-cut way to define it in general and this ties directly into what we're", "start_timestamp": "00:52:50", "end_timestamp": "00:53:26", "start_second": 3170, "end_second": 3206, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=3170s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "going to do now which is Bayes filters so often the world is dynamic because robot does things other agents do things time is passing by the world changes in various ways as a consequence so how to incorporate actions into what we're doing as well as our sensor readings so typical actions robot turns its wheels to move robot uses manipulator to grasp an object a plant grows over time and takes up more space actions are never carried out with absolute certainty so in contrast to measurements actually actions generally increase the", "start_timestamp": "00:53:26", "end_timestamp": "00:54:02", "start_second": 3206, "end_second": 3242, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=3206s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "uncertainty the deal you have is you will have measurements reduce uncertainty in your distribution then action gets taken which introduces new uncertainty and the distribution becomes higher variance how to model actions well we can again do it with a probability distribution so we can say the probability of our next state X prime given current state X and action U and that again is a causal model it's something we can build a model for and then use to
calculate the distribution over next States so for example a robot might try to open a door", "start_timestamp": "00:54:02", "end_timestamp": "00:54:39", "start_second": 3242, "end_second": 3279, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=3242s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "and it might succeed or might not succeed you might have a model for that so you might say if the doors already open and or in this case closed the robot tries to close the door doors already open well 10% chance it'll just stay open because it fails 90% chance it managed to close it if it's already closed and it will keep it closed okay that's a model that might or might not be correct but these are the kind of models that we're going to be working with such that we can do calculations about possible states of the world based", "start_timestamp": "00:54:39", "end_timestamp": "00:55:09", "start_second": 3279, "end_second": 3309, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=3279s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "on our actions we took and sensory readings we get well this is exactly the law of total probability we're applying here the conditional distribution for X prime next day given the action U is well we have a distribution for current state that's what we assume we assume that we know the solution for current state will then multiply in X prime given U and X to get to joint over X Prime and X once if they joined over X Prime and X we actually just want X Prime so we sum out or integrate out X to just get the thing", "start_timestamp": "00:55:09", "end_timestamp": "00:55:47", "start_second": 3309, "end_second": 3347, "url": 
"https://www.youtube.com/watch?v=xamzdNUN1o0&t=3309s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "for X Prime that's exactly what's happening over here law of total probability with some conditioning so we can run this for the robot example and I'm not going to step through the details here but you can, you know, with those numbers do the math and you can see okay given the action that we took the probability of closed becomes 15 out of 16 and the probability of open is 1 out of 16 how about measurements measurements are what we've been talking about the most so far we'll use Bayes rule, why, again because we have an easy time getting a", "start_timestamp": "00:55:47", "end_timestamp": "00:56:25", "start_second": 3347, "end_second": 3385, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=3347s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "model for sensor reading given state but we don't know how to get a model for state given sensor reading directly because we need to know the prior over states and want to use Bayes rule to get that now in Bayes filters the framework is that we get a stream of actions and observations we have a sensor model, an action model or a dynamics model, and a prior distribution for where the state of the system is starting from so these are our givens what we want is an estimate of the state of the system at all times so we want to", "start_timestamp": "00:56:25", "end_timestamp": "00:56:59", "start_second": 3385, "end_second": 3419, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=3385s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail":
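The 15/16 and 1/16 numbers can be reproduced with the law of total probability. A sketch, assuming the pre-action belief is open = 0.625 (the posterior from the two-measurement door example; the transcript doesn't restate the starting belief for this slide):

```python
# Action (prediction) step: bel'(x') = sum_x P(x' | u, x) * bel(x).

def predict(bel, transition):
    """bel: dict state -> prob; transition: dict (x, x_next) -> P(x_next | u, x)."""
    new_bel = {}
    for x_next in bel:
        new_bel[x_next] = sum(transition[(x, x_next)] * p for x, p in bel.items())
    return new_bel

# The lecture's model for the action "close the door": if open, 10% chance it
# stays open (the robot fails), 90% it gets closed; if closed, it stays closed.
transition = {
    ("open", "open"): 0.1, ("open", "closed"): 0.9,
    ("closed", "open"): 0.0, ("closed", "closed"): 1.0,
}
# Assumed starting belief: 5/8 open, 3/8 closed.
bel = {"open": 0.625, "closed": 0.375}
print(predict(bel, transition))
# open comes out to 0.1 * 0.625 = 1/16, closed to 0.9 * 0.625 + 0.375 = 15/16.
```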
"https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "keep a running estimate over distribution over possible States for each time T pictorially often it's drawn this way this graph captures the causal relationships that are present in our set of assumptions so X again is the state over time use the actions we take actions will affect the next state the previous state will affect the next state and the observation in this case assumed to just depend on the state the state of the world determines what we might measure and so when we get to look at this graph the explicit assumptions", "start_timestamp": "00:56:59", "end_timestamp": "00:57:42", "start_second": 3419, "end_second": 3462, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=3419s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "being made is that the measurement at time T given everything that happened before everything happened before only depends on X T so for no X T no past states matter when we know X denote past measurements matter no past control inputs matter it's all summarized in state XT we're going back to the question of course if your state space doesn't capture everything about the world then this will not be true in this assumption will be violated and typically that's the case you never captured the whole world stayed in what", "start_timestamp": "00:57:42", "end_timestamp": "00:58:14", "start_second": 3462, "end_second": 3494, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=3462s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "you do but hopefully you're close enough that this actually works then same thing for dynamics next state could in 
principle depend on everything that came before but the assumption we make is that it only depends on previous state and action taken again the Probabilistic Robotics book uses a somewhat different kind of indexing than I'm personally used to but since that's the best book to go read probably on this if you want to read more I'm sticking with their notation so they said the action taken UT is the one you", "start_timestamp": "00:58:14", "end_timestamp": "00:58:48", "start_second": 3494, "end_second": 3528, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=3494s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "take that will land you in XT normally you would call that UT minus 1 I think that's what almost everybody calls it but in the book they call it UT it's the action that you know you take right before you land in state X T so that's what it is and that's also reflected in the graph over there UT is the action you take right before you end up in state XT so the underlying assumptions are independent noise meaning that this thing here these two assumptions means that if you, let's say, have a deterministic model plus noise, that the plus-noise", "start_timestamp": "00:58:48", "end_timestamp": "00:59:24", "start_second": 3528, "end_second": 3564, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=3528s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "effect is independent at different times and it also assumes, if you're going to say whatever I get out of these calculations is the correct distribution, that's only true if these assumptions are correct if you make any approximations in your dynamics model in your sensor model well those
approximations will result in misestimates that you end up with but it's the best you can do but keep that in mind there's a lot of assumptions made here and so you only get the result relative to the quality of the", "start_timestamp": "00:59:24", "end_timestamp": "00:59:56", "start_second": 3564, "end_second": 3596, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=3564s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "assumptions that you made the quality of the models that you built in okay let's step through Bayes filters so what we build up step-by-step is probably the key thing for the second part of lecture so we want the belief, which is a distribution over XT given everything from the past including also the current observation at time T now remember it's going to be a recursive thing we're going to already have this presumably for time T minus one and we're going to see if we already had it for t minus one how do we get it for time T what's the", "start_timestamp": "00:59:56", "end_timestamp": "01:00:36", "start_second": 3596, "end_second": 3636, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=3596s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "latest thing that happened latest thing that happened in that progression is the measurement ZT how do you incorporate a measurement Bayes rule so ignore everything but XT and ZT what we do, p of XT given ZT, we'd say Bayes rule: it's ZT given XT times p of XT okay that's exactly what we're doing we're going to apply Bayes rule and then some normalization right but remember there's also that other stuff and we'll just carry it around everywhere we know that we can carry things around everywhere as
long as we do it consistently so it's", "start_timestamp": "01:00:36", "end_timestamp": "01:01:08", "start_second": 3636, "end_second": 3668, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=3636s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "just Bayes rule applied now we had some assumptions that the measurement at time T given XT does not depend on anything else so we just simplify accordingly okay so we've not yet gone from XT minus 1 to XT we've incorporated the last step the measurement can we now do the transition from XT minus 1 to XT well let's think about it how do we go from XT minus 1 to XT well XT minus 1 we already have it we're multiplying in the dynamics model to get the joint and then sum out XT minus one okay that's exactly what we're going to do we", "start_timestamp": "01:01:08", "end_timestamp": "01:01:44", "start_second": 3668, "end_second": 3704, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=3668s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "now say the distribution for XT is XT given XT minus 1 times distribution for XT minus 1 and integrate or sum out XT minus 1 and sure we're carrying around all this extra stuff here, you know, the UT and the Z's and so forth but we know that's just extra stuff we can carry around consistently this is just the law of total probability for XT and XT minus 1 then we say well can we simplify this because of our Markov assumptions yes we can we don't need to condition on everything that's conditioned on there XT given everything in the past only
"https://www.youtube.com/watch?v=xamzdNUN1o0&t=3704s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "depends on UT and XT minus 1 and this here is now our complete recursion because this is a thing we assume we already have; it is the same thing we wanted to find for XT, we have it for XT minus 1 and now we can just repeat all the way back to X 0 and we're good to go so at every time step as we track our system we can incorporate the controls we applied U at time T here gives the distribution over XT this thing over here gives distribution over XT given what we already knew about XT minus 1 and the controls we applied and then we", "start_timestamp": "01:02:23", "end_timestamp": "01:02:59", "start_second": 3743, "end_second": 3779, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=3743s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "get a measurement we multiply in the likelihood of the measurement and then we renormalize and we have our distribution for XT and we can keep tracking over time all right the Markov assumption makes this even slightly simpler notation and we're good to go all right so and then the last thing here is just the notation the book uses bel, belief of XT minus 1, as a shorthand where there's no there's no assumption here it's not like we can delete all this stuff here just to be clear this stuff is now being
"https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "deleted by some assumption; it's just the notation bel of XT minus 1 by definition means p of XT minus 1 given all this, it's just a shorthand notation so for the second-to-last equation or like the third line from the bottom and the second line it goes UT at the last one and then it's ZT minus one what happened there like on the conditioning oh ok the only thing that happened there essentially is that the UT was removed yeah it's not fully spelled out but essentially UT does not play a role for XT minus 1 because it comes after and so", "start_timestamp": "01:03:36", "end_timestamp": "01:04:15", "start_second": 3816, "end_second": 3855, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=3816s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "UT was removed from that that's all that happened and so this is the final result we have we can put this in an algorithm and this is the algorithm that will be running and this is the version for discrete states of course where we can just track the belief; computing this is actually very simple you start out with some distribution prior over X your distribution over X then when you get an observation you multiply in the likelihood of the observation given X of course if you do that for all possible values of X you sum that together", "start_timestamp": "01:04:15", "end_timestamp": "01:04:55", "start_second": 3855, "end_second": 3895, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=3855s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "normalize by it and you get your new belief for X after that observation has come in if it's not an observation that
came in but you took an action well then you use the law of total probability and the dynamics model gets multiplied in you get the next state and so you can alternate between Z and U you can just have a bunch of Z's coming in incorporate them then take some actions incorporate that and just repeat over time whenever something comes in act upon what happened so in summary Bayes rule allows us to", "start_timestamp": "01:04:55", "end_timestamp": "01:05:28", "start_second": 3895, "end_second": 3928, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=3895s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "compute probabilities that are hard to assess otherwise that's really what's going on here under the hood it's that we don't have direct access to the distribution we want we have access sometimes to the reverse distribution we can use to get what we want under the Markov assumption this recursive Bayesian updating can be done very efficiently and Bayes filters are kind of the common tool for estimating the state of dynamic systems here's a simple example imagine your robot is supposed to localize it has four", "start_timestamp": "01:05:28", "end_timestamp": "01:05:55", "start_second": 3928, "end_second": 3955, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=3928s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "measurement sensors that means it measures in all four directions and it checks whether it's a wall or not in each of those directions it's a noisy sensor and we're not going to go through the math here we're going to qualitatively see what happens well if it's really over there but we of course don't know where it is the
gray here the uniform gray means our distribution is uniform over all possible locations we show the location of the robot just so you know what's going on but the robot doesn't know that the robot could be", "start_timestamp": "01:05:55", "end_timestamp": "01:06:23", "start_second": 3955, "end_second": 3983, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=3955s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "anywhere then some measurement comes in saying wow I only see things above and below me not to the sides so the Bayes rule update will result in the new distribution then the robot might say I'm going to move but it might have noisy motion so it then kind of spreads out the distribution a little bit then it measures again which shrinks it again and this process repeats over time and over time it might localize itself in this building of course this assumes it has access to a map of the building otherwise this whole", "start_timestamp": "01:06:23", "end_timestamp": "01:06:56", "start_second": 3983, "end_second": 4016, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=3983s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "Bayes update rule wouldn't work out in this case okay that's it for Bayes filters I want to now move on to Gaussians so we'll look at univariate gaussians multivariate gaussians we'll look at the law of total probability for gaussians which we already covered for the discrete case we'll look at what the math looks like for gaussians we'll do that next lecture but that's on the menu for us for gaussians and we'll look at conditioning Bayes rule how it shapes up for gaussians univariate gaussians what do they look like it is this kind of",
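The discrete Bayes filter loop described in this segment (multiply in the observation likelihood and normalize, then apply the law of total probability for an action) can be sketched as follows; the three-state space, the sensor likelihood, and the transition matrix here are hypothetical toy values, not from the lecture:

```python
import numpy as np

def measurement_update(belief, likelihood):
    """Bayes rule step: multiply the prior belief by the observation
    likelihood P(z | x) for every state, then normalize to sum to one."""
    posterior = belief * likelihood
    return posterior / posterior.sum()

def motion_update(belief, transition):
    """Law of total probability step: transition[j, i] = P(x_t = j | x_{t-1} = i, u_t),
    so the new belief is a matrix-vector product."""
    return transition @ belief

# toy example: 3 discrete states, uniform prior
belief = np.ones(3) / 3.0
# a measurement comes in whose likelihood favors state 0
belief = measurement_update(belief, np.array([0.9, 0.1, 0.1]))
# then an action with noisy motion spreads the belief out a little
belief = motion_update(belief, np.array([[0.8, 0.1, 0.1],
                                         [0.1, 0.8, 0.1],
                                         [0.1, 0.1, 0.8]]))
```

Alternating these two updates as Z's and U's arrive is exactly the "incorporate observations, take actions, repeat" loop the lecture describes.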
"start_timestamp": "01:06:56", "end_timestamp": "01:07:37", "start_second": 4016, "end_second": 4057, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=4016s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "blob kind of distribution where most of the mass is in the middle but it can actually go all the way to infinity on both sides there's something called standard deviation Sigma and 68% of the mass lies within one standard deviation of the mean and then it kind of decays out from there the mean is usually denoted by mu sanitation by Sigma and then the variance is Sigma squared the density itself is this thing over here so it's exponentiated negative x minus mu square so let's think about that when X is close to MU and the X is equal to", "start_timestamp": "01:07:37", "end_timestamp": "01:08:15", "start_second": 4057, "end_second": 4095, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=4057s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "MU that's zero and you have e to the power power zero which will give you one and then if you are the further you go away actually could be even let's see yeah each of our zero would give you one if you go further away from you you start moving away from you this thing will become and the exponent will become a more and more negative number and that will make the expo that negative number will be a lower number and density drops it's exponential Exponential's drop very quickly that so why this nicely integrates to one", "start_timestamp": "01:08:15", "end_timestamp": "01:08:55", "start_second": 4095, "end_second": 4135, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=4095s", "title": "Lecture 11 Probability 
Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "still because exponentials drop quickly enough there's not a lot of probability mass out in the faraway parts the normalization constant up front that's just you know you do some calculus and you find out that to make sure this quantity here integrates to one you need to put up front 1 over Sigma times 1 over square root 2 pi ok so what are some properties of gaussians well this integrates to 1 so we know that this thing integrates to 1 the expected value of your variable X the expected value is you integrate over all", "start_timestamp": "01:08:55", "end_timestamp": "01:09:31", "start_second": 4135, "end_second": 4171, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=4135s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "possible values X can take times the probability of taking that value if you work out the integral and I'm not proving it here I'm just saying if you work out this integral you'll see the result is mu so the expected value under a Gaussian is indeed mu not unexpected because the name mean is also saying it's the mean but you can actually do the math in principle to find out that mu is indeed the expected value under a Gaussian distribution the variance is defined as X minus mu squared on average what's your squared deviation", "start_timestamp": "01:09:31", "end_timestamp": "01:10:04", "start_second": 4171, "end_second": 4204, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=4171s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0",
"text": "away from the mean you can do the math compute this integral you'll see Sigma squared is the variance which is what we already called it but in principle you know you have to calculate it verify that the name we use for Sigma squared is actually really what variance means and indeed it is now you might say why might we care about gaussians these integrals I mean we figured them out or somebody figured it about but they're not necessarily easiest but at least they can be done well there's another reason to care about them aside from", "start_timestamp": "01:10:04", "end_timestamp": "01:10:34", "start_second": 4204, "end_second": 4234, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=4204s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "them being convenient to work with the central limit theorem classical CLT says that let x1 x2 be an infinite sequence of independent random variables with an expected values mu and variance Sigma squared then we define a new variable ZM Zn is the sum of all of them minus the sum of the averages so this should be centered around 0 because the sum of all the mind is the sum of the averages or expectations and then normalize by this quantity over here it's a scale down then for the limit of n going to infinity with", "start_timestamp": "01:10:34", "end_timestamp": "01:11:15", "start_second": 4234, "end_second": 4275, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=4234s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "infinitely many of those variables being put together in a sum and send it around zero and normalized then we have Z going to a zero one so zero mean Santa Aviation one Gaussian so what that means 
is that if whatever you care about is the effect of multiple independent factors the resulting variable is often distributed more or less like a Gaussian and the more independent factors contribute to the variable you care about the closer that variable we ultimately care about will be to being distributed according to a Gaussian so", "start_timestamp": "01:11:15", "end_timestamp": "01:11:51", "start_second": 4275, "end_second": 4311, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=4275s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "it's not just about convenience of math though that's a big part of it another big factor why gaussians are a reasonable thing to use is that in fact if there are enough underlying factors the thing you're looking at might really be a Gaussian now about multivariate gaussians well here's what it looks like these are densities where X is now a vector so we again see a normalization happening up front which has a determinant of Sigma in it Sigma is a symmetric matrix the covariance matrix and we have X minus mu X", "start_timestamp": "01:11:51", "end_timestamp": "01:12:26", "start_second": 4311, "end_second": 4346, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=4311s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "minus mu so whenever you see something like this often it's a little hard to make sense of it directly if you haven't seen it before the simplest thing is to say okay yeah there are matrices and vectors but pretend it's just scalar if it's just scalar it looks like a single univariate Gaussian again you just have X minus mu squared if it's not a scalar what do we have well Sigma is a symmetric matrix so the
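The CLT statement from this part of the lecture can be checked empirically by normalizing sums of iid variables; the choice of uniform variables, the sample counts, and the seed below are arbitrary illustration values:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 100, 50000
mu, sigma = 0.5, np.sqrt(1.0 / 12.0)   # mean and std of Uniform(0, 1)

xs = rng.random((trials, n))
# Zn = (sum of n iid variables - n*mu) / (sqrt(n) * sigma), which the CLT
# says approaches a zero mean, standard deviation one Gaussian
z = (xs.sum(axis=1) - n * mu) / (np.sqrt(n) * sigma)
```

Even though each underlying variable is uniform, the normalized sums already look Gaussian: their mean is near 0, their standard deviation near 1, and about 68% of them fall within one unit of zero.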
inverse of Sigma is also a symmetric matrix and symmetric matrices as we know from when we covered LQR are just", "start_timestamp": "01:12:26", "end_timestamp": "01:13:01", "start_second": 4346, "end_second": 4381, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=4346s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "a rotation away from being diagonal matrices just a coordinate transformation away from being diagonal so we can just as well think of it as a diagonal matrix if we were just working in the correct coordinate system okay so imagine it's diagonal then we really see x1 and mu1 interacting with the first entry in that diagonal and x2 and mu2 with the second entry and completely independent no interaction so x1 and x2 are their own gaussians in that coordinate system where now they might have different variances because the", "start_timestamp": "01:13:01", "end_timestamp": "01:13:33", "start_second": 4381, "end_second": 4413, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=4381s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "diagonal could have different entries for x1 and x2 but all it is saying is oh we have two gaussians one for x1 one for x2 they could have different variances that's it now if we're not in that coordinate system then it won't look like a diagonal but the intuition remains the same there exists a coordinate system where this would be diagonal and easy to think about also remember back when we did LQR I told you a symmetric matrix in a quadratic form is fully general because if we have a non symmetric", "start_timestamp": "01:13:33", "end_timestamp": "01:14:03", "start_second": 4413,
"end_second": 4443, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=4413s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "matrix the non symmetric part cancels out anyway now we did some math on the board and that feel free to revisit there's no reason to ever consider a non symmetric matrix Sigma because the non symmetric part will just cancel out in that quadratic form alright so we can compute expectations here to expected value of variable X is of course is mu expected deviation of X from the mean is actually entries in Sigma so this is saying expected value of x I minus mu I so how much bigger is the Exide than its mean x how much bigger is xj then its", "start_timestamp": "01:14:03", "end_timestamp": "01:14:41", "start_second": 4443, "end_second": 4481, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=4443s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "mean well if they're both together bigger and together smaller then this will always be a positive quantity and will have a positive number come out if one of them tends to be bigger when the other one is smaller and the other way around then they'll have opposite signs a negative entry will come out and they're completely independent and their bigger is smaller then I'll cancel it be 0 so Sigma IJ is positive when they both together tend to be above average negative when they have counter correlation in terms of above below", "start_timestamp": "01:14:41", "end_timestamp": "01:15:09", "start_second": 4481, "end_second": 4509, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=4481s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", 
"thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "average and 0 when there's no relationship between when they're above or below average let's look at some examples what this looks like so here is a plot of a density of a Gaussian mean at 1 0 the first axis is this 1 the second axis is this 1 in all these plots Center deviation 1 for each coordinate so symmetric Gaussian which keeps television while we shift it now the mean is at negative 0.5 for the first query we see it shifted what if we shifted even more will so shifted for the second coordinate to mean again this", "start_timestamp": "01:15:09", "end_timestamp": "01:15:43", "start_second": 4509, "end_second": 4543, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=4509s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "bump just moves around let's look at some more examples again a starting point a 0 centered variance 1 Gaussian we then reduce entries on Sigma so Sigma is diagonal here setting stay acts as a line nicely symmetric point six point six it becomes a taller peak because all the density has to be closer because you're not allowed to be as far away from the mean as often so the mouse has to be more centered you can make the standard mission larger and things will spread out you can also do things that are not diagonal matrices so here again", "start_timestamp": "01:15:43", "end_timestamp": "01:16:22", "start_second": 4543, "end_second": 4582, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=4543s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "on the left the standard one in the middle we said x1 and x2 are positively correlated so what 
do we expect we expect that there is a lot of mass along the axis where x1 and x2 are both above the mean or both below the mean the mean here is zero so along the main diagonal we expect there to be a lot of mass which is exactly what we see then here we made it even stronger 0.8 and we get even more mass along that main diagonal because whenever one is above average the other one should also be above average and when one is below average the other one", "start_timestamp": "01:16:22", "end_timestamp": "01:16:55", "start_second": 4582, "end_second": 4615, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=4582s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "should be below average most of the time you can also draw this differently drawing these kind of 3d sketches is sometimes hard to do it's much easier to draw contours so the contours of the densities are shown below the kind of 3d plot then here's the corresponding density contour for the middle one and for the last one you see indeed the ellipse runs along that main diagonal how about some other examples here is negative correlation so here it runs along the opposite axis when x1 is above average x2 is below average and so we get it to", "start_timestamp": "01:16:55", "end_timestamp": "01:17:35", "start_second": 4615, "end_second": 4655, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=4615s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "xamzdNUN1o0", "text": "run the opposite way we can make it even more negative negative point-eight even stronger negative correlation or we can make it positive again and also make one have higher variance than the other one so the first coordinate has more variance than the other one any questions
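The effect of the off-diagonal covariance entries described in this part of the lecture can be verified by sampling; the particular covariance values and sample size are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.zeros(2)
Sigma = np.array([[1.0, 0.8],
                  [0.8, 1.0]])  # positive off-diagonal: mass along the main diagonal

samples = rng.multivariate_normal(mu, Sigma, size=100000)
emp_cov = np.cov(samples.T)   # empirical covariance, should be close to Sigma

# with positive correlation, x1 and x2 tend to be above or below their
# (zero) means together, i.e. their product is usually positive
both_same_side = np.mean(samples[:, 0] * samples[:, 1] > 0)
```

Flipping the sign of the off-diagonal entries would concentrate the mass along the opposite diagonal instead, matching the negative-correlation contour plots discussed above.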
about these okay then next lecture we'll do the math for the total probability law and Bayes rule for gaussians actually one quick extra announcement for today today's lecture is probably one lecture where the pace is probably the most off for", "start_timestamp": "01:17:35", "end_timestamp": "01:18:27", "start_second": 4655, "end_second": 4707, "url": "https://www.youtube.com/watch?v=xamzdNUN1o0&t=4655s", "title": "Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics", "thumbnail": "https://i.ytimg.com/vi/xamzdNUN1o0/hqdefault.jpg"} {"video_id": "SsnWM1xWDu4", "text": "so yeah it's a pleasure to be here and today we'll be cooking pseudo-labels a couple of words about me I'm a Kaggle competitions Grandmaster located in Minsk Belarus my nickname on Kaggle is V dot e dot s dot and currently I work as a data scientist at H2O so today we'll be talking about some distinctions between labeled and unlabeled data then we will talk about what pseudo-labeling actually is some use cases and some recipes of how to cook pseudo-labels and some examples of real Kaggle", "start_timestamp": "00:00:00", "end_timestamp": "00:00:42", "start_second": 0, "end_second": 42, "url": "https://www.youtube.com/watch?v=SsnWM1xWDu4&t=0s", "title": "How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/SsnWM1xWDu4/maxresdefault.jpg"} {"video_id": "SsnWM1xWDu4", "text": "competitions where pseudo-labels were applied and achieved good results so here is the general supervised learning problem train and test data and for train data we are given labels so we have some kind of targets for the data and our goal is to build a model to predict the test labels so test is kind of unlabeled data and generally it is the usual Kaggle competition scheme where we are given some kind of labeled data and you need to make a model to predict on the unlabeled [Music] and the problem we'll be talking
about", "start_timestamp": "00:00:42", "end_timestamp": "00:01:19", "start_second": 42, "end_second": 79, "url": "https://www.youtube.com/watch?v=SsnWM1xWDu4&t=42s", "title": "How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/SsnWM1xWDu4/maxresdefault.jpg"} {"video_id": "SsnWM1xWDu4", "text": "today is the kind of situation when a lot of unlabeled data and label data is very small so this could happen to both in Europe some kind of usual motion journal projects and as well as on Kegel competitions and here probably you could some kind of create some tricky models to build them on this small label data and apply some different techniques but probably the better approach is to somehow use this huge unlabeled data and the reasons why like what will have such situations when the label data is very small the first one is it is expensive", "start_timestamp": "00:01:19", "end_timestamp": "00:02:01", "start_second": 79, "end_second": 121, "url": "https://www.youtube.com/watch?v=SsnWM1xWDu4&t=79s", "title": "How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/SsnWM1xWDu4/maxresdefault.jpg"} {"video_id": "SsnWM1xWDu4", "text": "so to label your data you need to acquire some special people or you need to acquire domain experts or use some special software in order to label your data and obtain label data so consequently it is also time consuming so you need for example one month to label one more potential for your data and of course your management will not be satisfied with with this approach that you need - too much time and you need to put the model right away there are some other reasons for example seconds it could be some sophisticated", "start_timestamp": "00:02:01", "end_timestamp": "00:02:34", "start_second": 121, "end_second": 154, "url": "https://www.youtube.com/watch?v=SsnWM1xWDu4&t=121s", "title": "How to cook pseudo-labels | by Yauhen 
Babakhin | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/SsnWM1xWDu4/maxresdefault.jpg"} {"video_id": "SsnWM1xWDu4", "text": "experiments you need for example to build some I don't know very hard experimental setup and there are lots of steps that this needs and it is hard to repeat it frequently and that is why it also could be expensive and time consuming so these are kind of the basic reasons why we have such situations with small labeled data and here is a quote by Andrew Ng it is not who has the best algorithms that wins it is who has the most data and probably it is much more crucial nowadays when there are lots of", "start_timestamp": "00:02:34", "end_timestamp": "00:03:07", "start_second": 154, "end_second": 187, "url": "https://www.youtube.com/watch?v=SsnWM1xWDu4&t=154s", "title": "How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/SsnWM1xWDu4/maxresdefault.jpg"} {"video_id": "SsnWM1xWDu4", "text": "machine learning models that work out of the box so you could apply them and solve almost any problem type just using some predefined models and the problem is that you can't apply models when you don't have data or the available labeled data is too small so if you have small labeled data it is also hard to train some kind of huge supervised model and in case you're unable to get more labeled data and if it is impossible or it is hard to", "start_timestamp": "00:03:07", "end_timestamp": "00:03:43", "start_second": 187, "end_second": 223, "url": "https://www.youtube.com/watch?v=SsnWM1xWDu4&t=187s", "title": "How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/SsnWM1xWDu4/maxresdefault.jpg"} {"video_id": "SsnWM1xWDu4", "text": "acquire then to reduce the labeling costs semi-supervised learning is
introduced in a simple example here on the left you have a classic supervised classification problem where we have for example two classes triangles and squares and our goal is to build a classifier so a decision boundary that would distinguish these two classes so here on image B we have some kind of sample decision boundary between these two classes what semi-supervised learning allows you to do is utilize also the unlabeled data so these are the red", "start_timestamp": "00:03:43", "end_timestamp": "00:04:20", "start_second": 223, "end_second": 260, "url": "https://www.youtube.com/watch?v=SsnWM1xWDu4&t=223s", "title": "How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/SsnWM1xWDu4/maxresdefault.jpg"} {"video_id": "SsnWM1xWDu4", "text": "dots we somehow observe this data but we don't have the real labels and actually from these red dots we kind of get the structure of the data and it allows us to sometimes change our decision boundary using this knowledge and we see that the decision boundary now is more reliable and more generalizable and what actually is pseudo-labeling it's kind of the simplest semi-supervised learning so semi-supervised learning has lots of different approaches but pseudo-", "start_timestamp": "00:04:20", "end_timestamp": "00:04:57", "start_second": 260, "end_second": 297, "url": "https://www.youtube.com/watch?v=SsnWM1xWDu4&t=260s", "title": "How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/SsnWM1xWDu4/maxresdefault.jpg"} {"video_id": "SsnWM1xWDu4", "text": "labeling is like the most common one and so it's the most easy to use and the idea is pretty straightforward so we have the labeled data we train a model like some supervised model on this labeled data and afterwards we just make the predictions on the unlabeled data and actually
these predictions are already pseudo-labels so we treat all the test observations that we have predicted by our model as pseudo-labels and then we could some kind of concatenate these two data sets so the initial data and our predictions made by", "start_timestamp": "00:04:57", "end_timestamp": "00:05:31", "start_second": 297, "end_second": 331, "url": "https://www.youtube.com/watch?v=SsnWM1xWDu4&t=297s", "title": "How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/SsnWM1xWDu4/maxresdefault.jpg"} {"video_id": "SsnWM1xWDu4", "text": "our models and treat all of this data as a kind of extended version of our labeled data and use the pseudo-labels in our subsequent training yeah before speaking of how we could utilize pseudo-labels I'll talk about a couple of ingredients the first one is confidence so instead of taking all the predictions from the whole test set we're interested only in the confident predictions the reason for that is if we add to the pseudo-labels some observations that are hard to predict or some special cases some", "start_timestamp": "00:05:31", "end_timestamp": "00:06:09", "start_second": 331, "end_second": 369, "url": "https://www.youtube.com/watch?v=SsnWM1xWDu4&t=331s", "title": "How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/SsnWM1xWDu4/maxresdefault.jpg"} {"video_id": "SsnWM1xWDu4", "text": "corner cases then it would contaminate our subsequent training because it introduces some noise and some bias into our model and we need to choose only the confident predictions in order to select only the observations that our model is confident in there are different definitions of what is a confident prediction for different types of problems so for example for a classification problem the easiest way is to kind of take the
probabilities for each class that were", "start_timestamp": "00:06:09", "end_timestamp": "00:06:44", "start_second": 369, "end_second": 404, "url": "https://www.youtube.com/watch?v=SsnWM1xWDu4&t=369s", "title": "How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/SsnWM1xWDu4/maxresdefault.jpg"} {"video_id": "SsnWM1xWDu4", "text": "predicted and for example if at least one class has probability over 0.9 then okay it is a reliable observation we could add it to the confident pseudo-labels for image segmentation problems for example we could use some kind of percentage of confident pixels so we first find the confident pixels on the image and then if the percentage of confident pixels is for example over eighty percent then this observation is confident and of course for regression type problems it's a little bit tricky", "start_timestamp": "00:06:44", "end_timestamp": "00:07:17", "start_second": 404, "end_second": 437, "url": "https://www.youtube.com/watch?v=SsnWM1xWDu4&t=404s", "title": "How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/SsnWM1xWDu4/maxresdefault.jpg"} {"video_id": "SsnWM1xWDu4", "text": "because it's hard to tell what is a confident prediction for the regression problem but one approach that you could use for example in different problems is that you can look at your predictions from one epoch to another during the training of the neural network and if you see huge jumps from one epoch to another during the neural network training it means that probably this observation is unreliable and it is not a good idea to include it in the pseudo-labels and overall pseudo-labels is kind of the", "start_timestamp": "00:07:17", "end_timestamp": "00:07:53", "start_second": 437, "end_second": 473, "url": "https://www.youtube.com/watch?v=SsnWM1xWDu4&t=437s", "title": "How to cook pseudo-labels |
by Yauhen Babakhin | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/SsnWM1xWDu4/maxresdefault.jpg"} {"video_id": "SsnWM1xWDu4", "text": "method that is widely used in the deep learning context the reason for that is that neural networks allow you to continue training online so you can add some new data during the training so but probably not probably for sure it could also be used for some classic machine learning problems but it is not so popular so my talk will be more about neural networks and the deep learning context yeah the second ingredient is ensembles so instead of training one model and predicting one set of pseudo-labels we", "start_timestamp": "00:07:53", "end_timestamp": "00:08:33", "start_second": 473, "end_second": 513, "url": "https://www.youtube.com/watch?v=SsnWM1xWDu4&t=473s", "title": "How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/SsnWM1xWDu4/maxresdefault.jpg"} {"video_id": "SsnWM1xWDu4", "text": "could train multiple models ensemble them in some way and obtain a new set of pseudo-labels the reason for using ensembles well there are two different reasons the first reason is that it is better to use an ensemble instead of one model if you're talking about the quality so a single model will almost always be worse than an ensemble of models and the second reason for that is that ensembles allow us to get diversity into the pseudo-labels so if we for example use only a single model from one stage to", "start_timestamp": "00:08:33", "end_timestamp": "00:09:08", "start_second": 513, "end_second": 548, "url": "https://www.youtube.com/watch?v=SsnWM1xWDu4&t=513s", "title": "How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/SsnWM1xWDu4/maxresdefault.jpg"} {"video_id": "SsnWM1xWDu4", "text": "another and continue training with a single model then you kind
of propagate the errors or homes on this model and if you're using example since the diverse models could just somehow eliminate this effect and we obtain a more generalizable to the labels all right so the first recipe how how we could utilize to the labels about change multi nia sleaze so it consists of two steps the first step is that we just Union two datasets label the data and see the labels obtained and treat it as a mutable data so of course he could from", "start_timestamp": "00:09:08", "end_timestamp": "00:09:43", "start_second": 548, "end_second": 583, "url": "https://www.youtube.com/watch?v=SsnWM1xWDu4&t=548s", "title": "How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/SsnWM1xWDu4/maxresdefault.jpg"} {"video_id": "SsnWM1xWDu4", "text": "to deliver secret select one with continent predictions were some kind of only subsets Anderson dish but which is also the labels as well as a column as a label and afterwards this new label data set could be used to train a new model so now instead of using original we use both pseudo inverse sine theta and trainer model and it occurs is it such approach allows to get better results and change with a single model I mean the model is a zero strain on unlabeled data so identity so what is the process it is to concatenate train data and to", "start_timestamp": "00:09:43", "end_timestamp": "00:10:23", "start_second": 583, "end_second": 623, "url": "https://www.youtube.com/watch?v=SsnWM1xWDu4&t=583s", "title": "How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/SsnWM1xWDu4/maxresdefault.jpg"} {"video_id": "SsnWM1xWDu4", "text": "the labels all right another approach is called pre-trained it's a kind of a little different in the previous approach was based on some kind of on a data level so we can created our data in the city in the interested single set of labels now we are solving some kind of a 
model-level problem: we take only the pseudo-labels, train the model only on the pseudo-labels, and we obtain some kind of weight initialization, so we save the weights we have obtained, and these weights could be", "start_timestamp": "00:10:23", "end_timestamp": "00:10:57", "start_second": 623, "end_second": 657, "url": "https://www.youtube.com/watch?v=SsnWM1xWDu4&t=623s", "title": "How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/SsnWM1xWDu4/maxresdefault.jpg"} {"video_id": "SsnWM1xWDu4", "text": "used as a starting point for subsequent training on the labeled data, on the initial labeled data. The reason why it is working is that after you have trained your model on the pseudo-labels, the weights now contain information about your dataset, about the domain you are working with, and this initialization works better than, for example, an ImageNet initialization. So the pipeline works in the following way: we have our pretrained model that was trained on the pseudo-labels, we initialize the weights, and it", "start_timestamp": "00:10:57", "end_timestamp": "00:11:28", "start_second": 657, "end_second": 688, "url": "https://www.youtube.com/watch?v=SsnWM1xWDu4&t=657s", "title": "How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/SsnWM1xWDu4/maxresdefault.jpg"} {"video_id": "SsnWM1xWDu4", "text": "allows to train faster and obtain better results when we fine-tune this model on the labeled data, on the initial labeled data. And after we have fine-tuned this model, the new model, we could make predictions on the test data, and again this approach allows to get better results compared to if we had started from scratch and just initialized the weights with ImageNet. Okay, so each recipe has some herbs and spices, and here we'll talk about the validation. When we are talking about pseudo-labels, 
pseudo-labels is", "start_timestamp": "00:11:28", "end_timestamp": "00:12:06", "start_second": 688, "end_second": 726, "url": "https://www.youtube.com/watch?v=SsnWM1xWDu4&t=688s", "title": "How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/SsnWM1xWDu4/maxresdefault.jpg"} {"video_id": "SsnWM1xWDu4", "text": "kind of a great way to overfit your model, and you can easily overfit to the leaderboard or overfit to the validation data, so you need to establish a proper way to compare the models before pseudo-labels are applied and after pseudo-labels are applied. For example, say we have four folds, so we are using basically k-fold cross-validation. The first approach could be that we just train four different models, one for each of the folds, ensemble them and obtain the pseudo-labels, and", "start_timestamp": "00:12:06", "end_timestamp": "00:12:41", "start_second": 726, "end_second": 761, "url": "https://www.youtube.com/watch?v=SsnWM1xWDu4&t=726s", "title": "How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/SsnWM1xWDu4/maxresdefault.jpg"} {"video_id": "SsnWM1xWDu4", "text": "then these pseudo-labels are used in the first or second recipe in order to continue the training. However, this approach is weak, because now the pseudo-label dataset contains the information about the targets, about the labels, of the train data, and if for example after we have applied the pseudo-labels we want to measure the quality, for example on the first fold, then it may occur that our quality is too optimistic, because the pseudo-labels already encode the right labels, including those", "start_timestamp": "00:12:41", "end_timestamp": "00:13:19", "start_second": 761, "end_second": 799, "url": "https://www.youtube.com/watch?v=SsnWM1xWDu4&t=761s", "title": "How to cook 
pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/SsnWM1xWDu4/maxresdefault.jpg"} {"video_id": "SsnWM1xWDu4", "text": "of this first fold. So the better approach is to use some kind of out-of-fold pseudo-labels: for each fold we train a separate model, predict, and create a separate pseudo-label dataset independently, and afterwards it kind of provides a reliable validation scheme, so then we can compare the models before and after the pseudo-labels have been applied. The only drawback here is that we need to train four different models, one for each fold, and obtain only one set of pseudo-labels each time, and they are not really reliable", "start_timestamp": "00:13:19", "end_timestamp": "00:13:57", "start_second": 799, "end_second": 837, "url": "https://www.youtube.com/watch?v=SsnWM1xWDu4&t=799s", "title": "How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/SsnWM1xWDu4/maxresdefault.jpg"} {"video_id": "SsnWM1xWDu4", "text": "because we miss this kind of ensemble ingredient where we are developing multiple diverse models. And actually in practice everyone uses the first scheme, as I said, in Kaggle. But the reason for the usage of this scheme is that we have already obtained an ensemble of models, so we don't have just four models: if for example for each fold we have developed three different neural network architectures, we end up with 8 to 12 models, and it is really a great way to obtain pseudo-labels with", "start_timestamp": "00:13:57", "end_timestamp": "00:14:41", "start_second": 837, "end_second": 881, "url": "https://www.youtube.com/watch?v=SsnWM1xWDu4&t=837s", "title": "How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/SsnWM1xWDu4/maxresdefault.jpg"} {"video_id": "SsnWM1xWDu4", "text": "diversity. So yes, take care of the validation scheme. And now I will talk about a couple of examples where pseudo-labels showed pretty good results. One of the competitions is Camera Model Identification; it was hosted by Kaggle last year, and the problem was to kind of classify photos made by some camera into the camera model it was taken by. So it was a kind of multi-class classification problem with ten classes, and the classes are kind of Apple, Samsung, and some other kinds of devices, and for example this particular image was taken by an HTC", "start_timestamp": "00:14:41", "end_timestamp": "00:15:24", "start_second": 881, "end_second": 924, "url": "https://www.youtube.com/watch?v=SsnWM1xWDu4&t=881s", "title": "How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/SsnWM1xWDu4/maxresdefault.jpg"} {"video_id": "SsnWM1xWDu4", "text": "One, and yeah, it's kind of impossible to tell just looking by eye, so a neural network approach has been applied here for this competition. And actually in this competition train and test data were about the same size, and we remember that pseudo-labels show great value when we have some kind of small labeled dataset and large unlabeled data; but the reason why pseudo-labels are working here on this problem is the shift between train and test: the train images were taken by", "start_timestamp": "00:15:24", "end_timestamp": "00:15:58", "start_second": 924, "end_second": 958, "url": "https://www.youtube.com/watch?v=SsnWM1xWDu4&t=924s", "title": "How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/SsnWM1xWDu4/maxresdefault.jpg"} {"video_id": "SsnWM1xWDu4", "text": "a single physical device, so for example this green one, but the test images may be taken by the same model, for example HTC or iPhone, but by a different physical device, this orange one, and so 
the goal here was to kind of use the test predictions in terms of pseudo-labels, which could probably allow to find some particular features, some particular artifacts, that are specific to this particular physical device and that could not be learned from the", "start_timestamp": "00:15:58", "end_timestamp": "00:16:34", "start_second": 958, "end_second": 994, "url": "https://www.youtube.com/watch?v=SsnWM1xWDu4&t=958s", "title": "How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/SsnWM1xWDu4/maxresdefault.jpg"} {"video_id": "SsnWM1xWDu4", "text": "green device. Okay, here is kind of a recipe of what has been applied and what would have worked for this particular competition. So firstly we are just making a kind of classic approach: we train multiple models and ensemble them, so we for example tried different architectures, different training procedures and so on, and we obtain the pseudo-labels; this approach allows us to get 66th place on the private leaderboard. The next step could be that we take these pseudo-labels,", "start_timestamp": "00:16:34", "end_timestamp": "00:17:10", "start_second": 994, "end_second": 1030, "url": "https://www.youtube.com/watch?v=SsnWM1xWDu4&t=994s", "title": "How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/SsnWM1xWDu4/maxresdefault.jpg"} {"video_id": "SsnWM1xWDu4", "text": "pretrain on them and fine-tune on train data, and it gives like a huge boost, and we are receiving a top-20 position. However, as we have discussed the distribution shift between train and test data, it is probably a better idea to train on the pure pseudo-labels. So the third step kind of eliminates the second one, and instead of fine-tuning on train data we train our model on pure pseudo-labels, so at this step we 
don't use the initial train data at all, and in such case it allows to get even better", "start_timestamp": "00:17:10", "end_timestamp": "00:17:45", "start_second": 1030, "end_second": 1065, "url": "https://www.youtube.com/watch?v=SsnWM1xWDu4&t=1030s", "title": "How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/SsnWM1xWDu4/maxresdefault.jpg"} {"video_id": "SsnWM1xWDu4", "text": "results, and it achieves an even higher place on the private leaderboard. All right, the next example is the salt identification challenge. The problem was in semantic image segmentation: we were given some kind of images of the surface under the earth, and each pixel of the image was classified into two classes, whether it is salt or not salt, and the goal was to build a model that predicts the salt deposits as some kind of masks. Right, yeah, in this case, in this competition, the", "start_timestamp": "00:17:45", "end_timestamp": "00:18:28", "start_second": 1065, "end_second": 1108, "url": "https://www.youtube.com/watch?v=SsnWM1xWDu4&t=1065s", "title": "How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/SsnWM1xWDu4/maxresdefault.jpg"} {"video_id": "SsnWM1xWDu4", "text": "train data contains only four thousand images and the test data contains eighteen thousand images, so it's really a perfect candidate for pseudo-labels, when we have this substantial size difference between the labeled and the available unlabeled data. And again we start with a simple approach: we train multiple models, obtain pseudo-labels, and it gives in this case a forty-sixth position, which is pretty good; I guess there were around three thousand participants in this competition. And on the second stage we", "start_timestamp": "00:18:28", "end_timestamp": "00:19:00", "start_second": 1108, "end_second": 1140, "url": 
"https://www.youtube.com/watch?v=SsnWM1xWDu4&t=1108s", "title": "How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/SsnWM1xWDu4/maxresdefault.jpg"} {"video_id": "SsnWM1xWDu4", "text": "apply our second recipe: we pretrain the model on pseudo-labels, fine-tune on train data, and obtain a place in the top ten. And in order to achieve better results we just repeat these two steps multiple times. So what does it mean? After the second step we obtain a new set of pseudo-labels, we again train multiple models, obtain new pseudo-labels, and again pretrain, which allows a further fine-tuning on train data. So repeating this loop multiple times allows to get better results, and we achieved the", "start_timestamp": "00:19:00", "end_timestamp": "00:19:33", "start_second": 1140, "end_second": 1173, "url": "https://www.youtube.com/watch?v=SsnWM1xWDu4&t=1140s", "title": "How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/SsnWM1xWDu4/maxresdefault.jpg"} {"video_id": "SsnWM1xWDu4", "text": "top-one position for this competition. And actually the scheme looks schematically in this way: initially we have only labeled data, we train a model in this first round, then predict the pseudo-labels on the unlabeled data, select only the confident ones and retrain the model, and in general we repeat this, for example, K times. And at each iteration we get this improvement in the score; yes, the score improvements kind of degrade, so each one is smaller and smaller, but each iteration gives more and more information about", "start_timestamp": "00:19:33", "end_timestamp": "00:20:02", "start_second": 1173, "end_second": 1202, "url": "https://www.youtube.com/watch?v=SsnWM1xWDu4&t=1173s", "title": "How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/SsnWM1xWDu4/maxresdefault.jpg"} 
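The iterative loop just described (train on labeled data, predict pseudo-labels on the unlabeled pool, keep only the confident ones, retrain from scratch, repeat K times) can be sketched as follows. This is a hedged illustration, not code from the talk: the nearest-centroid "model", the 0.9 confidence threshold, and all names are assumptions chosen only to keep the sketch self-contained.

```python
# Minimal sketch of iterative pseudo-labeling with confidence filtering.
# All names and the toy classifier are illustrative assumptions.
import numpy as np

def fit(X, y):
    """Toy classifier: one centroid per class."""
    classes = np.unique(y)
    centroids = np.stack([X[y == c].mean(axis=0) for c in classes])
    return classes, centroids

def predict_proba(model, X):
    """Softmax over negative squared distances as a toy confidence score."""
    classes, centroids = model
    d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    z = -d2 - (-d2).max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def iterative_pseudo_labeling(X_lab, y_lab, X_unlab, rounds=3, threshold=0.9):
    model = fit(X_lab, y_lab)
    for _ in range(rounds):
        proba = predict_proba(model, X_unlab)
        confident = proba.max(axis=1) >= threshold
        if not confident.any():
            break
        pseudo_y = model[0][proba.argmax(axis=1)[confident]]
        # Union the labeled data with the confident pseudo-labels, then
        # retrain from scratch so early labeling errors do not propagate.
        X_train = np.vstack([X_lab, X_unlab[confident]])
        y_train = np.concatenate([y_lab, pseudo_y])
        model = fit(X_train, y_train)
    return model

# Toy demo: two well-separated blobs, only 10 labeled points out of 200.
rng = np.random.default_rng(0)
X_all = np.vstack([rng.normal(-2.0, 0.5, (100, 2)), rng.normal(2.0, 0.5, (100, 2))])
y_all = np.repeat([0, 1], 100)
lab_idx = np.r_[0:5, 100:105]  # five labeled points per class
unlab_idx = np.setdiff1d(np.arange(200), lab_idx)

model = iterative_pseudo_labeling(X_all[lab_idx], y_all[lab_idx], X_all[unlab_idx])
pred = model[0][predict_proba(model, X_all).argmax(axis=1)]
accuracy = (pred == y_all).mean()
```

In a real competition setting the toy classifier would be replaced by the neural network ensemble the speaker describes, and the confidence rule would match the problem type (max class probability, percentage of confident pixels, or prediction stability across epochs).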
{"video_id": "SsnWM1xWDu4", "text": "the data, improves the quality of the pseudo-labels, and achieves the better results also on the leaderboard. And actually one more dish that we are using here is that we retrain the model, so at each stage we train the model from scratch. So what does it mean? If we were to continue training the model from the first stage to the second and so on, we could finish with the situation when our error propagates through each iteration: if I have made an error in the first round and the obtained pseudo-labels are inaccurate, then this error", "start_timestamp": "00:20:02", "end_timestamp": "00:20:33", "start_second": 1202, "end_second": 1233, "url": "https://www.youtube.com/watch?v=SsnWM1xWDu4&t=1202s", "title": "How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/SsnWM1xWDu4/maxresdefault.jpg"} {"video_id": "SsnWM1xWDu4", "text": "will propagate through each round, so with each round we are just starting from ImageNet weights and train a model from scratch. Okay, so yeah, basically there is no kind of universal recipe of how to cook pseudo-labels and how they could be applied, because it is really specific to the data you are using and the problem type you are addressing, but you have some kind of building blocks that could be combined together with some of your own spices, some ideas that could improve your", "start_timestamp": "00:20:33", "end_timestamp": "00:21:12", "start_second": 1233, "end_second": 1272, "url": "https://www.youtube.com/watch?v=SsnWM1xWDu4&t=1233s", "title": "How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/SsnWM1xWDu4/maxresdefault.jpg"} {"video_id": "SsnWM1xWDu4", "text": "performance, the final performance of your models. Yes, this approach could be applied in both competitions and in real machine learning projects, and it really 
performs very well when we have a very small labeled dataset but lots of unlabeled data is available. And when is it a good idea to use these pseudo-labels? It could be also at some kind of final stage of the competition, when you are kind of stuck with the ideas and your", "start_timestamp": "00:21:12", "end_timestamp": "00:21:48", "start_second": 1272, "end_second": 1308, "url": "https://www.youtube.com/watch?v=SsnWM1xWDu4&t=1272s", "title": "How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle", "thumbnail": "https://i.ytimg.com/vi/SsnWM1xWDu4/maxresdefault.jpg"} {"video_id": "9JpdAg6uMXs", "text": "um thank you all for coming this is a massive room so today we will have six great invited talks panel discussion and a selection of posters and spotlight presentations I don't have much to say but welcome Ian Goodfellow from OpenAI he will be giving the first talk of today an introduction to generative adversarial networks thank you good morning thank you everybody for coming I guess I'll explain first a little bit what my goals for this talk are I know there's a lot of different people here at the workshop and the main", "start_timestamp": "00:00:00", "end_timestamp": "00:00:44", "start_second": 0, "end_second": 44, "url": "https://www.youtube.com/watch?v=9JpdAg6uMXs&t=0s", "title": "Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI", "thumbnail": "https://i.ytimg.com/vi/9JpdAg6uMXs/maxresdefault.jpg"} {"video_id": "9JpdAg6uMXs", "text": "purpose of the talk is just to give everyone a little bit of context so that you know what adversarial training is what generative adversarial networks are if you were at my tutorial on Monday you probably will have seen a lot of these slides before but I'm also going to throw in a few new ideas just so that you feel like you've got something extra for your time but this talk is mostly 
for the people who have just arrived at the workshop and needed some context so this workshop is about adversarial training and the phrase adversarial", "start_timestamp": "00:00:44", "end_timestamp": "00:01:14", "start_second": 44, "end_second": 74, "url": "https://www.youtube.com/watch?v=9JpdAg6uMXs&t=44s", "title": "Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI", "thumbnail": "https://i.ytimg.com/vi/9JpdAg6uMXs/maxresdefault.jpg"} {"video_id": "9JpdAg6uMXs", "text": "training is a phrase whose usage is in flux and I don't claim exclusive ownership of the phrase but to avoid confusion I thought I'd comment a little bit on how the phrase has been used before and how it's mostly used now so I first used the phrase adversarial training in a paper called explaining and harnessing adversarial examples and in that context I used it to refer to the process of training a neural network to correctly classify adversarial examples by training the network on adversarial examples today", "start_timestamp": "00:01:14", "end_timestamp": "00:01:48", "start_second": 74, "end_second": 108, "url": "https://www.youtube.com/watch?v=9JpdAg6uMXs&t=74s", "title": "Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI", "thumbnail": "https://i.ytimg.com/vi/9JpdAg6uMXs/maxresdefault.jpg"} {"video_id": "9JpdAg6uMXs", "text": "other people have started using the phrase adversarial training for lots of different areas almost any situation where we train a model in a worst case scenario where the worst case inputs are provided either by another model or by an optimization algorithm so the phrase adversarial training now applies to lots of ideas that are both new and old the way that we use the phrase adversarial training now it could apply to things like an agent playing a game against a copy of itself like Arthur Samuel's checkers player back in the 1950s so", "start_timestamp": "00:01:48", "end_timestamp": "00:02:21", "start_second": 108, "end_second": 141, "url": 
"https://www.youtube.com/watch?v=9JpdAg6uMXs&t=108s", "title": "Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI", "thumbnail": "https://i.ytimg.com/vi/9JpdAg6uMXs/maxresdefault.jpg"} {"video_id": "9JpdAg6uMXs", "text": "it's important to recognize that when we use the phrase adversarial training today we're not only referring to things that were invented recently but the usage has expanded to encompass a lot of older things that also had other names like robust optimization most of the day's workshop is about a specific kind of adversarial training which is training of generative adversarial networks in the context of generative adversarial networks both players in the game are neural networks and the goal is to learn to generate data that", "start_timestamp": "00:02:21", "end_timestamp": "00:02:55", "start_second": 141, "end_second": 175, "url": "https://www.youtube.com/watch?v=9JpdAg6uMXs&t=141s", "title": "Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI", "thumbnail": "https://i.ytimg.com/vi/9JpdAg6uMXs/maxresdefault.jpg"} {"video_id": "9JpdAg6uMXs", "text": "resembles the data that was in the training set the reason that we call the training process for generative adversarial networks adversarial training is that the worst case input for one of these networks is generated by the other player and so one of the players is always trained to do as well as possible on the worst possible input it's worth mentioning that there are other works going on in the space of adversarial training where the goal is still to train on adversarial examples inputs that were maybe created by an", "start_timestamp": "00:02:55", "end_timestamp": "00:03:28", "start_second": 175, "end_second": 208, "url": "https://www.youtube.com/watch?v=9JpdAg6uMXs&t=175s", "title": "Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI", "thumbnail": "https://i.ytimg.com/vi/9JpdAg6uMXs/maxresdefault.jpg"} {"video_id": "9JpdAg6uMXs", "text": "optimization algorithm to confuse 
the model and you will see some posters about that here there's also some work about that in the reliable ml workshop but I hope that clears up any confusion about the term adversarial training so generative adversarial networks are mostly intended to solve the task of generative modeling the idea behind generative modeling is that we have a collection of training examples usually of large high dimensional examples such as images or audio waveforms most of the time we'll use", "start_timestamp": "00:03:28", "end_timestamp": "00:04:00", "start_second": 208, "end_second": 240, "url": "https://www.youtube.com/watch?v=9JpdAg6uMXs&t=208s", "title": "Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI", "thumbnail": "https://i.ytimg.com/vi/9JpdAg6uMXs/maxresdefault.jpg"} {"video_id": "9JpdAg6uMXs", "text": "images as the running scenario that we show pictures of in slides because it's much easier to show a picture of an image than to play an audio waveform but everything that we describe for images applies to more or less any other kind of data so there are two things you might ask for a generative model to do one is what we call density estimation we're given a large collection of examples and we want to find the probability density function that describes these examples but another thing we might do is try to learn a function or a", "start_timestamp": "00:04:00", "end_timestamp": "00:04:31", "start_second": 240, "end_second": 271, "url": "https://www.youtube.com/watch?v=9JpdAg6uMXs&t=240s", "title": "Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI", "thumbnail": "https://i.ytimg.com/vi/9JpdAg6uMXs/maxresdefault.jpg"} {"video_id": "9JpdAg6uMXs", "text": "program that can generate more samples from that same training distribution so I show that on the lower row here where we have a collection of many different training examples in this case photos from the imagenet data set and we'd like to create a lot more of those photos and we create those photos 
in a random way where the model is actually generating photos that have never been seen before but come from the same data distribution in this case the images on the right are actually just more examples from the image net data set", "start_timestamp": "00:04:31", "end_timestamp": "00:05:00", "start_second": 271, "end_second": 300, "url": "https://www.youtube.com/watch?v=9JpdAg6uMXs&t=271s", "title": "Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI", "thumbnail": "https://i.ytimg.com/vi/9JpdAg6uMXs/maxresdefault.jpg"} {"video_id": "9JpdAg6uMXs", "text": "generative models are not yet good enough to make this quality of images but that's the goal that we're striving toward the particular approach that generative adversarial networks take to generative modeling is to have two different agents playing a game against each other one of these agents is a generator network which tries to generate data and the other agent is a discriminator network that examines data and estimates whether it is real or fake the goal of the generator is to fool the discriminator and as both players get", "start_timestamp": "00:05:00", "end_timestamp": "00:05:32", "start_second": 300, "end_second": 332, "url": "https://www.youtube.com/watch?v=9JpdAg6uMXs&t=300s", "title": "Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI", "thumbnail": "https://i.ytimg.com/vi/9JpdAg6uMXs/maxresdefault.jpg"} {"video_id": "9JpdAg6uMXs", "text": "better and better at their job over time eventually the generator is forced to create data that is as realistic as possible data that comes from the same distribution as the training data the way that the training process works is that first we sample some image from the training data set like the face that we show on the Left we call this image X it's just the name of the input to the model and then the first player is this discriminator network which we represent with a capital D the discriminator network is a", "start_timestamp": "00:05:32", 
"end_timestamp": "00:06:04", "start_second": 332, "end_second": 364, "url": "https://www.youtube.com/watch?v=9JpdAg6uMXs&t=332s", "title": "Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI", "thumbnail": "https://i.ytimg.com/vi/9JpdAg6uMXs/maxresdefault.jpg"} {"video_id": "9JpdAg6uMXs", "text": "differentiable function that has parameters that control the shape of the function in other words it's usually a neural network we then apply the function D to the image X and in this case the goal of D is to make D of X be very close to one signifying that X is a real example that came from the training set in the other half of the training process we sample some random noise Z from a prior distribution over latent variables in our generative model you can think of Z as just a sort of randomness that allows the generator", "start_timestamp": "00:06:04", "end_timestamp": "00:06:41", "start_second": 364, "end_second": 401, "url": "https://www.youtube.com/watch?v=9JpdAg6uMXs&t=364s", "title": "Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI", "thumbnail": "https://i.ytimg.com/vi/9JpdAg6uMXs/maxresdefault.jpg"} {"video_id": "9JpdAg6uMXs", "text": "to output many different images instead of outputting only one realistic image after we've sampled the input noisy we apply the generator function just like the discriminator the generator is a differentiable function controlled by some set of parameters and in other words it's usually a deep neural network after applying the function G to input noisy we obtain a value of x sampled in this case from the model like the face on the right this sample X will hopefully be reasonably similar to the data distribution but might have some", "start_timestamp": "00:06:41", "end_timestamp": "00:07:17", "start_second": 401, "end_second": 437, "url": "https://www.youtube.com/watch?v=9JpdAg6uMXs&t=401s", "title": "Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI", "thumbnail": 
"https://i.ytimg.com/vi/9JpdAg6uMXs/maxresdefault.jpg"} {"video_id": "9JpdAg6uMXs", "text": "to output many different images instead of outputting only one realistic image after we've sampled the input noise Z we apply the generator function just like the discriminator the generator is a differentiable function controlled by some set of parameters and in other words it's usually a deep neural network after applying the function G to the input noise Z we obtain a value of x sampled in this case from the model like the face on the right this sample X will hopefully be reasonably similar to the data distribution but might have some", "start_timestamp": "00:06:41", "end_timestamp": "00:07:17", "start_second": 401, "end_second": 437, "url": "https://www.youtube.com/watch?v=9JpdAg6uMXs&t=401s", "title": "Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI", "thumbnail": 
better and better at catching counterfeit money and the counterfeiters learn to be better and better at producing it so in the end we can actually use game theory to analyze this situation we find that if both the police and the counterfeiters or in other words if both the discriminator and the generator have unlimited capabilities the Nash equilibrium of this game corresponds to", "start_timestamp": "00:08:26", "end_timestamp": "00:09:00", "start_second": 506, "end_second": 540, "url": "https://www.youtube.com/watch?v=9JpdAg6uMXs&t=506s", "title": "Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI", "thumbnail": "https://i.ytimg.com/vi/9JpdAg6uMXs/maxresdefault.jpg"} {"video_id": "9JpdAg6uMXs", "text": "the generator producing perfect samples that come from the same distribution as a trading data in other words the counterfeit are producing counterfeit money that is indistinguishable from real money and at that point the discriminator or in other words the police can not actually distinguish between the two sources of data and simply says that every input has probability one-half of being real and probability one-half of being fake we can formally describe the learning process using what's called a minimax game so we have a cost function for the", "start_timestamp": "00:09:00", "end_timestamp": "00:09:34", "start_second": 540, "end_second": 574, "url": "https://www.youtube.com/watch?v=9JpdAg6uMXs&t=540s", "title": "Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI", "thumbnail": "https://i.ytimg.com/vi/9JpdAg6uMXs/maxresdefault.jpg"} {"video_id": "9JpdAg6uMXs", "text": "discriminator and we call J superscript D which is just the normal cross entropy cost associated with the binary classification problem of telling real data from fake data we have one mini batch of real data drawn from the data set and what a mini batch of fake data drawn from the generator and then if we use this minimax formulation of the game then the cost for the generator is just 
the negation of the cost for the discriminator the equilibrium of this game is a saddle point of a superscript D and finding this saddle point", "start_timestamp": "00:09:34", "end_timestamp": "00:10:07", "start_second": 574, "end_second": 607, "url": "https://www.youtube.com/watch?v=9JpdAg6uMXs&t=574s", "title": "Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI", "thumbnail": "https://i.ytimg.com/vi/9JpdAg6uMXs/maxresdefault.jpg"} {"video_id": "9JpdAg6uMXs", "text": "resembles the process of minimizing the Jensen's Shannon divergence between the data and the model we can use that to actually prove that we'll recover the correct data distribution if we go to the equilibrium of the game we can analyze what the discriminator does and they play this game and we see exactly what it is that allows generative adversarial networks to be effective the basic idea is that if you take the derivatives of the minimax games value function with respect to the outputs of the discriminator we can actually solve", "start_timestamp": "00:10:07", "end_timestamp": "00:10:39", "start_second": 607, "end_second": 639, "url": "https://www.youtube.com/watch?v=9JpdAg6uMXs&t=607s", "title": "Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI", "thumbnail": "https://i.ytimg.com/vi/9JpdAg6uMXs/maxresdefault.jpg"} {"video_id": "9JpdAg6uMXs", "text": "for the optimal function that the discriminator should learn this function turns out to be the ratio between P data of X and P data of X plus P model of X you can do a little bit of algebra on that to rearrange it and you get P data of x over P model of X so we're learning a ratio between the density that the real data is drawn from and the density of the model currently represents estimating that ratio allows us to compute a lot of different divergences like the Jenson Shannon divergence and the KL divergence between the data and", "start_timestamp": "00:10:39", "end_timestamp": "00:11:12", "start_second": 639, "end_second": 672, 
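The minimax value function and the optimal discriminator the speaker is describing can be written out explicitly; this is a sketch in the standard notation of the original GAN paper (J denotes a player's cost, V the value function):

```latex
V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}}[\log D(x)]
        + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))],
\qquad J^{(D)} = -\tfrac{1}{2}\,V(D, G), \qquad J^{(G)} = -J^{(D)}.

D^{*}(x) = \frac{p_{\mathrm{data}}(x)}{p_{\mathrm{data}}(x) + p_{\mathrm{model}}(x)}
\quad\Longrightarrow\quad
\frac{D^{*}(x)}{1 - D^{*}(x)} = \frac{p_{\mathrm{data}}(x)}{p_{\mathrm{model}}(x)}.
```

The second line is the "little bit of algebra" mentioned in the talk: rearranging the optimal discriminator recovers the density ratio P data of X over P model of X.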
"url": "https://www.youtube.com/watch?v=9JpdAg6uMXs&t=639s", "title": "Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI", "thumbnail": "https://i.ytimg.com/vi/9JpdAg6uMXs/maxresdefault.jpg"} {"video_id": "9JpdAg6uMXs", "text": "model that are used for training with maximum likelihood so the key insight of generative adversarial networks is to use supervised learning to estimate a ratio that we need to be able to do unsupervised learning there are also a variety of other papers by Shakir Mohamed and his collaborators and Sebastian Nowozin and his collaborators that talk a lot about the different divergences that you can learn with these kinds of techniques and how this estimation procedure compares to other techniques that have also been developed in the statistical estimation", "start_timestamp": "00:11:12", "end_timestamp": "00:11:45", "start_second": 672, "end_second": 705, "url": "https://www.youtube.com/watch?v=9JpdAg6uMXs&t=672s", "title": "Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI", "thumbnail": "https://i.ytimg.com/vi/9JpdAg6uMXs/maxresdefault.jpg"} {"video_id": "9JpdAg6uMXs", "text": "literature previously but this is the basic idea right here that we're able to learn this ratio so far I've described everything in terms of the minimax game I personally recommend that you don't use exactly that formulation you use a slightly different formulation where the generator has its own separate cost and the idea is that rather than minimizing the discriminator's payoff the generator should maximize the probability that the discriminator makes a mistake the nice thing about this formulation is that the generator is", "start_timestamp": "00:11:45", "end_timestamp": "00:12:18", "start_second": 705, "end_second": 738, "url": "https://www.youtube.com/watch?v=9JpdAg6uMXs&t=705s", "title": "Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI", "thumbnail": "https://i.ytimg.com/vi/9JpdAg6uMXs/maxresdefault.jpg"} {"video_id": "9JpdAg6uMXs", "text": 
"much less likely to suffer from the vanishing gradients problem but this is more of a practical tip and trick rather than a strong theoretical recommendation and some of the other speakers you'll see today might actually give other advice so it's kind of an open question about exactly which tips and tricks work the best one of the really cool things about generative adversarial Nets is that you can do arithmetic on the z vectors that drive the output of the model we can think of Z as a set of latent variables that describe what is", "start_timestamp": "00:12:18", "end_timestamp": "00:12:52", "start_second": 738, "end_second": 772, "url": "https://www.youtube.com/watch?v=9JpdAg6uMXs&t=738s", "title": "Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI", "thumbnail": "https://i.ytimg.com/vi/9JpdAg6uMXs/maxresdefault.jpg"} {"video_id": "9JpdAg6uMXs", "text": "going to appear in the image and so Alec Radford the co-organizer of this workshop and his collaborators showed that you can actually take Z vectors corresponding to pictures of a man with glasses the Z vector for a picture of a man and the Z vector for a picture of a woman and if you subtract the vector for the man from the vector for the man with glasses and you add the vector for the woman you'll actually get a vector that describes a woman with glasses and when you decode small jitters of that vector you get many different pictures of a", "start_timestamp": "00:12:52", "end_timestamp": "00:13:27", "start_second": 772, "end_second": 807, "url": "https://www.youtube.com/watch?v=9JpdAg6uMXs&t=772s", "title": "Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI", "thumbnail": "https://i.ytimg.com/vi/9JpdAg6uMXs/maxresdefault.jpg"} {"video_id": "9JpdAg6uMXs", "text": "woman wearing glasses a lot of you may have seen a similar result before with language models where the word embedding for Queen could be used to do arithmetic where if you subtract off the word embeddings for female and add the word embedding for
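The "separate generator cost" recommended here is the non-saturating loss; a toy numeric sketch (plain Python, hypothetical numbers) of why it helps early in training, when the discriminator confidently rejects generated samples:

```python
# Early in training the discriminator is confident a generated sample is
# fake, so D(G(z)) is close to 0. Compare the generator's gradient with
# respect to D under the two costs mentioned in the talk.

d_out = 1e-3  # discriminator's output on a generated sample (toy value)

# Minimax ("saturating") generator cost contains log(1 - D):
# d/dD [log(1 - D)] = -1 / (1 - D), tiny magnitude when D is near 0.
saturating_grad = -1.0 / (1.0 - d_out)

# Non-saturating cost is -log(D):
# d/dD [-log(D)] = -1 / D, large magnitude when D is near 0.
non_saturating_grad = -1.0 / d_out

print(abs(saturating_grad))      # ≈ 1.001
print(abs(non_saturating_grad))  # 1000.0
```

The non-saturating gradient stays large exactly where the minimax one vanishes, which is the practical point being made.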
male you get a vector that is very close to the word embedding for King in this case Alec and his collaborators have a slightly more exciting result because they not only show that the arithmetic works in vector space but also that the vector can be decoded to a high dimensional realistic", "start_timestamp": "00:13:27", "end_timestamp": "00:14:01", "start_second": 807, "end_second": 841, "url": "https://www.youtube.com/watch?v=9JpdAg6uMXs&t=807s", "title": "Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI", "thumbnail": "https://i.ytimg.com/vi/9JpdAg6uMXs/maxresdefault.jpg"} {"video_id": "9JpdAg6uMXs", "text": "image with many different pixels all set correctly in the case of language modeling the final result was a vector that was very near the word for King but there was no need to decode that vector into some kind of extremely complicated observation set that corresponds to a king probably the biggest issue with generative adversarial networks and to some extent with other forms of adversarial training is that the training process does not always converge most of deep learning consists of minimizing a single cost function but", "start_timestamp": "00:14:01", "end_timestamp": "00:14:37", "start_second": 841, "end_second": 877, "url": "https://www.youtube.com/watch?v=9JpdAg6uMXs&t=841s", "title": "Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI", "thumbnail": "https://i.ytimg.com/vi/9JpdAg6uMXs/maxresdefault.jpg"} {"video_id": "9JpdAg6uMXs", "text": "the basic idea of adversarial training is that we have two different players who are adversaries and each of them is minimizing their own cost function when we minimize a single cost function that's called optimization and it's unusual for us to have a major problem with non convergence we might get unlucky and converge to a location that we don't like such as a saddle point with a high cost function value but we'll usually at least converge to some general region when we play a game with two players 
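The z-space arithmetic described here can be illustrated with made-up latent vectors; this is only a toy sketch (the 4-d vectors below are hypothetical, not taken from a trained DCGAN's z space):

```python
import numpy as np

# Hypothetical latent vectors for illustration only.
man            = np.array([1.0, 0.0, 0.5, 0.2])
woman          = np.array([-1.0, 0.0, 0.5, 0.2])
glasses_offset = np.array([0.0, 1.0, 0.0, 0.0])  # assumed "glasses" direction

man_glasses   = man + glasses_offset
woman_glasses = woman + glasses_offset

# z(man with glasses) - z(man) + z(woman) ≈ z(woman with glasses)
result = man_glasses - man + woman

cos = result @ woman_glasses / (np.linalg.norm(result) * np.linalg.norm(woman_glasses))
print(cos)  # ≈ 1.0 in this toy setup
```

In the real result the analogy only holds approximately, and decoding small jitters of the resulting vector through the generator yields the varied images mentioned above.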
and each of them is", "start_timestamp": "00:14:37", "end_timestamp": "00:15:10", "start_second": 877, "end_second": 910, "url": "https://www.youtube.com/watch?v=9JpdAg6uMXs&t=877s", "title": "Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI", "thumbnail": "https://i.ytimg.com/vi/9JpdAg6uMXs/maxresdefault.jpg"} {"video_id": "9JpdAg6uMXs", "text": "simultaneously trying to minimize their own cost we might never actually approach the equilibrium of the game in particular one of the worst forms of non convergence that we see with generative adversarial networks is what we call mode collapse or if you're in on a little joke from our first paper what we also call the Helvetica scenario sometimes the basic idea behind mode collapse is that when we use the minimax formulation of the game what we'd really like to see is minimization over G in the outer loop and maximization", "start_timestamp": "00:15:10", "end_timestamp": "00:15:44", "start_second": 910, "end_second": 944, "url": "https://www.youtube.com/watch?v=9JpdAg6uMXs&t=910s", "title": "Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI", "thumbnail": "https://i.ytimg.com/vi/9JpdAg6uMXs/maxresdefault.jpg"} {"video_id": "9JpdAg6uMXs", "text": "over D in the inner loop if we do this min max problem applied to the value function V we are guaranteed to actually recover the training distribution but if we swap the order of the max and the min we get a different result in fact if we minimize over G in the inner loop the generator has no incentive to do anything other than map all inputs Z to the same output X and that output X is the point that is currently considered most likely to be real rather than fake by the current value of the discriminator so we really want to do min max and not max", "start_timestamp": "00:15:44", "end_timestamp": "00:16:21", "start_second": 944, "end_second": 981, "url": "https://www.youtube.com/watch?v=9JpdAg6uMXs&t=944s", "title": "Introduction to GANs, NIPS 2016 | Ian Goodfellow, 
OpenAI", "thumbnail": "https://i.ytimg.com/vi/9JpdAg6uMXs/maxresdefault.jpg"} {"video_id": "9JpdAg6uMXs", "text": "min which one are we actually doing the way that we train models we do simultaneous gradient descent on both players' costs and that looks very symmetric it doesn't naturally prioritize one direction of the min max or the max min in practice we find that we often see results that look an awful lot like max min unfortunately with G in the inner loop so using some very nice visualizations from Luke Metz and his collaborators we see here that if we have a target distribution we'd like to learn with several", "start_timestamp": "00:16:21", "end_timestamp": "00:16:57", "start_second": 981, "end_second": 1017, "url": "https://www.youtube.com/watch?v=9JpdAg6uMXs&t=981s", "title": "Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI", "thumbnail": "https://i.ytimg.com/vi/9JpdAg6uMXs/maxresdefault.jpg"} {"video_id": "9JpdAg6uMXs", "text": "different modes in two dimensions the training procedure shown in the bottom row of images actually visits one mode after another instead of learning to visit all of the different modes so what's going on is that the generator will identify some mode that the discriminator believes is highly likely and place all of its mass there and then the discriminator learns not to be fooled by the generator going to that one particular location and instead of learning that the generator ought to go to multiple locations the generator", "start_timestamp": "00:16:57", "end_timestamp": "00:17:28", "start_second": 1017, "end_second": 1048, "url": "https://www.youtube.com/watch?v=9JpdAg6uMXs&t=1017s", "title": "Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI", "thumbnail": "https://i.ytimg.com/vi/9JpdAg6uMXs/maxresdefault.jpg"} {"video_id": "9JpdAg6uMXs", "text": "moves on to a different location until the discriminator learns to reject that one too one way that we can try to mitigate the mode collapse problem is with the
use of what we call mini-batch features this is introduced in the paper that we presented on Monday night from OpenAI where the basic idea is to add extra features to the discriminator so the discriminator can look at an entire mini batch of data and if all the different samples in the mini batch are very similar then the discriminator can realize that mode collapse is", "start_timestamp": "00:17:28", "end_timestamp": "00:17:59", "start_second": 1048, "end_second": 1079, "url": "https://www.youtube.com/watch?v=9JpdAg6uMXs&t=1048s", "title": "Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI", "thumbnail": "https://i.ytimg.com/vi/9JpdAg6uMXs/maxresdefault.jpg"} {"video_id": "9JpdAg6uMXs", "text": "happening and reject those samples as being fake on the CIFAR-10 dataset this approach allowed us to learn samples that show all the different object classes in CIFAR-10 for the first time on the left I show you what the training data looks like for CIFAR-10 you can see that it's not that beautiful to start with because there are only 32 by 32 pixel images so the resolution is very low on the right we see the samples that come from the model and you see that you can actually recognize horses ships airplanes and so on and cars so we", "start_timestamp": "00:17:59", "end_timestamp": "00:18:32", "start_second": 1079, "end_second": 1112, "url": "https://www.youtube.com/watch?v=9JpdAg6uMXs&t=1079s", "title": "Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI", "thumbnail": "https://i.ytimg.com/vi/9JpdAg6uMXs/maxresdefault.jpg"} {"video_id": "9JpdAg6uMXs", "text": "actually have the real object classes recognizably occurring within this data set on ImageNet there are a thousand classes so it's much more difficult to resist the mode collapse problem on ImageNet our model mostly produces samples that have kind of the texture of photographs but don't necessarily have rich class structure we do occasionally get rich class structure if I show you some very
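The minibatch-features idea just described can be sketched in a simplified form; this is not the learned feature construction from the "Improved Techniques for Training GANs" paper, just a fixed cross-sample statistic standing in for it:

```python
import numpy as np

# Give the discriminator a statistic computed across the whole minibatch,
# so it can notice that every sample looks the same (mode collapse).

def minibatch_diversity(batch):
    # mean pairwise L1 distance between all samples in the batch
    diffs = np.abs(batch[:, None, :] - batch[None, :, :]).sum(axis=-1)
    return diffs.mean()  # 0 when all samples are identical

rng = np.random.default_rng(0)
diverse   = rng.normal(size=(8, 16))                   # varied samples
collapsed = np.tile(rng.normal(size=(1, 16)), (8, 1))  # identical samples

print(minibatch_diversity(diverse))    # clearly positive
print(minibatch_diversity(collapsed))  # 0.0
```

Appending such a statistic to the discriminator's per-sample features lets it reject a collapsed minibatch as fake, which is the mitigation described above.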
cherry-picked examples we're able to make lots of different pictures of things like dogs spiders koalas bears and birds and so on", "start_timestamp": "00:18:32", "end_timestamp": "00:19:06", "start_second": 1112, "end_second": 1146, "url": "https://www.youtube.com/watch?v=9JpdAg6uMXs&t=1112s", "title": "Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI", "thumbnail": "https://i.ytimg.com/vi/9JpdAg6uMXs/maxresdefault.jpg"} {"video_id": "9JpdAg6uMXs", "text": "we still see a lot of problems with the model though in particular we often see problems with counting we think that this might be something to do with the architecture of our convolutional network that it's able to test whether a feature is absent or present but it doesn't necessarily test how many times that feature occurs so we see things like this giraffe head with four eyes this dog with something like six legs or this kind of three-headed monkey thing or you know stacks of puppies rather than a single puppy or a cat with one", "start_timestamp": "00:19:06", "end_timestamp": "00:19:42", "start_second": 1146, "end_second": 1182, "url": "https://www.youtube.com/watch?v=9JpdAg6uMXs&t=1146s", "title": "Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI", "thumbnail": "https://i.ytimg.com/vi/9JpdAg6uMXs/maxresdefault.jpg"} {"video_id": "9JpdAg6uMXs", "text": "and a half faces we also often see problems with perspective where the model generates images that are extremely flat in particular the image on the lower left looks to me like somebody skinned a dog you know like a bearskin rug and then took a picture with the camera looking straight down at it on the ground while the picture in the lower middle looks to me literally like a cubist painting where in the cubism movement artists intentionally removed all the perspective from an image and rearranged the objects to show us", "start_timestamp": "00:19:42", "end_timestamp": "00:20:14", "start_second": 1182, "end_second": 1214, "url": 
"https://www.youtube.com/watch?v=9JpdAg6uMXs&t=1182s", "title": "Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI", "thumbnail": "https://i.ytimg.com/vi/9JpdAg6uMXs/maxresdefault.jpg"} {"video_id": "9JpdAg6uMXs", "text": "different parts from different angles while representing the entire thing as flat in many cases we see images that are really quite nice that have some problem with the global structure a lot of the time this just consists of images of animals where we don't actually get to see their legs they have a head and torso attached but they don't actually complete the legs anywhere and in my particular favorite generated sample so far on the lower left we have an image that we've actually named fallout cow where we have an animal that is", "start_timestamp": "00:20:14", "end_timestamp": "00:20:45", "start_second": 1214, "end_second": 1245, "url": "https://www.youtube.com/watch?v=9JpdAg6uMXs&t=1214s", "title": "Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI", "thumbnail": "https://i.ytimg.com/vi/9JpdAg6uMXs/maxresdefault.jpg"} {"video_id": "9JpdAg6uMXs", "text": "both quadrupedal and bipedal it actually has legs and it has the right number of them but it has two different bodies so what are some things that you can do with generative adversarial networks there are just so many different things that it's a little bit hard to show all of them I showed a lot more in my tutorial but I can show you just a few really quickly one thing that came out recently is image to image translation this is from the research group at Berkeley and the basic idea here is to take a conditional generative adversarial", "start_timestamp": "00:20:45", "end_timestamp": "00:21:17", "start_second": 1245, "end_second": 1277, "url": "https://www.youtube.com/watch?v=9JpdAg6uMXs&t=1245s", "title": "Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI", "thumbnail": "https://i.ytimg.com/vi/9JpdAg6uMXs/maxresdefault.jpg"} {"video_id": "9JpdAg6uMXs", "text": "network
and map from one domain to another it can do things like take images that say for every pixel what kind of class should appear at each pixel and turn that into a photorealistic scene with the desired objects in the desired positions it can also take an aerial photo and turn it into a map or it can take a sketch of an object and turn it into a photo of an object more recently there was a very exciting result that finally developed the ability to generate realistic samples on the ImageNet data set from", "start_timestamp": "00:21:17", "end_timestamp": "00:21:52", "start_second": 1277, "end_second": 1312, "url": "https://www.youtube.com/watch?v=9JpdAg6uMXs&t=1277s", "title": "Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI", "thumbnail": "https://i.ytimg.com/vi/9JpdAg6uMXs/maxresdefault.jpg"} {"video_id": "9JpdAg6uMXs", "text": "all 1,000 classes and with really good diversity this result is called plug-and-play generative models and it combines many different approaches to generative modeling including generative adversarial Nets moment matching denoising auto-encoders and Langevin sampling the results are really excellent and we see lots of different very recognizable high quality images with all the right numbers of legs and everything so generative modeling has really come very far in just the last month actually and generative adversarial nets are part of", "start_timestamp": "00:21:52", "end_timestamp": "00:22:26", "start_second": 1312, "end_second": 1346, "url": "https://www.youtube.com/watch?v=9JpdAg6uMXs&t=1312s", "title": "Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI", "thumbnail": "https://i.ytimg.com/vi/9JpdAg6uMXs/maxresdefault.jpg"} {"video_id": "9JpdAg6uMXs", "text": "that progress I have a few comments about exactly what it is that allows generative adversarial networks to work well on kind of an intuitive level one of the main things that's really different about generative adversarial nets compared to
other approaches to machine learning is that they give a very nice way of telling the model that there are multiple correct answers so a lot of the time with supervised learning we use something like mean squared error to tell the model what its output should have been so on the left I show a little", "start_timestamp": "00:22:26", "end_timestamp": "00:22:55", "start_second": 1346, "end_second": 1375, "url": "https://www.youtube.com/watch?v=9JpdAg6uMXs&t=1346s", "title": "Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI", "thumbnail": "https://i.ytimg.com/vi/9JpdAg6uMXs/maxresdefault.jpg"} {"video_id": "9JpdAg6uMXs", "text": "bit about what's wrong with the mean squared error training process suppose we have the blue dot at the bottom of the slide representing some input and we'd like to learn to map this input to some desired output suppose all the different green dots represent different possible outputs that are all valid well in the training set suppose the label that we had for this particular blue dot was the green dot on the far left but suppose that the model produced a green dot on the far right mean squared error will induce the red error", "start_timestamp": "00:22:55", "end_timestamp": "00:23:26", "start_second": 1375, "end_second": 1406, "url": "https://www.youtube.com/watch?v=9JpdAg6uMXs&t=1375s", "title": "Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI", "thumbnail": "https://i.ytimg.com/vi/9JpdAg6uMXs/maxresdefault.jpg"} {"video_id": "9JpdAg6uMXs", "text": "arrow saying that instead of producing the dot on the right the model should have produced the dot on the left which is the one that appears in the training set that means that over time the blue dot will actually get mapped to something more like the mean of all these different green dots and that causes us to learn things like blurry images when we try to learn to predict different images associated with some input generative adversarial networks don't actually directly use a
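The mean-squared-error argument just made can be reproduced numerically; a toy sketch with made-up one-dimensional "modes" (not the actual image experiment from the talk):

```python
import numpy as np

# One input has several equally valid outputs: two modes at -1 and +1.
# The MSE-optimal prediction is their mean, which lies far from every
# actual mode — the 1-d analogue of a blurry image.
targets = np.array([-1.0, -1.0, 1.0, 1.0])  # valid outputs for one input

candidates = np.linspace(-1.5, 1.5, 301)    # possible predictions
mse = ((candidates[:, None] - targets[None, :]) ** 2).mean(axis=1)
best = candidates[np.argmin(mse)]

print(best)  # ≈ 0.0: the mean of the modes, not any real mode
```

A discriminator-based loss avoids this because, as the talk explains next, it can endorse any one of the modes rather than only their average.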
pair of inputs and outputs to tell the model", "start_timestamp": "00:23:26", "end_timestamp": "00:23:57", "start_second": 1406, "end_second": 1437, "url": "https://www.youtube.com/watch?v=9JpdAg6uMXs&t=1406s", "title": "Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI", "thumbnail": "https://i.ytimg.com/vi/9JpdAg6uMXs/maxresdefault.jpg"} {"video_id": "9JpdAg6uMXs", "text": "what it should do instead the discriminator learns how inputs and outputs can be paired and then the discriminator tells the model whether it did a good job or not so the discriminator would ideally learn that all of the different green dots are possible options and then when the generator produces the green dot on the right the discriminator says that that was a good thing to do there are many different good things that the model can do and the discriminator will hopefully endorse all of them so we now have this mechanism", "start_timestamp": "00:23:57", "end_timestamp": "00:24:27", "start_second": 1437, "end_second": 1467, "url": "https://www.youtube.com/watch?v=9JpdAg6uMXs&t=1437s", "title": "Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI", "thumbnail": "https://i.ytimg.com/vi/9JpdAg6uMXs/maxresdefault.jpg"} {"video_id": "9JpdAg6uMXs", "text": "for saying that many different outputs are possible instead of always steering the model toward one predefined answer we can see this especially in the context of next video frame prediction so this paper by Bill Lotter and his collaborators shows what happens when we use a few different kinds of models to predict the next frame in a video on the left I show the ground truth where we have this 3d rendered image of a person's head and you can see that the image is very sharp and has a clearly visible ear using a model that", "start_timestamp": "00:24:27", "end_timestamp": "00:25:02", "start_second": 1467, "end_second": 1502, "url": "https://www.youtube.com/watch?v=9JpdAg6uMXs&t=1467s", "title": "Introduction to GANs, NIPS 2016 | Ian
Goodfellow, OpenAI", "thumbnail": "https://i.ytimg.com/vi/9JpdAg6uMXs/maxresdefault.jpg"} {"video_id": "9JpdAg6uMXs", "text": "was trained with mean squared error in the image in the middle we see that the ear has vanished because the exact location of the ear is not especially predictable and when the model averages over many different possible places the ear could go it vanishes similarly the eye has become blurry in the image on the right we see what happens when an adversarial loss is included in the training process in this case the model is now encouraged to produce samples that actually look realistic and it knows that there are multiple different", "start_timestamp": "00:25:02", "end_timestamp": "00:25:30", "start_second": 1502, "end_second": 1530, "url": "https://www.youtube.com/watch?v=9JpdAg6uMXs&t=1502s", "title": "Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI", "thumbnail": "https://i.ytimg.com/vi/9JpdAg6uMXs/maxresdefault.jpg"} {"video_id": "9JpdAg6uMXs", "text": "answers that are all possible so it's been able to choose one of the many different sharp images that could happen at the next time step it's also worth thinking about what adversarial training looks like for people and really the way that the idea of adversarial training emerged was that economists and other researchers in those kinds of fields were already working on thinking about the way that multiple different agents acting in a market have their behavior influenced by the process of them optimizing their own", "start_timestamp": "00:25:30", "end_timestamp": "00:26:04", "start_second": 1530, "end_second": 1564, "url": "https://www.youtube.com/watch?v=9JpdAg6uMXs&t=1530s", "title": "Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI", "thumbnail": "https://i.ytimg.com/vi/9JpdAg6uMXs/maxresdefault.jpg"} {"video_id": "9JpdAg6uMXs", "text": "payoffs while all the other players optimize theirs so some things that I think could be interesting to look at from a machine
learning point of view are whether cycles in markets can be explained by the failure of optimization algorithms to converge if we have trouble fitting generative adversarial nets and we have complete information just think about how hard it is to choose prices for goods when there are many more actors and when you don't have complete information about the market I'm sure this has been studied to some", "start_timestamp": "00:26:04", "end_timestamp": "00:26:31", "start_second": 1564, "end_second": 1591, "url": "https://www.youtube.com/watch?v=9JpdAg6uMXs&t=1564s", "title": "Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI", "thumbnail": "https://i.ytimg.com/vi/9JpdAg6uMXs/maxresdefault.jpg"} {"video_id": "9JpdAg6uMXs", "text": "extent but I think that bringing the economics and the machine learning people together could find some more interesting ideas we didn't already know about we've also seen lots of cases of things like auctions that are designed to make sure that people pay the right price and that's more or less what I was thinking of when I designed generative adversarial Nets one last remark is that if we think about the way that people learn researchers like Ericsson have shown that the way to become really good at any particular", "start_timestamp": "00:26:31", "end_timestamp": "00:26:59", "start_second": 1591, "end_second": 1619, "url": "https://www.youtube.com/watch?v=9JpdAg6uMXs&t=1591s", "title": "Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI", "thumbnail": "https://i.ytimg.com/vi/9JpdAg6uMXs/maxresdefault.jpg"} {"video_id": "9JpdAg6uMXs", "text": "task is to practice it a lot but also to do deliberate practice you're not just putting in a lot of hours you are specifically choosing subtasks within the skill that you're trying to get good at that are especially difficult for you and getting feedback from an expert who coaches you you can think of adversarial training as capturing both of these aspects of developing a skill rather than just
training on lots and lots of training examples you're training on the worst case inputs that are really hard for the model and in the case of", "start_timestamp": "00:26:59", "end_timestamp": "00:27:28", "start_second": 1619, "end_second": 1648, "url": "https://www.youtube.com/watch?v=9JpdAg6uMXs&t=1619s", "title": "Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI", "thumbnail": "https://i.ytimg.com/vi/9JpdAg6uMXs/maxresdefault.jpg"} {"video_id": "9JpdAg6uMXs", "text": "generative adversarial networks you have an expert the discriminator coaching the generator on what it should have done instead so a lot of insights from human psychology and human learning are actually telling us how we can make machine learning more effective so in conclusion adversarial training is a way of training a variety of models in different ways that all involve working on a worst case input generative adversarial Nets are one of the most popular members of this framework and they're based on using the estimate of a", "start_timestamp": "00:27:28", "end_timestamp": "00:27:56", "start_second": 1648, "end_second": 1676, "url": "https://www.youtube.com/watch?v=9JpdAg6uMXs&t=1648s", "title": "Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI", "thumbnail": "https://i.ytimg.com/vi/9JpdAg6uMXs/maxresdefault.jpg"} {"video_id": "9JpdAg6uMXs", "text": "ratio of densities to do unsupervised learning and part of why they work so well is that they allow the model to have multiple correct answers and they draw on a lot of the ideas that help humans to learn really well I'm almost out of time but I think we might be able to have a few questions if that's okay with the organizers you have several microphones on the sides okay yes we have a question first here over here so a question for sequential things like video do you know of any work using a recurrent", "start_timestamp": "00:27:56", "end_timestamp": "00:28:39", "start_second": 1676, "end_second": 1719, "url": 
"https://www.youtube.com/watch?v=9JpdAg6uMXs&t=1676s", "title": "Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI", "thumbnail": "https://i.ytimg.com/vi/9JpdAg6uMXs/maxresdefault.jpg"} {"video_id": "9JpdAg6uMXs", "text": "network that actually can generate the risk so the sequence of data to generate our video or what kind of charge that would be yeah there is a paper about generating videos with generative adversarial networks I forget the exact title off the top of my head okay there's also there's a paper here at this workshop today I called unrolled generative every cell networks and I know that one of their experiments involves using a recurrent network to generate Amnesty one pixel at a time so you could check out their spotlight and", "start_timestamp": "00:28:39", "end_timestamp": "00:29:08", "start_second": 1719, "end_second": 1748, "url": "https://www.youtube.com/watch?v=9JpdAg6uMXs&t=1719s", "title": "Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI", "thumbnail": "https://i.ytimg.com/vi/9JpdAg6uMXs/maxresdefault.jpg"} {"video_id": "9JpdAg6uMXs", "text": "poster and how about generating language sequence of words in discrete domain the thing that's difficult about that is is the discrete outputs which means that the generator is not differentiable so that's an open research area it might be solvable using things like the reinforce algorithm to do policy gradient on the parameters or using things like gun Bell softmax or the concrete distribution or it might be possible to generate word embeddings from the generator and then decode them to discrete values instead", "start_timestamp": "00:29:08", "end_timestamp": "00:29:37", "start_second": 1748, "end_second": 1777, "url": "https://www.youtube.com/watch?v=9JpdAg6uMXs&t=1748s", "title": "Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI", "thumbnail": "https://i.ytimg.com/vi/9JpdAg6uMXs/maxresdefault.jpg"} {"video_id": "9JpdAg6uMXs", "text": "of generating military values directly and 
how about a speech which is continuous we know that google's are wavenet use different mechanism of generating speech and using this gain framework can be used equally well yeah so before I left Google I suggested that they try generating continuous waveforms it can and I don't actually know if they tried and again didn't work or if they just went straight to using Pixlr and n-type methods to generate the continuous waveform but I do think that the continuous waveform is the way to go", "start_timestamp": "00:29:37", "end_timestamp": "00:30:09", "start_second": 1777, "end_second": 1809, "url": "https://www.youtube.com/watch?v=9JpdAg6uMXs&t=1777s", "title": "Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI", "thumbnail": "https://i.ytimg.com/vi/9JpdAg6uMXs/maxresdefault.jpg"} {"video_id": "9JpdAg6uMXs", "text": "with games if that did work the advantage of games would have over wavenet is that the gans can generate the sample much faster that wavenet needs to pass through a neural net for every single sample of the audio so it's generating you know really thousands of samples using thousands of passes to the network it takes about two minutes to generate one sample one second about you okay though again could generate a long waveform in one shot it's interesting to see that wavenet didn't really have temporary effect I", "start_timestamp": "00:30:09", "end_timestamp": "00:30:39", "start_second": 1809, "end_second": 1839, "url": "https://www.youtube.com/watch?v=9JpdAg6uMXs&t=1809s", "title": "Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI", "thumbnail": "https://i.ytimg.com/vi/9JpdAg6uMXs/maxresdefault.jpg"} {"video_id": "9JpdAg6uMXs", "text": "use show over there using mean square error kind of criterion for the image so do you have an explanation why once brother the arm is not blur so the blurring effect is with mean square error in real valued spaces and they were using discrete spaces where they have a soft max distribution so their loss 
function is not actually mean squared error it's it's a categorical cross entropy okay but that's still no excuse let's ask another question and you can check with him later yeah any other questions okay thank you", "start_timestamp": "00:30:39", "end_timestamp": "00:31:21", "start_second": 1839, "end_second": 1881, "url": "https://www.youtube.com/watch?v=9JpdAg6uMXs&t=1839s", "title": "Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI", "thumbnail": "https://i.ytimg.com/vi/9JpdAg6uMXs/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "thank you thank you very much for the warm introduction it's an honor to be here and participate in this lecture series I want to first clear up any possible misconceptions I have one kid just one and he has one head this two-headed thing was taken with Photo Booth on the iPad and his favorite thing to do is to take pictures of himself with two heads so anyway that's impossible and he's thinking about it okay all right all right okay so so I have two goals for this talk and they're kind of conflicting a little bit so the first", "start_timestamp": "00:00:00", "end_timestamp": "00:00:42", "start_second": 0, "end_second": 42, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=0s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "goal is I want to tell you about the state of the art in in circuit complexity lower bounds in particular areas of circuit complexity at least and the second goal is to give a general talk for a scientific audience so I'm I'm going to try to navigate these two things and hopefully it will turn out okay all right let's begin so as a young computer science theorist one is taught that there are roughly two kinds of people there are algorithms people and there are complexity people okay the algorithm designers ask the following very", "start_timestamp":
"00:00:42", "end_timestamp": "00:01:27", "start_second": 42, "end_second": 87, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=42s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "positive upbeat questions they asked when can we solve problems quickly what can we do with computers when can we find a good algorithm what what does good mean good could mean many sorts of things generally speaking it means that your algorithm is time efficient it runs very fast okay on the other hand there's this other group of people called complexity theorists okay the complexity theorists are negative nancies they ask where our problem is difficult to solve okay when when can we prove the problem is not easy okay in more slightly more", "start_timestamp": "00:01:27", "end_timestamp": "00:02:10", "start_second": 87, "end_second": 130, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=87s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "technical terms when can we prove a lower bound on the amount of time that we need to solve a problem or the amount of resources needed to be consumed in order to solve a problem so so this sort of lower bound thing versus algorithms think it's going to be pretty prevalent this talk so here are some intuitions about theory that you build up again as a young computer scientist you learned that the elegant design and the complexity theorists have opposing goals and tasks okay certainly the way I presented it I mean one is trying to say", "start_timestamp": "00:02:10", "end_timestamp": "00:02:46", "start_second": 130, "end_second": 166, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=130s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": 
"https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "there's an algorithm the other ones trying to say nope you know so so the way I presented there really a opposite to each other okay and the second thing you learn when way or another so the algorithms tribe has an easier life somehow somehow designing algorithms is just much easier than proving lower balance and this intuition is really really strong I mean you want to say look if I design an algorithm I just have to give you one piece of code it's just going to work and solve the problem no matter what but if just to show a", "start_timestamp": "00:02:46", "end_timestamp": "00:03:21", "start_second": 166, "end_second": 201, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=166s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "lower bound I have to show that no matter what code you wrote down even code I will never see like just there's just only so much code I could see in my life oh well no matter what you write down it's just not going to work it's just not going to solve the problem fast enough so just from that point of view it seems designing Aram's is easier okay so some of our lower bound says have you have to reason about all possible algorithms right so I assume many of you know about algorithms so I'll spend some time talking more about lower bounds at", "start_timestamp": "00:03:21", "end_timestamp": "00:03:55", "start_second": 201, "end_second": 235, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=201s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "least the beginning so the most famous lower bound question the Grandmamma of them all is P equals NP okay and just to give you a very high-level idea of what P and NP are not 
to go into the definitions NP is the set of problems where verifying a solution to the problem is easy and P are the problems where finding a solution is easy to do okay and so by easy we mean the following technical sense of polynomial time so there's some algorithm that on inputs of length n runs in some polynomial p(n) amount of steps think", "start_timestamp": "00:03:55", "end_timestamp": "00:04:31", "start_second": 235, "end_second": 271, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=235s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "of this p(n) as n squared or n cubed something like that so this is what we mean by easy most of the things that you see when you're learning algorithms they're easy sorting is easy so this problem was articulated in the 1970s by Steve Cook and Dick Karp and also in the Soviet Union by Leonid Levin okay so rather than going into definitions and things like this I will just give you an illustration of this and probably the best illustration of the difference between P and NP is the problem of factoring numbers so suppose I give you", "start_timestamp": "00:04:31", "end_timestamp": "00:05:13", "start_second": 271, "end_second": 313, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=271s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "this number okay it's 232 digits and I want to know x and y such that x times y equals this number okay well it turns out there are two such numbers okay x of this length y of this length all right now the problem is given such a number what are x and y right so this is an NP problem because if I gave you x and y to begin with if I gave you the solution of the problem the x and the y then it's easy to check that x times y equals the given number
you can definitely multiply these things in less", "start_timestamp": "00:05:13", "end_timestamp": "00:05:53", "start_second": 313, "end_second": 353, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=313s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "than a second on your cellphone and check they equal the number okay what about finding the x and y given just this number what can you do well I gave you a very special number here actually so this is the RSA-768 number it took two years and a team of number theorists and programmers to find this x and y a lot of famous people worked really really hard to find this x and y despite the fact that once it's given to me it takes no time at all to verify okay so NP is this kind of problem where I give you the number and I want to know the", "start_timestamp": "00:05:53", "end_timestamp": "00:06:33", "start_second": 353, "end_second": 393, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=353s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "two the two factors x and y well with P you just you know I just want to find the answer okay so the problem of P versus NP is asking whenever it's possible to check that x times y equals a number could I do the inverse and figure out x and y efficiently as well okay so this is a very stark contrast between these two kinds of problems okay so P equals NP is asking if verifying a solution to a problem is easy to do then can we find a solution to the problem also easily so here's another number okay", "start_timestamp": "00:06:33", "end_timestamp": "00:07:13", "start_second": 393, "end_second": 433, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=393s", "title": "Thinking Algorithmically About Impossibility",
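The verify-versus-find asymmetry the speaker describes can be sketched in a few lines. This is an illustrative toy, not the talk's example: the semiprime below is made up (the real RSA-768 modulus is far beyond trial division), and `find_factor` is the naive algorithm, whose running time is exponential in the number of digits.

```python
def verify(n, x, y):
    # Verifying a claimed factorization: a single multiplication, essentially instant.
    return x * y == n

def find_factor(n):
    # Finding a factor by trial division: about sqrt(n) steps,
    # i.e. exponential in the digit count of n.
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return None  # n is prime

n = 2017 * 2027                 # tiny stand-in for an RSA modulus (both factors prime)
print(verify(n, 2017, 2027))    # True: checking a solution is easy
print(find_factor(n))           # (2017, 2027): finding one takes a search
```

The same contrast scales badly: doubling the number of digits squares the trial-division work, which is why a 232-digit number took a team two years while verifying their answer takes under a second.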
"thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "it's been slightly longer it's 240 decimal digits alright so we don't actually don't know how to factor this number well okay as far as we know perhaps our friends the NSA have factored this number a long time ago okay and and if you want to you know a bigger list of numbers that we don't know how to factor there's a whole page and Wikipedia with lots and lots of numbers going up about six hundred digits big lists of numbers okay so if we could factor integers easily so this problem is looks a simple number theoretic problem but you", "start_timestamp": "00:07:13", "end_timestamp": "00:07:49", "start_second": 433, "end_second": 469, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=433s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "probably know that if we could factor these interests easily we could break all our essay based cryptography so we could break anything that says there's just a little lock here on your web browser yeah forget it it's not really I mean this is crack it open okay well HTTPS SSL totally broken okay these are as same numbers actually is a great resource for someone like me if someone send emails me and say they have a proof people's MP I just forwarding this link and say I'll be waiting you know just all you got to do", "start_timestamp": "00:07:49", "end_timestamp": "00:08:21", "start_second": 469, "end_second": 501, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=469s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "just tell me these you know short 100 something digit numbers from everything in this list okay then oh then I read your paper okay so so that that makes them so very very makes it 
easy for me right okay so we believe the answer is no okay hope this is you know somewhat convincing just this this problem of multiplication versus factoring a number but how do we mathematically prove that how could we prove there's just no way to do it there's no way to break a number into its factors we have to show that some", "start_timestamp": "00:08:21", "end_timestamp": "00:08:54", "start_second": 501, "end_second": 534, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=501s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "problem that's easy to verify is somehow impossible to efficiently solve okay so this requires us to somehow reason about all easy programs one way or another so let me emphasize P versus NP is only one of many questions of this kind many of which I simply can't mention in this talk I just wanted to mention the most prominent one here but they all sort of hint at the same kind of thing reasoning about efficient computation we don't know how to do it okay so what are lower bounds good for so they're impossibilities they'll say you", "start_timestamp": "00:08:54", "end_timestamp": "00:09:33", "start_second": 534, "end_second": 573, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=534s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "can't do something well the first sort of very useful way in which lower bounds were used is for algorithm engineering so if I prove a lower bound on a particular mathematical formalization of a problem that often just means that the way you set up the problem formally is not the right sort of way the math is just not going to help you if you wrote the problem down in this way so so you could try something else you try to
reformulate the problem go back to it and and see if you", "start_timestamp": "00:09:33", "end_timestamp": "00:10:09", "start_second": 573, "end_second": 609, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=573s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "can steer yourself away from things which are NP-hard and towards things which are more like P but lower bounds have actually been extremely useful in other areas such as machine learning theory useful in learning functions for which you know lower bounds for those kinds of functions for cryptography as as demonstrated with RSA so you sort of need lower bounds just to get off the ground you need to be able to say that you know I have concocted a problem efficiently but you you can't crack it you can't you know turn around and solve", "start_timestamp": "00:10:09", "end_timestamp": "00:10:43", "start_second": 609, "end_second": 643, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=609s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "it for me and also lower bounds come up in an unexpected way in the construction of so-called pseudo-random generators so these come up when you have a randomized algorithm that is an algorithm which tosses coins and you can argue you can mathematically prove that okay if you have uniform independent random coins then this algorithm is going to work well maybe maybe you know you can't rely on independent random coins well lower bounds can actually be used to remove the randomness from algorithms replace randomized algorithms systematically with", "start_timestamp": "00:10:43", "end_timestamp": "00:11:16", "start_second": 643, "end_second": 676, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=643s", "title": "Thinking Algorithmically About
Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "something that just uses a purely deterministic sequence of coin flips and still works well this is a very nice connection and sort of unexpected when you first think about it but one day I really want to try at home is that we don't really know what all lower bounds are good for because we have no idea what the proofs of them look like okay I mean this almost all this is just from sort of knowing just that the lower bound holds what you can do but what if you actually had a mathematical proof something that", "start_timestamp": "00:11:16", "end_timestamp": "00:11:47", "start_second": 676, "end_second": 707, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=676s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "actually did the reasoning and you could study it then there's no telling what we could we could learn about computation that way and so lower balance to me are one of the great scientific mysteries of our time there are conjectures about them everywhere ok Internet security relies on them being true but knowing them for sure is mostly inaccessible to us ok so so yeah this is just a great mystery why is easy computation so hard to understand okay why are lower bound so hard to prove well why are we so stuck here so", "start_timestamp": "00:11:47", "end_timestamp": "00:12:25", "start_second": 707, "end_second": 745, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=707s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "actually we have a pretty good understanding of why we're stuck so there are many known nogo theorems in complexity theory with various technical names cultivation natural proofs algebra 
zation and so on what they say in a nutshell is that the common proof techniques that we use are just not going to be enough to prove things much much weaker than P not equal to NP things much weaker than this just like very simple things we can't we can't prove lower bounds for and this has caused a great deal of weeping and wailing and gnashing of teeth a great pessimism", "start_timestamp": "00:12:25", "end_timestamp": "00:12:59", "start_second": 745, "end_second": 779, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=745s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "in complexity theory and so the big question here is how will we make progress how do we get smarter how are we going to get off the ground and start really proving that some problems are hard to solve ok so so this is what I'm going to try to say something about okay I mean obviously I don't have all the answers but I would like to say something that I hope can be conveyed to everyone yeah well sort of the main insight in work that I participated in so far is in connections between algorithms and lower bounds so we're", "start_timestamp": "00:12:59", "end_timestamp": "00:13:38", "start_second": 779, "end_second": 818, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=779s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "talking about how algorithm designers have the easier life than lower bound provers in fact there are deep connections between the two subjects that we just don't understand yet we know for sure that they are there we have sort of glimpses of connections but there is certainly something more under the hood that we've got to know and so I'll tell you about the glimpses that we have so the main idea I guess the one thing I would like you to try to take away from this talk
is this thing in this box designing algorithms is as hard as proving lower", "start_timestamp": "00:13:38", "end_timestamp": "00:14:12", "start_second": 818, "end_second": 852, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=818s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "bounds sometimes now this is one of those ideas that depending on who you talk to is either complete heresy or they thought of it already okay so that means it's a good idea by my informal judgment okay so yeah but it's sort of provocative it's very provocative actually because then the lower bound prover doesn't have it as hard as the algorithm designer okay so somehow the design of certain algorithms is going to actually give you lower bounds so maybe the algorithm designer doesn't have", "start_timestamp": "00:14:12", "end_timestamp": "00:14:48", "start_second": 852, "end_second": 888, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=852s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "it so easy after all it's just that they managed to solve other problems or maybe it's just psychological maybe we're just wired to think about algorithms you know more easily than lower bounds but there's not so much difference between them in the end so here's here's just my vague attempt to draw some parallel here so a typical result in algorithm design says here is an algorithm A that solves some problem on all possible instances of the problem now it's actually amazing how many statements of this form hold but", "start_timestamp": "00:14:48", "end_timestamp": "00:15:22", "start_second": 888, "end_second": 922, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=888s", "title": "Thinking Algorithmically About Impossibility",
"thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "the fact that like you can have a algorithm that say solves a a problem all possible graphs no matter which graph you see even graphs you will never be able to see you can still prove it's going to work and give you the right answer for this graph okay on all possible instances okay now a typical theorem from lower bounds will say something like here is a proof P that this problem can't be solved on all possible by all above two albums of some class some weak class of algorithms that I've sort of box over here and I can", "start_timestamp": "00:15:22", "end_timestamp": "00:15:54", "start_second": 922, "end_second": 954, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=922s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "rule all of them out from solving this problem here so what I would like to promote is to draw a parallel between the algorithm here in the album design problem and the proof here in lower bounds and all possible instances of the problem in the album design problem with all possible albums with some class okay so you think about this it sort of automatically suggests the kind of algorithms we want to design so the kind of problems we want to solve our problems which take some kind of algorithm as input itself and perform", "start_timestamp": "00:15:54", "end_timestamp": "00:16:30", "start_second": 954, "end_second": 990, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=954s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "some computation so we wanted to solve some kind of problem that is taking an algorithm at an input we have to be very careful of how we do this I mean like if you just took the usual kind 
of algorithm as input like the sort of Turing machine type algorithm then there are lots of results in undergrad computability theory that just say no matter what you try to do analyzing the thing it's not going to work so to make this kind of connection tractable we have to slightly change the model of algorithm we're looking at and", "start_timestamp": "00:16:30", "end_timestamp": "00:16:59", "start_second": 990, "end_second": 1019, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=990s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "this is what I'm going to talk about okay so so at a very high level here's the specific kind of connection that I'm going to talk about so suppose there exists some non-trivial circuit analysis algorithm okay so think of this as just some algorithm some fixed program maybe a Turing machine what have you it receives as input some circuit encoded in some nice way some logical circuit and for all logical circuits it can do some kind of non-trivial analysis on this thing let's say it can determine whether this", "start_timestamp": "00:16:59", "end_timestamp": "00:17:38", "start_second": 1019, "end_second": 1058, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=1019s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "circuit is satisfiable in other words whether the circuit is computing a trivial function or it's computing something interesting let's suppose I can do this in some nice way something just better than trying all possible inputs to the circuit and seeing what happens if you can solve this kind of algorithm design problem then you can say there is a function f computable by certain Turing machines for which for all
circuits they cannot compute this function", "start_timestamp": "00:17:38", "end_timestamp": "00:18:14", "start_second": 1058, "end_second": 1094, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=1058s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "so you get circuit complexity lower bounds so from the design of particular kinds of algorithmic problems ones which take some kind of algorithm or circuit as input and analyzing them non-trivially you can then prove that for whatever it is I analyzed non-trivially I now have a new function that can't be computed by it so this is very very counterintuitive if you look at it in the wrong way if you look at it the right way maybe it's slightly more intuitive okay so this is the basic kind", "start_timestamp": "00:18:14", "end_timestamp": "00:18:51", "start_second": 1094, "end_second": 1131, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=1094s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "of connection that I and others have been able to draw and we've been able to actually prove new lower bounds new limits on computing using this kind of connection so here's an outline for the rest of the talk I'll give you a very quick introduction to just boolean circuits and sort of the problems we're thinking about here I'll compare circuits to algorithms in the usual sense that people learn them and then I'll talk briefly at the end about how circuit analysis algorithms can imply circuit limitations okay all right so", "start_timestamp": "00:18:51", "end_timestamp": "00:19:26", "start_second": 1131, "end_second": 1166, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=1131s", "title": "Thinking Algorithmically
About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "bullion circuits look you know something like this so this is a boolean circuit which is taking three bits of input okay it's taking the aunt of each pair of bits and then taking the or of those bits and so it's outputting a and B or B NC or ANC okay in general a circuit of size s for us will take a fixed number of zero one inputs and it will output let's say a single bit you can generalize it to outputting mobile bits for SS a single bit then for F times it's going to take two previously computed bits maybe some", "start_timestamp": "00:19:26", "end_timestamp": "00:20:00", "start_second": 1166, "end_second": 1200, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=1166s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "input bits such as here a and B and you compute a function of those two bits and that will be some new bit that I can use in subsequent computations such as here I'm taking this or if those things okay so just to give an example this above circuit on three inputs outputs one if only if at least two the inputs of one okay now the size of the complexity of this circuit is going to be like four four gates in it and so this is how we can measure computation but for finite functions so it's important here that the number of bits of input is something", "start_timestamp": "00:20:00", "end_timestamp": "00:20:38", "start_second": 1200, "end_second": 1238, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=1200s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "finite okay and then we can talk about if I want to compute some given function how many times do I have to take some previously computed bits 
and give out another bit before I finally get the output of the function I want okay and so trying to minimize this the number of gates you have in here is the core problem in circuit complexity all right so most functions require humongous circuits okay so a very old theorem of Shannon matched by Lupanov says that with high probability if you randomly choose a function by just", "start_timestamp": "00:20:38", "end_timestamp": "00:21:19", "start_second": 1238, "end_second": 1279, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=1238s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "sort of flipping a bunch of coins generating a function by coin flipping this will require a very large circuit of size nearly 2 to the n over n okay where n is the number of inputs to this thing so this is just saying I take a bit zero one and you know I take n of those bits and I output a single bit okay and in fact every function has a circuit of size about 2 to the n over n okay so think of it this way so um I'm flipping two to the n coins and I generate some very very long two to the n bit string this
there's a simple incompressibility type argument so okay the universe is littered with you know these really really hard functions from the", "start_timestamp": "00:21:54", "end_timestamp": "00:22:30", "start_second": 1314, "end_second": 1350, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=1314s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "circumflex II point of view my question is which natural functions ones that we actually want to solve exhibit this kind of exponential behavior they require a large number of gates in order to compute them okay so this is a very central question in certain place you just trying to understand which functions can be solved at all we know that random ones cannot can we can we get is there some function of interest you know that we that we that doesn't have that we cannot to solve the small sir okay that's it so that's a sorry", "start_timestamp": "00:22:30", "end_timestamp": "00:23:08", "start_second": 1350, "end_second": 1388, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=1350s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "complex see that show so the kinds of things I want to do is circuits I is I want to analyze them so I'm going to talk about computational problems that take a circuit encode it in some way as an input and compute something on that circuit so the circuit analysis problem is a problem where the input is a circuit so here we have some logical circuit written down in some encoding the P that our program can read and we want to output some property of the function computed by this circuit so we want to know something about what", "start_timestamp": "00:23:08", "end_timestamp": "00:23:40", "start_second": 1388, "end_second": 1420, "url": 
"https://www.youtube.com/watch?v=7uplycLvraw&t=1388s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "function is it computing in you know inside this little description so the canonical example of this is a circuit satisfiability problem also known as circuit set okay it's really like the simplest circuit analysis problem in a particular sense here you're given a logical circuit see so you wrote you've written down some description of some circuits got wired together in some funky way and you want to know is it computing the all zeroes function is it the case that no matter what input I give it is always going to print zero or", "start_timestamp": "00:23:40", "end_timestamp": "00:24:14", "start_second": 1420, "end_second": 1454, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=1420s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "is there some input which will make it print one okay that would usually when something makes it print one we call it a satisfying assignment and that's the origin of satisfiability but really want to know Jesus is it computing a trivial function or not so just tell me that okay so a circuit set is a so-called np-complete problem and so it's very unlikely to be solvable efficiently okay unless P equals NP okay like this so probably we're not going to be able to solve this circuit analysis problem very well but we could ask still if there is", "start_timestamp": "00:24:14", "end_timestamp": "00:24:51", "start_second": 1454, "end_second": 1491, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=1454s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "some way to solve it faster than the 
obvious one what is the obvious algorithm just try every possible input to the circuit okay so here we're leveraging the fact that there's a finite number of inputs so the circuit has a finite number of bits coming in so if it's got n bits coming in there are 2 to the n possibilities just try them all see if any of them make this thing print one okay so can we answer this somehow faster than this obvious brute-force search algorithm okay and maybe this is an interesting", "start_timestamp": "00:24:51", "end_timestamp": "00:25:20", "start_second": 1491, "end_second": 1520, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=1491s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "question maybe it isn't I always thought it was a very interesting question turns out it actually is interesting for other reasons it actually is connected to circuit complexity but just the question of you have a silly algorithm an obvious algorithm is it the best you can do this is just a very fundamental question by itself all right so that's a quick introduction to circuits okay now I want to talk about how circuits compare to algorithms so in the usual algorithmic model you think of you know writing a single", "start_timestamp": "00:25:20", "end_timestamp": "00:25:59", "start_second": 1520, "end_second": 1559, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=1520s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "program it's going to solve a problem you have some finite description of some program and this program can take arbitrarily long inputs and still solve the problem okay if you give it an input of length 10 no problem an input of length a thousand no problem a million no problem 100 million no problem this thing
always solves the problem the same single thing okay that is in contrast with the circuit model where we only allow you to take in fixed length inputs okay so if we're thinking about how algorithms and circuits can relate to one another we've", "start_timestamp": "00:25:59", "end_timestamp": "00:26:33", "start_second": 1559, "end_second": 1593, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=1559s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "got this sort of correctness type mismatch where algorithms can take an arbitrarily long input and circuits can only take in fixed length inputs and there's a very easy way that complexity theorists came up with of sort of unifying the two and the idea is to talk about a computational model called a circuit family a circuit family is an infinite collection of circuits infinitely many you have a circuit on one bit of input a circuit on two bits of input a circuit on ten bits of input for every possible size of input", "start_timestamp": "00:26:33", "end_timestamp": "00:27:11", "start_second": 1593, "end_second": 1631, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=1593s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "you've got a circuit with that many inputs okay so for each n you have a circuit C sub n that's going to run on inputs of length n and so it's obvious what I'm going to do when I want to compute something you give me an input I measure its length I feed it to whatever circuit in the collection matches that length I run it it gives me some answer okay that's how the circuit family would compute on some problem that would normally be solved by an algorithm okay all right so in this model the circuit family", "start_timestamp": "00:27:11", "end_timestamp":
"00:27:42", "start_second": 1631, "end_second": 1662, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=1631s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "model programs have infinitely long descriptions a priori okay there is nothing bounding what these circuits could be you get a separate program every single time the input length changes so it's possible that you know if your input is you know a thousand there's something really clever you can do and at a thousand and one you've got to restart everything do something completely different and get something you know very good okay so this infinite description can be really", "start_timestamp": "00:27:42", "end_timestamp": "00:28:15", "start_second": 1662, "end_second": 1695, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=1662s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "really powerful okay but our notion of efficiency here is this class P/poly so efficiency for algorithms is just the class P and P/poly is the set of problems solvable with one of these infinite circuit families so separate circuits C sub n for each input length where for every n the size of the nth circuit is some polynomial in n this polynomial is fixed once and for all let's say it's n squared and so the circuit C sub one thousand would be a thousand squared in size okay so this is our notion of efficiency here for this sort of infinite
model okay and the idea is that each circuit here is going to be small relative to its input but you get a separate circuit for each input length that's sort of the extra thing you get you get this infinite description okay so this circuit family right here alright that's the notion of P/poly so here programs have infinite length descriptions huh excuse me why study this model okay Theory dork that's all fine and good but why don't you go play off in theory land while all the rest of us go change the world okay okay well I have", "start_timestamp": "00:28:51", "end_timestamp": "00:29:32", "start_second": 1731, "end_second": 1772, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=1731s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "an answer for you all right okay so proving limitations on what circuit families can compute is a step towards a non-asymptotic complexity theory a complexity theory that doesn't talk about polynomials or exponentials or whatever but about numbers how big does the computation get if I want to run it on inputs of length a million can I fit it inside the known universe so concrete limitations on computing within the known universe would be the thing you would like to have that's sort", "start_timestamp": "00:29:32", "end_timestamp": "00:30:10", "start_second": 1772, "end_second": 1810, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=1772s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "of the ideal thing you would like to have something non-asymptotic we don't care about polynomials exponentials we just want to know how big is the computation gonna get so think of something like any computer solving most instances of my problem needs at
least 10 to the 125 bits to be described so if you've got you know this kind of statement then your problem is just simply not solvable period okay so actually such a statement was proved by Meyer and Stockmeyer in the 70s for a particular logic problem", "start_timestamp": "00:30:10", "end_timestamp": "00:30:39", "start_second": 1810, "end_second": 1839, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=1810s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "and they derived it by reverse engineering a circuit lower bound because circuit lower bounds are actually explicitly considering the trade-offs between the size of an input and the size of the computation you've got to throw at the input to solve it whereas P versus NP is actually not talking about this it's about a fixed algorithm that's gonna work on all inputs okay and so the universe it turns out stores less than 10 to the 125 bits this is the famous Bekenstein bound derived in the 1970s and so what", "start_timestamp": "00:30:39", "end_timestamp": "00:31:11", "start_second": 1839, "end_second": 1871, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=1839s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "you're saying is that any computer solving most instances of my problem just won't fit within the known universe okay that's a pretty good lower bound okay so that's the kind of thing we would like to have and so the circuit family is just some stepping stone to getting something non-asymptotic where we're just talking about numbers that is the ultimate goal that's what we're really after okay all right so how do algorithms compare to circuit families okay so we defined this thing how does it compare to the uniform", "start_timestamp": "00:31:11", "end_timestamp":
"00:31:44", "start_second": 1871, "end_second": 1904, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=1871s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "one program world so if you've got a program that runs in say T of n time on inputs of length n so n squared time then it's well-known that you can always get a circuit family that will do the same thing and it has about T of n size as well up to some extra little factors so you can simulate efficient algorithms with efficient circuit families okay so time scales to size all right so that's all fine and good in fact if you flip random coins in your algorithm and it gives you the right answer with high", "start_timestamp": "00:31:44", "end_timestamp": "00:32:23", "start_second": 1904, "end_second": 1943, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=1904s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "probability you will still be able to get an infinite family of circuits of about the same size so in a sense you can remove the randomness when you allow for an infinite circuit family this is nothing more than the statement okay in complexity theory terms that BPP is in P/poly randomized polynomial time has efficient polynomial-size circuit families okay now suppose I'm just looking at circuit families I want to know can they be simulated by algorithms well it turns out there is a problem with a circuit family where in fact every circuit has about linear", "start_timestamp": "00:32:23", "end_timestamp": "00:32:57", "start_second": 1943, "end_second": 1977, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=1943s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw",
"text": "size circuits so it's definitely in P/poly there's only a linear size there so as the input grows the circuit size only grows linearly however that family has no algorithm at all okay there is just simply no algorithm that will solve the problem that is solved by the circuit family okay so you can't hope to make them equivalent okay the main key here is that the circuit family gets an infinite description you get a different circuit for every input length and here you have a finite description a single algorithm right and", "start_timestamp": "00:32:57", "end_timestamp": "00:33:33", "start_second": 1977, "end_second": 2013, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=1977s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "this is just a statement that some undecidable problems in fact many of them are in P/poly okay so undecidable meaning there's no algorithm at all that's just a technicality so finally I want to talk about okay suppose you've got a really complicated algorithm what can you say about the size of the circuit family solving it so let's suppose I have some algorithm running in exponential time so 2 to the n steps so given an n bit input it takes 2 to the n steps so like something just running the trivial algorithm for circuit SAT so I'm just
N squared size every circuit has this is this is a really really remarkably open question so is it somehow possible that extremely long time consuming computation can be split up into implementing tiny little you know chunks we're on every single", "start_timestamp": "00:34:10", "end_timestamp": "00:34:49", "start_second": 2050, "end_second": 2089, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=2050s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "input length when I solve that problem optimally I actually get only N squared amount of computation this is possible it's actually possible we don't believe it's true but it's still a possibility we cannot rule it out and this is saying in complexity terms that X in ppalli is an open question so exponential time being in poly size there is the open question okay so just to give you just a even more crazy way in which we still don't know how like the power circuit families talk about exponential time versus shallowness some of you", "start_timestamp": "00:34:49", "end_timestamp": "00:35:30", "start_second": 2089, "end_second": 2130, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=2089s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "might have attended ms or a workshop on neural net okay this is also we don't want shallow Nets here so this is a very very shallow neural networks so suppose you've got an algorithm and this algorithm is solving a problem that solvable in so-called non-deterministic exponential time so this is like some just gigantic unfathomable class so this is problems where the solution takes exponentially many bits to describe and then verifying the solution is exponentially so this is just some gigantic class of problems ok the", "start_timestamp": "00:35:30", 
"end_timestamp": "00:36:05", "start_second": 2130, "end_second": 2165, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=2130s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "solutions themselves are gigantic and then verifying them takes a gigantic amount of time ok so this is a huge class of problems so it's actually possible that this gigantic class of problems every single problem in this class could be again simulated by an infinite family of circuits one for every input length with the following property so this is zooming in ok so every circuit looks like the following very very simple object okay we've got N squared neurons here in some layer and we've got n inputs so this is the nth circuit so", "start_timestamp": "00:36:05", "end_timestamp": "00:36:44", "start_second": 2165, "end_second": 2204, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=2165s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "it's going to take n inputs and output a bit and then those n squared neurons they're gonna give me a 0 or 1 and then one more neuron takes some function of those and outputs 0 or 1 ok so each neuron is just computing some linear form of the input so there are some real weights being multiplied with the 0-1 values and we're checking whether that exceeds some particular threshold value let's say T and if it does we fire and if it doesn't we don't fire just this very", "start_timestamp": "00:36:44", "end_timestamp": "00:37:14", "start_second": 2204, "end_second": 2234, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=2204s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id":
"7uplycLvraw", "text": "simple activation function okay so these are neural networks with one hidden layer small neural networks with one hidden layer this should be a very very weak class ok in fact if the weights are really small the weights are like minus 1 1 we do have very strong lower bounds but if we allow the weights to be anything we want this is still possible it's a gigantic gap in our knowledge so we don't yet understand very simple neural networks okay just even things like just one hidden layer just one hidden layer with", "start_timestamp": "00:37:14", "end_timestamp": "00:37:51", "start_second": 2234, "end_second": 2271, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=2234s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "a simple activation function so I like to just give an analogy here of an alien brain versus infinitely many fly brains okay so think of this not as a nondeterministic exponential time thing but as you've got some super advanced you know alien brain think of it as like something a hundred times that okay some brain solving problems unfathomable to you okay okay but it's one brain it's just one of them okay and you want to know if there are any mini little fly brains you know just two layers of depth you", "start_timestamp": "00:37:51", "end_timestamp": "00:38:35", "start_second": 2271, "end_second": 2315, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=2271s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "know infinitely many of them though which will also solve the problem matching no matter what the big alien brain does okay so the only advantage you have is that there are infinitely many fly brains and there's a different one for each input length is
that enough almost certainly not but it's the infinitely many that trips us up alright okay so now I want to talk in the remaining time about how circuit analysis algorithms can actually imply circuit limitations so circuit analysis algorithms can imply", "start_timestamp": "00:38:35", "end_timestamp": "00:39:13", "start_second": 2315, "end_second": 2353, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=2315s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "circuit lower bounds in particular situations and in a particular sense this was well known a long time ago there's a theorem of Karp, Lipton, and Meyer from 1980 which says suppose we had extremely efficient circuit SAT algorithms like basically perfect circuit SAT algorithms okay then there are problems solvable by algorithms in exponential time say 2 to the n time that cannot be solved by a polynomial size circuit family so this would resolve this EXP in P/poly question we were talking about okay so just in notation if P equals", "start_timestamp": "00:39:13", "end_timestamp": "00:39:47", "start_second": 2353, "end_second": 2387, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=2353s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "NP say circuit SAT is in P that is there's a good circuit SAT algorithm then we would have that this class EXP is not in P/poly okay this is a very interesting implication very interesting that an algorithm analysis okay the P equals NP part is probably not gonna be there but that some algorithm analysis can actually prove a lower bound can actually lead to a lower bound at all okay but the thing is we don't believe that hypothesis is true so it's kind of like saying if pigs can fly then pigs can wink I mean like we don't
believe", "start_timestamp": "00:39:47", "end_timestamp": "00:40:23", "start_second": 2387, "end_second": 2423, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=2387s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "the hypothesis but we expect the conclusion to happen okay so it's you know it could be useful but it seems to be of limited utility it seems like we have to assume you've got like this amazing circuit analysis algorithm let's go study that I mean okay maybe EXP not in P/poly happens but okay this is you know by far the more interesting of the two to me okay so what we want to do is you know take the wings off this thing and you know bring it back down to earth and like maybe", "start_timestamp": "00:40:23", "end_timestamp": "00:40:55", "start_second": 2423, "end_second": 2455, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=2423s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "maybe make it you know something that is actually possible and then you know we can prove something that we expect anyway yeah so a circuit lower bound that's what we would like to do alright and so we were able to do this in particularly restricted situations so this is work of myself and many others who are working on this too and at a very high level what we can say is that a slightly faster algorithm for the circuit SAT problem just slightly faster than exhaustive search already implies", "start_timestamp": "00:40:55", "end_timestamp": "00:41:32", "start_second": 2455, "end_second": 2492, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=2455s", "title": "Thinking Algorithmically About Impossibility", "thumbnail":
"https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "lower bounds against circuits solving problems in this gigantic class non-deterministic exponential time this class where like it's possible that even neural nets with one hidden layer could just solve the class so in pictures what we're saying is that suppose you know I'll give you just some arbitrary circuit and there's some way to inspect this circuit and you can find an input which makes it print one whenever it exists yeah and the one extra thing I want to say is instead of taking say the exponential", "start_timestamp": "00:41:32", "end_timestamp": "00:42:05", "start_second": 2492, "end_second": 2525, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=2492s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "cost of 2 to the n time as you would have exhaustively you take 2 to the n over n to the ten time okay and you do this for all polynomial size circuits so let's just say you do this for every circuit in particular if you shave off some polynomial n to the tenth that is enough okay for our purposes suppose you can do that just a tiny sliver off exhaustive search then you'll be able to prove for the same circuit class that it cannot compute NEXP so NEXP is not in that class okay so what's the intuition behind", "start_timestamp": "00:42:05", "end_timestamp": "00:42:44", "start_second": 2525, "end_second": 2564, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=2525s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "this there's two parts to the basic intuition the first is that faster circuit SAT algorithms uncover a weakness in what circuits can do so in particular it says at a high level that
small circuits can't obfuscate the all zeros function they can't hide it from you so suppose this were a black box okay and you can't actually peer inside the circuit and look at what's going on the only thing you can do is take inputs stick them in and get outputs that's the only thing you could do and now you want to solve this problem you", "start_timestamp": "00:42:44", "end_timestamp": "00:43:20", "start_second": 2564, "end_second": 2600, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=2564s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "want to know if there's an input which makes the circuit print one right then you can prove that you would need to call this black box at least 2 to the n times to know for sure because there are ways to trip up any particular strategy for querying this black box getting inputs and outputs without looking in it so you've got to take 2 to the n time so if we can solve the problem in 2 to the n over n to the ten time then there is some fundamental way in which we are opening up the guts of this circuit and getting some advantage over a black box
input which makes the circuit print one it does it faster than to the end so there's some nice algorithm that can", "start_timestamp": "00:43:53", "end_timestamp": "00:44:29", "start_second": 2633, "end_second": 2669, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=2633s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "efficiently tell us when a given circuit computes the all zeroes function so so given these two intuitions you can think of this problem in general as some kind of gain between ours and in circuits there's a circuit set problem that algorithms would like to solve and there are circuits that are you know inherently devious and try you know trying to fool any given algorithm and wired in some way to keep the algorithms from telling whether it's and when we can be exhaustive search we are winning the game the algorithm is winning the", "start_timestamp": "00:44:29", "end_timestamp": "00:45:02", "start_second": 2669, "end_second": 2702, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=2669s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "game and no matter what circuits given it can somehow drill down through the back black box and and get the answer okay so with the algorithm winning the game we running a lesson to the end time we hope to say circuits or a week less than to the entire albums are strong and somehow we're going to turn this into some function in non interesting exponential time that doesn't have your smallest area so now the circuits algorithm is showing that circuits are weak and this theorem is more less making that formal ok so we can actually", "start_timestamp": "00:45:02", "end_timestamp": "00:45:36", "start_second": 2702, "end_second": 2736, "url": 
"https://www.youtube.com/watch?v=7uplycLvraw&t=2702s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "apply this kind of theorem to prove lower bounds for restricted classes of circuits a class fancy C ok I won't go into what the definitions of these things are but the way the proofs lay out is exactly as you might expect we show that faster circuit SAT algorithms for particular classes can imply circuit lower bounds for that particular class ok so just some generic kind of theorem if C-circuit-SAT on circuits with n inputs can be solved in 2 to the n over n to the ten steps then this class NEXP or", "start_timestamp": "00:45:36", "end_timestamp": "00:46:10", "start_second": 2736, "end_second": 2770, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=2736s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "some function in this big class NEXP doesn't have circuits of that kind whatever kind it is that we proved the circuit SAT algorithm for okay and the second step is to just design the algorithm now this is just an unconditional sort of thing so we can prove for many interesting circuit classes ones where we knew no lower bounds whatsoever that we can solve the SAT problem faster than exhaustive search and then plugging that into the above connection you get lower bounds for them so you can improve over exhaustive search in some", "start_timestamp": "00:46:10", "end_timestamp": "00:46:39", "start_second": 2770, "end_second": 2799, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=2770s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "cases and so far this is kind of the only way we know how to prove circuit lower
bounds is through the design of the right kind of SAT solver all right so I'd like to conclude with the following challenge so how do we become smarter about computation to outsiders it may seem that we already know maybe too much about computation like computers are getting smarter and smarter than us all the time yet from the theory level we actually don't know precisely how powerful they are our understanding of this is really coarse so the dirty secret is we", "start_timestamp": "00:46:39", "end_timestamp": "00:47:22", "start_second": 2799, "end_second": 2842, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=2799s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "still don't know too much about the limits of computers so how can algorithms help prove lower bounds so this is just in general a direction I would like people to think more about what kinds of algorithms could help prove lower bounds because if we can set things up in this way then the lower bound problem does not become so intimidating anymore it just becomes the problem of designing the right kind of algorithm something that we have done as computer scientists for many years and then how can lower bounds help design", "start_timestamp": "00:47:22", "end_timestamp": "00:47:53", "start_second": 2842, "end_second": 2873, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=2842s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "algorithms I didn't have time at all here to talk about how connections of that form go but there are many such connections namely in this thing I briefly mentioned about how lower bounds can imply derandomization removing the randomness from algorithms and making them fully deterministic there are many more connections to be found I'm sure
so just earlier this year my student Brynmor Chapman and I showed that circuit lower bounds can in a particular sense be equivalent to a particular", "start_timestamp": "00:47:53", "end_timestamp": "00:48:24", "start_second": 2873, "end_second": 2904, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=2873s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "design problem so you prove a circuit lower bound if and only if you design test data for testing whether a given circuit computes a particular function okay so trying to minimize the test data you need to test whether a circuit computes a function is actually equivalent to proving a lower bound so this is very nice in the sense that it gives a very constructive way of thinking about how you prove an impossibility result a very algorithmic way of how you do it and so in general I think we will make serious", "start_timestamp": "00:48:24", "end_timestamp": "00:48:58", "start_second": 2904, "end_second": 2938, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=2904s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "progress by studying algorithms and complexity as a whole and not as competing fields for anything we're really doing I really think we're doing a lot of the same thing more than we think okay that's all I have to say thank you sorry can you repeat it again right here oh so like the work of Prasad you know yeah I mean so I think what's going on there is they're explaining the fact at least to me that proving something is NP-hard is actually an algorithms problem and so it's like from my understanding you", "start_timestamp": "00:48:58", "end_timestamp": "00:50:26", "start_second": 2938, "end_second": 3026, "url":
"https://www.youtube.com/watch?v=7uplycLvraw&t=2938s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "know either something can be done or it can't be done you get and you get a hardness reduction so it's so I mean in one case you get an algorithm or you get a hard introduction I mean in both cases you get some kind of algorithm but but I don't know I mean or any of them unconditional I don't know I need a lower bound you get unconditional yeah yeah I think it's a different kind of instance it's very interesting kind of instance but yeah I think I think it's different from what I'm trying to say but that's I didn't think I thought", "start_timestamp": "00:50:26", "end_timestamp": "00:51:26", "start_second": 3026, "end_second": 3086, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=3026s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "you still need unique games are or P not equal NP or something like this so I mean I would like to say something stronger than this so it's not just algorithm or hardness reduction yeah yes yes yes yeah yeah yeah yeah yeah so that's what makes the problem so immensely difficult is that it could just be anything in you know in and the hardest case is where each one is sort of incompressible by itself yeah if they're uniform if there's an algorithm generating those then you know this kind of dichotomy I was talking about becomes", "start_timestamp": "00:51:26", "end_timestamp": "00:52:33", "start_second": 3086, "end_second": 3153, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=3086s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "a completely different picture it was a very 
different picture I mean there's still interesting questions I mean the open questions around it but the picture is very different see I thought I was very careful not to say that I'm trying to prove P not equal NP hmm oh so yeah I mean even at the undergrad automata theory level like when I teach the Myhill-Nerode theorem I say well this means that for every language either it's got a finite automaton it's regular or there's this weird infinite object called a distinguishing set and", "start_timestamp": "00:52:33", "end_timestamp": "00:53:41", "start_second": 3153, "end_second": 3221, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=3153s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "7uplycLvraw", "text": "this proves that there's no finite automaton and so in that case it is some sort of constructive way in which you're showing that something is not regular something does not have a DFA so yeah it does come up elsewhere I just yeah I only have so much time yeah yeah there's a whole community of people that would disagree with that I mean SAT solving is quite an enterprise actually yeah yeah oh well I mean there are things like if this so-called exponential time hypothesis is false so then for example counting independent sets if it's in sub", "start_timestamp": "00:53:41", "end_timestamp": "00:54:48", "start_second": 3221, "end_second": 3288, "url": "https://www.youtube.com/watch?v=7uplycLvraw&t=3221s", "title": "Thinking Algorithmically About Impossibility", "thumbnail": "https://i.ytimg.com/vi/7uplycLvraw/maxresdefault.jpg"} {"video_id": "PuStNtldiJY", "text": "[Music] regarding the war between human and artificial intelligence our deep learning systems are beginning to surpass humans [Music] Jürgen Schmidhuber the Swiss AI Lab IDSIA on the 9th of November 1989 I saw the Berlin Wall fall on TV if you ask me when did you ever have tears in your eyes this is the first event
that comes to my mind when I was a boy I wanted to maximize my impact on the world and I was smart enough to realize that I'm not very smart and so it became clear to me that I have to build a", "start_timestamp": "00:00:00", "end_timestamp": "00:01:14", "start_second": 0, "end_second": 74, "url": "https://www.youtube.com/watch?v=PuStNtldiJY&t=0s", "title": "How AI Is Beginning To Surpass Humans | Ju\u0308rgen Schmidhuber", "thumbnail": "https://i.ytimg.com/vi/PuStNtldiJY/maxresdefault.jpg"} {"video_id": "PuStNtldiJY", "text": "machine an artificial intelligence that learns to become much smarter than I could ever hope to be such that it can learn to solve all the problems that I cannot solve myself such that I can retire and my first publication on that dates back 30 years today 1987 my diploma thesis was about solving the grand problem of AI not just building something that learns a little bit here and a little bit over there but also learns to improve the learning algorithm itself and it learns the way it learns recursively and I'm still", "start_timestamp": "00:01:14", "end_timestamp": "00:02:01", "start_second": 74, "end_second": 121, "url": "https://www.youtube.com/watch?v=PuStNtldiJY&t=74s", "title": "How AI Is Beginning To Surpass Humans | Ju\u0308rgen Schmidhuber", "thumbnail": "https://i.ytimg.com/vi/PuStNtldiJY/maxresdefault.jpg"} {"video_id": "PuStNtldiJY", "text": "working on the same thing and I'm still saying the same thing and the only difference is that more people are listening because on the way to that goal my team has developed learning methods which are now on 3,000 million smartphones what you see behind me are the logos of the five most valuable companies of the Western world Apple Google Microsoft Amazon Facebook and all of them claim that AI is central to what they are doing and all of them are heavily using the deep learning methods as they are called now that we have developed in our", "start_timestamp": "00:02:01",
"end_timestamp": "00:02:55", "start_second": 121, "end_second": 175, "url": "https://www.youtube.com/watch?v=PuStNtldiJY&t=121s", "title": "How AI Is Beginning To Surpass Humans | Ju\u0308rgen Schmidhuber", "thumbnail": "https://i.ytimg.com/vi/PuStNtldiJY/maxresdefault.jpg"} {"video_id": "PuStNtldiJY", "text": "little labs in Munich and in Switzerland since the early 90s in particular something called the long short-term memory has anybody in this room ever heard of the long short-term memory ends up fel SEM has anybody in this room never heard of the LS TM and okay I see we have a third group in this room who didn't understand the question the lsdm is an artificial neural network which has recurrent connections and it's a little bit inspired by the human brain in your brain you've got about 100 billion little processors and they are", "start_timestamp": "00:02:55", "end_timestamp": "00:03:57", "start_second": 175, "end_second": 237, "url": "https://www.youtube.com/watch?v=PuStNtldiJY&t=175s", "title": "How AI Is Beginning To Surpass Humans | Ju\u0308rgen Schmidhuber", "thumbnail": "https://i.ytimg.com/vi/PuStNtldiJY/maxresdefault.jpg"} {"video_id": "PuStNtldiJY", "text": "called neurons and each of them is connected to maybe 10,000 other neurons on average and some of these neurons are infant neurons where video is coming in through the cameras and audio is coming into the microphones and tactile information is going in through the pain sensors and some of the neurons are output neurons and they move the finger muscles and speech mushrooms and in between are these hidden neurons we're thinking is taking place and they all connected and each connection has a strength which says how much does this", "start_timestamp": "00:03:57", "end_timestamp": "00:04:34", "start_second": 237, "end_second": 274, "url": "https://www.youtube.com/watch?v=PuStNtldiJY&t=237s", "title": "How AI Is Beginning To Surpass Humans | Ju\u0308rgen Schmidhuber", "thumbnail": 
"https://i.ytimg.com/vi/PuStNtldiJY/maxresdefault.jpg"} {"video_id": "PuStNtldiJY", "text": "neuron over here influence this neuron over here at the next time step and in the beginning all these connections are random and the network knows nothing but then over time it learns to improve itself and it learns to do so of all kinds of interesting problems such as driving a car just from examples from training examples and you may not know the lsdm but all of you have it in your pockets on your smartphone because whenever you take out your smartphone and you do the speech recognition and you say okay guru show", "start_timestamp": "00:04:34", "end_timestamp": "00:05:13", "start_second": 274, "end_second": 313, "url": "https://www.youtube.com/watch?v=PuStNtldiJY&t=274s", "title": "How AI Is Beginning To Surpass Humans | Ju\u0308rgen Schmidhuber", "thumbnail": "https://i.ytimg.com/vi/PuStNtldiJY/maxresdefault.jpg"} {"video_id": "PuStNtldiJY", "text": "me the fastest way to the station then it's recognizing your speech and what's happening there's an lsdm in there which gets about 100 and puts per second from the microphone and they are streaming in memories of past inputs are circling around these these recurrent connections and from many training examples it has learned to adjust these internal connections such that it can recognize what you're saying that's now on 2 billion Android phones it's much better than what Google had before 2015 here is the basic LS TM cell I don't have time to", "start_timestamp": "00:05:13", "end_timestamp": "00:05:53", "start_second": 313, "end_second": 353, "url": "https://www.youtube.com/watch?v=PuStNtldiJY&t=313s", "title": "How AI Is Beginning To Surpass Humans | Ju\u0308rgen Schmidhuber", "thumbnail": "https://i.ytimg.com/vi/PuStNtldiJY/maxresdefault.jpg"} {"video_id": "PuStNtldiJY", "text": "explain it but here are also the names of the brilliant students in my lab who made that possible how are the big companies using it well speech 
recognition is just one of many examples if you're on Facebook is anybody on Facebook ok are you sometimes using the translate function where you can translate text from other people yes again whenever you do that you are waking up a long short-term memory an LSTM which has learned from scratch to translate sentences into equivalent sentences in different languages and", "start_timestamp": "00:05:53", "end_timestamp": "00:06:30", "start_second": 353, "end_second": 390, "url": "https://www.youtube.com/watch?v=PuStNtldiJY&t=353s", "title": "How AI Is Beginning To Surpass Humans | Ju\u0308rgen Schmidhuber", "thumbnail": "https://i.ytimg.com/vi/PuStNtldiJY/maxresdefault.jpg"} {"video_id": "PuStNtldiJY", "text": "Facebook is using that a system which uses LSTM for about 4 billion translations per day that's about 50,000 per second and another 50,000 in the next second and another 50,000 if you have an Amazon Alexa it's talking back to you it sounds like a female voice it's not a recording it's an LSTM which has learned to sound like a female voice to see how much LSTM is permeating the modern world just look at what all these Google Data Centers are doing now almost 30% 29% as of 2016 of the awesome computational power for", "start_timestamp": "00:06:30", "end_timestamp": "00:07:17", "start_second": 390, "end_second": 437, "url": "https://www.youtube.com/watch?v=PuStNtldiJY&t=390s", "title": "How AI Is Beginning To Surpass Humans | Ju\u0308rgen Schmidhuber", "thumbnail": "https://i.ytimg.com/vi/PuStNtldiJY/maxresdefault.jpg"} {"video_id": "PuStNtldiJY", "text": "inference in all these Google Data Centers was used for LSTM the big Asian companies such as Samsung are also using it and just a couple of months ago Samsung became the most profitable company in the world for the first time what can be learned from that if you want your company to be among the most profitable ones better use LSTM now we started this type of research a long
time ago in the early 90s and by the way you are a large audience by my standards but back then few people were interested in artificial", "start_timestamp": "00:07:17", "end_timestamp": "00:08:00", "start_second": 437, "end_second": 480, "url": "https://www.youtube.com/watch?v=PuStNtldiJY&t=437s", "title": "How AI Is Beginning To Surpass Humans | Ju\u0308rgen Schmidhuber", "thumbnail": "https://i.ytimg.com/vi/PuStNtldiJY/maxresdefault.jpg"} {"video_id": "PuStNtldiJY", "text": "intelligence and I remember I gave a talk and there was just one single person in the audience a young lady I said young lady it's very embarrassing but apparently today I'm going to give this talk just to you and she said ok but please hurry I am the next speaker [Applause] since then we have greatly profited from the fact that every five years computers are getting ten times cheaper that's an old trend much older than Moore's law and it goes back at least to 1941 when in Berlin Konrad Zuse built the first working", "start_timestamp": "00:08:00", "end_timestamp": "00:08:50", "start_second": 480, "end_second": 530, "url": "https://www.youtube.com/watch?v=PuStNtldiJY&t=480s", "title": "How AI Is Beginning To Surpass Humans | Ju\u0308rgen Schmidhuber", "thumbnail": "https://i.ytimg.com/vi/PuStNtldiJY/maxresdefault.jpg"} {"video_id": "PuStNtldiJY", "text": "program-controlled computer and 30 years later for the same price we could do 1 million times as many operations per second because he could do only one operation per second roughly and now it's 75 years later and we can do roughly a million billion instructions per second for the same price and it's not clear that this trend is going to break soon because the physical limits are much further out there if this trend doesn't break then within the near future for the first time we are going to have little computational devices that", "start_timestamp": "00:08:50", "end_timestamp": "00:09:26", "start_second": 530,
"end_second": 566, "url": "https://www.youtube.com/watch?v=PuStNtldiJY&t=530s", "title": "How AI Is Beginning To Surpass Humans | Ju\u0308rgen Schmidhuber", "thumbnail": "https://i.ytimg.com/vi/PuStNtldiJY/maxresdefault.jpg"} {"video_id": "PuStNtldiJY", "text": "can compute as much as a human brain we don't have that yet but soon it will be possible if that trend doesn't break them it will take only 50 more years such that for the same price you can compute as much as all 10 billion brains on the planet and there will not be only one little device like that but many many many everything is going to change by 2011 computers were fast enough to allow us for the first time to have superhuman performance at least and limited domains through these deep learning networks back then that", "start_timestamp": "00:09:26", "end_timestamp": "00:10:04", "start_second": 566, "end_second": 604, "url": "https://www.youtube.com/watch?v=PuStNtldiJY&t=566s", "title": "How AI Is Beginning To Surpass Humans | Ju\u0308rgen Schmidhuber", "thumbnail": "https://i.ytimg.com/vi/PuStNtldiJY/maxresdefault.jpg"} {"video_id": "PuStNtldiJY", "text": "was 2011 so computers were about 20 times more expensive than today today we can do 20 times as much for the same price and and that was already good enough to do superhuman traffic sign recognition which is important for self-driving cars and ten years ago five years ago when computers were about ten times more expensive than today they were already fast enough to make us win these medical imaging competitions what you see behind me is a slice through the female breast tissue and our network which started as", "start_timestamp": "00:10:04", "end_timestamp": "00:10:42", "start_second": 604, "end_second": 642, "url": "https://www.youtube.com/watch?v=PuStNtldiJY&t=604s", "title": "How AI Is Beginning To Surpass Humans | Ju\u0308rgen Schmidhuber", "thumbnail": "https://i.ytimg.com/vi/PuStNtldiJY/maxresdefault.jpg"} {"video_id": "PuStNtldiJY", "text": "a 
stupid network had no idea of anything just learned to recognize cancer by imitating a human doctor a histologist and outcompeting all the other competitors back then soon all of healthcare soon all of medical diagnosis is going to be superhuman it is going to be so good that it's going to be mandatory at some point we can also use LSTM and things like that to control robots but we don't only have systems that slavishly imitate human teachers no we also have systems that invent their own goals we call that artificial curiosity", "start_timestamp": "00:10:42", "end_timestamp": "00:11:24", "start_second": 642, "end_second": 684, "url": "https://www.youtube.com/watch?v=PuStNtldiJY&t=642s", "title": "How AI Is Beginning To Surpass Humans | Ju\u0308rgen Schmidhuber", "thumbnail": "https://i.ytimg.com/vi/PuStNtldiJY/maxresdefault.jpg"} {"video_id": "PuStNtldiJY", "text": "artificial creativity systems that like little babies learn to invent their own experiments to figure out how the world functions and what you can do in it and systems that set their own goals are required to become smart because if they don't have the freedom to do that they are not going to become more and more general problem solvers solving one new self-invented problem after another on the other hand it's hard to predict what they are going to do but you can steer them in the not-so-distant future I guess we will for the first time have AI", "start_timestamp": "00:11:24", "end_timestamp": "00:12:03", "start_second": 684, "end_second": 723, "url": "https://www.youtube.com/watch?v=PuStNtldiJY&t=684s", "title": "How AI Is Beginning To Surpass Humans | Ju\u0308rgen Schmidhuber", "thumbnail": "https://i.ytimg.com/vi/PuStNtldiJY/maxresdefault.jpg"} {"video_id": "PuStNtldiJY", "text": "on the level of small animals we don't have that yet but it's not going to take so many years once we have that it may require just a few additional decades to reach human level intelligence why because
technological evolution is maybe a million times faster than biological evolution because the dead ends are weeded out much faster and it took 3.5 billion years to go from zero from nothing to a monkey but just a few tens of millions of years afterwards to go from the monkey to human level intelligence we have a company that is", "start_timestamp": "00:12:03", "end_timestamp": "00:12:46", "start_second": 723, "end_second": 766, "url": "https://www.youtube.com/watch?v=PuStNtldiJY&t=723s", "title": "How AI Is Beginning To Surpass Humans | Ju\u0308rgen Schmidhuber", "thumbnail": "https://i.ytimg.com/vi/PuStNtldiJY/maxresdefault.jpg"} {"video_id": "PuStNtldiJY", "text": "trying to make that a reality it's called NNAISENSE pronounced like nascence in English but spelled in a different way and this company is trying to build the first general-purpose AI that really deserves the name many people think there is this insurmountable wall between today's special-purpose AIs which do for example the speech recognition etc and translation and the universal or general-purpose AI or intelligence of humans but mr.
Gorbachev we are going to tear down this wall and there is no doubt", "start_timestamp": "00:12:46", "end_timestamp": "00:13:36", "start_second": 766, "end_second": 816, "url": "https://www.youtube.com/watch?v=PuStNtldiJY&t=766s", "title": "How AI Is Beginning To Surpass Humans | Ju\u0308rgen Schmidhuber", "thumbnail": "https://i.ytimg.com/vi/PuStNtldiJY/maxresdefault.jpg"} {"video_id": "PuStNtldiJY", "text": "in my mind that within not so many decades for the first time we are going to have superhuman decision-makers in many many domains super-smart AIs which are as I told you not just going to be slaves of humans they are going to do their own thing in many ways and they are going to realize what we have realized a long time ago which is that most resources are not in our thin film of biosphere no they are out there in space so of course they are going to expand out there in space where most of the resources are and through billions", "start_timestamp": "00:13:36", "end_timestamp": "00:14:18", "start_second": 816, "end_second": 858, "url": "https://www.youtube.com/watch?v=PuStNtldiJY&t=816s", "title": "How AI Is Beginning To Surpass Humans | Ju\u0308rgen Schmidhuber", "thumbnail": "https://i.ytimg.com/vi/PuStNtldiJY/maxresdefault.jpg"} {"video_id": "PuStNtldiJY", "text": "of self-replicating robot factories they are going to colonize the solar system and within a few hundred thousand years they are going to cover the entire galaxy with senders and receivers such that they can travel the way they are traveling in my lab today which is by radio from sender to receiver now nobody knows anything about the details of how all of that is going to happen but it's the only logical thing because you still need resources in terms of matter and energy so the only way is to move outwards what's happening now is much", "start_timestamp": "00:14:18", "end_timestamp": "00:14:58", "start_second": 858, "end_second": 898, "url":
"https://www.youtube.com/watch?v=PuStNtldiJY&t=858s", "title": "How AI Is Beginning To Surpass Humans | Ju\u0308rgen Schmidhuber", "thumbnail": "https://i.ytimg.com/vi/PuStNtldiJY/maxresdefault.jpg"} {"video_id": "PuStNtldiJY", "text": "more than another Industrial Revolution this is something that transcends humankind and biology itself a new type of life is going to expand from this little planet in a way where humans cannot follow well that's okay we don't have to believe we are going to stay the crown of creation we don't believe we have to stay the crown of creation but you still can see beauty in being part of something of some grander scheme that goes that leads the universe from less complexity to higher complexity it's a privilege to live at a time when we can", "start_timestamp": "00:14:58", "end_timestamp": "00:15:54", "start_second": 898, "end_second": 954, "url": "https://www.youtube.com/watch?v=PuStNtldiJY&t=898s", "title": "How AI Is Beginning To Surpass Humans | Ju\u0308rgen Schmidhuber", "thumbnail": "https://i.ytimg.com/vi/PuStNtldiJY/maxresdefault.jpg"} {"video_id": "-9evrZnBorM", "text": "hello everyone today we're looking at bird pre-training of deep bi-directional transformers for language understanding by Jacob Devlin and then why Chung Kenton Lee Cristina Tata Nova these are people from Google AI language so you're about to see the most hyped model currently so basically Bert is a model that takes as an input language sub token sequences and outputs various things so it can it can be made to do various things almost any NLP tasks with basically little training because the Bert model comes pre trained on a very", "start_timestamp": "00:00:00", "end_timestamp": "00:00:46", "start_second": 0, "end_second": 46, "url": "https://www.youtube.com/watch?v=-9evrZnBorM&t=0s", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/-9evrZnBorM/maxresdefault.jpg"} {"video_id": 
"-9evrZnBorM", "text": "large corpus and we're gonna see how that's done all right so the paper introduces basically the kind of current state of the art of language models and they say okay what they want to do new is they want to do bi-directional training I'm going to go down here and see their comparison right so here they compare three models and these are representative of three types of models so first here is for example the the open AI transformer so this is a this is the classic or one of the classic transformer models we've talked", "start_timestamp": "00:00:46", "end_timestamp": "00:01:35", "start_second": 46, "end_second": 95, "url": "https://www.youtube.com/watch?v=-9evrZnBorM&t=46s", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/-9evrZnBorM/maxresdefault.jpg"} {"video_id": "-9evrZnBorM", "text": "about transformers before and the attention is all you need video so what a transformer does is it uses attention and if for those who forgot what attention is if you have like a token sequence ABCDE then a classic model to use that would be an LST M so the other stem would go here it would like have a vector representation a hidden state and then it would take this a it would take this hidden state and compute a new hidden state and then it will go on and take be and incorporate this into the hidden state the hidden state kind of always", "start_timestamp": "00:01:35", "end_timestamp": "00:02:17", "start_second": 95, "end_second": 137, "url": "https://www.youtube.com/watch?v=-9evrZnBorM&t=95s", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/-9evrZnBorM/maxresdefault.jpg"} {"video_id": "-9evrZnBorM", "text": "stays the same size but the the recurrent model will update the hidden state as it goes over the input sequence so this is one way of dealing with language but people have kind of done 
another way and that's the attention based mechanism where basically for each of these you compute a vector independently of each other so each one has a vector representation and then you have a vector representation of what you want which is called an attention head and you can have multiple of these but in the", "start_timestamp": "00:02:17", "end_timestamp": "00:03:02", "start_second": 137, "end_second": 182, "url": "https://www.youtube.com/watch?v=-9evrZnBorM&t=137s", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/-9evrZnBorM/maxresdefault.jpg"} {"video_id": "-9evrZnBorM", "text": "simplest case let's just say we are looking for the subject in this sentence ABCDE is a sentence and one of the words is the subject of the sentence then we could have a vector here that's called a query vector so these are called values V and this is called a query Q and then these vectors are the same size and you're gonna compute the inner product with each of these so the inner product you wanna do okay I already screwed this up you're actually computing two vectors", "start_timestamp": "00:03:02", "end_timestamp": "00:03:45", "start_second": 182, "end_second": 225, "url": "https://www.youtube.com/watch?v=-9evrZnBorM&t=182s", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/-9evrZnBorM/maxresdefault.jpg"} {"video_id": "-9evrZnBorM", "text": "for each token so honestly this is not too important for this step one is the key and one is the value all right this is called the value and this is called the key and you have your query Q and you compute the inner products actually with the key sorry values aren't too important for what I want to demonstrate but you compute key with query all right and that gives you basically for each key it's gonna give you an output
and so for this ABCDE you're gonna have like this much inner product this much inner", "start_timestamp": "00:03:45", "end_timestamp": "00:04:31", "start_second": 225, "end_second": 271, "url": "https://www.youtube.com/watch?v=-9evrZnBorM&t=225s", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/-9evrZnBorM/maxresdefault.jpg"} {"video_id": "-9evrZnBorM", "text": "product this much this much this much inner product so after maybe the softmax you have like a nice distribution and then you can say aha here this is the biggest alignment of the particular key with my query and that query is which one is the subject of course you're gonna train all these query and key producing procedures so this is an attention mechanism and if you then want more that's where the value comes in if your query is not only which one is the subject but it's actually generically read that", "start_timestamp": "00:04:31", "end_timestamp": "00:05:07", "start_second": 271, "end_second": 307, "url": "https://www.youtube.com/watch?v=-9evrZnBorM&t=271s", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/-9evrZnBorM/maxresdefault.jpg"} {"video_id": "-9evrZnBorM", "text": "okay I'm gonna extract some information from some token that I'm going to use later then you would actually take B say B is the best one okay I'm gonna take the value of B you're basically gonna get a weighted average of the values according to these values here right so this is very shortly what attention is if you want a lengthy explanation go to the attention is all you need video right so OpenAI GPT uses attention here and it's a left to right transformer that's what it says here and what that means is", "start_timestamp": "00:05:07", "end_timestamp": "00:05:47",
"start_second": 307, "end_second": 347, "url": "https://www.youtube.com/watch?v=-9evrZnBorM&t=307s", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/-9evrZnBorM/maxresdefault.jpg"} {"video_id": "-9evrZnBorM", "text": "it goes also step-by-step but in each step it uses attention so here are the input tokens and as you can see it goes in this direction so each one of these and these are multiple layers of attention so you can also layer these of course so each one of the attention intermediate steps can only attend to whatever is to the left of it right you can see this here so it goes step by step and it goes left to right basically so it can kind of take the sequence in as a left to right input basically what that means is whenever", "start_timestamp": "00:05:47", "end_timestamp": "00:06:27", "start_second": 347, "end_second": 387, "url": "https://www.youtube.com/watch?v=-9evrZnBorM&t=347s", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/-9evrZnBorM/maxresdefault.jpg"} {"video_id": "-9evrZnBorM", "text": "you interpret a particular token your context is only to the left of that token you don't know what's coming it's like when you read a sentence from left to right but then as humans unconsciously we probably go and at the end of the sentence kind of make sense of the thing as a whole but here the model is forced to make sense of the thing only from whatever is to the left of it so that's a basic limitation of these left to right models then there is another approach which is called ELMo", "start_timestamp": "00:06:27", "end_timestamp": "00:07:04", "start_second": 387, "end_second": 424, "url": "https://www.youtube.com/watch?v=-9evrZnBorM&t=387s", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "thumbnail":
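A minimal sketch of this left-to-right restriction: in a left-to-right transformer the attention weights are masked so that each position can only attend to positions to its left. The sizes here are made up for illustration:

```python
import numpy as np

def causal_mask(n):
    """Mask for a left-to-right transformer: position i may only
    attend to positions j <= i (the upper triangle is blocked)."""
    return np.tril(np.ones((n, n), dtype=bool))

def masked_softmax(scores, mask):
    # blocked positions get -inf score, hence exactly zero weight
    scores = np.where(mask, scores, -np.inf)
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

scores = np.zeros((4, 4))                      # uniform raw scores
weights = masked_softmax(scores, causal_mask(4))
# row i spreads its weight only over tokens 0..i
```

With uniform scores, token 0 attends only to itself while the last token attends evenly to everything before it.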
"https://i.ytimg.com/vi/-9evrZnBorM/maxresdefault.jpg"} {"video_id": "-9evrZnBorM", "text": "which has been popular recently as a substitute for word vectors so if you know word vectors word vectors are basically the kind of first stage in most language processing tasks where for each word say the cat sat something for each word you have a big giant table and for each word you associate a vector of fixed dimension right so you place every word in a vector space and these vectors you pre-compute with something like word2vec or GloVe and that gives you a nice way to basically deal with these words in a canonical way", "start_timestamp": "00:07:04", "end_timestamp": "00:07:52", "start_second": 424, "end_second": 472, "url": "https://www.youtube.com/watch?v=-9evrZnBorM&t=424s", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/-9evrZnBorM/maxresdefault.jpg"} {"video_id": "-9evrZnBorM", "text": "and you can pre-train the word vectors that's all really nice but people have realized okay words have kind of multiple meanings and words can kind of slightly change meaning depending on the words around them and so on so what ELMo does is ELMo uses two LSTMs one LSTM goes in this direction one LSTM goes in this direction and basically a single LSTM as we saw before takes in the input sequence one by one so here e1 and e2 up to en it produces hidden states at each step it produces a hidden state that", "start_timestamp": "00:07:52", "end_timestamp": "00:08:30", "start_second": 472, "end_second": 510, "url": "https://www.youtube.com/watch?v=-9evrZnBorM&t=472s", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/-9evrZnBorM/maxresdefault.jpg"} {"video_id": "-9evrZnBorM", "text": "is a result of the previous hidden state and the current token and then what it says is okay now these hidden states here
basically these are now the embeddings of the token here en and so on right these are the embeddings so the word vectors so to say are no longer just one vector per word so they're not in isolation anymore but basically you need the entire sequence to compute the word vectors as a result of this LSTM and this is more powerful because it can give individual words basically each word", "start_timestamp": "00:08:30", "end_timestamp": "00:09:16", "start_second": 510, "end_second": 556, "url": "https://www.youtube.com/watch?v=-9evrZnBorM&t=510s", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/-9evrZnBorM/maxresdefault.jpg"} {"video_id": "-9evrZnBorM", "text": "has kind of a unique embedding and then depending on the surrounding words you would still hope that a given word would have a similar embedding or similar word vector all across the language but you can kind of fine-tune it to the particular sentence it is in and also you can completely change its meaning if it's kind of a word that has a completely new meaning in that sentence so basically it uses two LSTMs one as I said here forward one backward these also have multiple layers and so each of these produces one such hidden", "start_timestamp": "00:09:16", "end_timestamp": "00:09:52", "start_second": 556, "end_second": 592, "url": "https://www.youtube.com/watch?v=-9evrZnBorM&t=556s", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/-9evrZnBorM/maxresdefault.jpg"} {"video_id": "-9evrZnBorM", "text": "vector per token and you would simply concatenate the two so from here the LSTM on the left produces one the LSTM on the right produces maybe here another one and you simply concatenate the two to get the final embedding the final word vector for each token so the fundamental
limitation here is that you have information from the left and you have information from the right so unlike the original transformer here you actually can condition on the left context and the", "start_timestamp": "00:09:52", "end_timestamp": "00:10:34", "start_second": 592, "end_second": 634, "url": "https://www.youtube.com/watch?v=-9evrZnBorM&t=592s", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/-9evrZnBorM/maxresdefault.jpg"} {"video_id": "-9evrZnBorM", "text": "right context but it's very shallow because it's simply a concatenation of the left-facing LSTM and the right-facing LSTM and these ultimately intrinsically have nothing to do with each other so you simply concatenate the two things the left-facing LSTM still can only see to the left and the right-facing LSTM still can only see to the right so you basically have two blind models and then you concatenate so it's still", "start_timestamp": "00:10:34", "end_timestamp": "00:11:09", "start_second": 634, "end_second": 669, "url": "https://www.youtube.com/watch?v=-9evrZnBorM&t=634s", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/-9evrZnBorM/maxresdefault.jpg"} {"video_id": "-9evrZnBorM", "text": "suboptimal because what you want is a single model to output your word vectors or to interpret the language that can look at both the left and the right at the same time and then incorporate information from both of them simultaneously and not just at the end by concatenation this is what BERT does and this is kind of what they claim is the new contribution BERT in each layer here of the model let's look at this for a particular token they look at all of the context so
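The shallow bidirectionality just described (a forward model, a backward model, and a per-token concatenation of their states) can be sketched like this; a plain tanh RNN stands in for ELMo's multi-layer LSTMs, and all sizes are invented for illustration:

```python
import numpy as np

def rnn_states(embeddings, W, U):
    """Toy RNN: h_t = tanh(W x_t + U h_{t-1}); one state per token."""
    h = np.zeros(W.shape[0])
    states = []
    for x in embeddings:
        h = np.tanh(W @ x + U @ h)
        states.append(h)
    return np.array(states)

def elmo_style_embeddings(embeddings, W, U):
    fwd = rnn_states(embeddings, W, U)              # sees only left context
    bwd = rnn_states(embeddings[::-1], W, U)[::-1]  # sees only right context
    return np.concatenate([fwd, bwd], axis=1)       # shallow concatenation

rng = np.random.default_rng(1)
tokens = rng.normal(size=(6, 8))       # six tokens, dimension 8
W = rng.normal(size=(8, 8)) * 0.1
U = rng.normal(size=(8, 8)) * 0.1
vecs = elmo_style_embeddings(tokens, W, U)
```

Note the limitation the transcript points out: the forward half of each token's vector never sees tokens to its right, and the backward half never sees tokens to its left.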
every other", "start_timestamp": "00:11:09", "end_timestamp": "00:11:52", "start_second": 669, "end_second": 712, "url": "https://www.youtube.com/watch?v=-9evrZnBorM&t=669s", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/-9evrZnBorM/maxresdefault.jpg"} {"video_id": "-9evrZnBorM", "text": "token in the input they look at that and so it seems kind of obvious but there's actually reasons why these other models don't do this so this is the entire point of BERT at each layer in this transformer architecture it's still an attention mechanism by the way so the mechanism of attention here and here is exactly the same or almost the same they actually keep it close on purpose in order to compare but now we have", "start_timestamp": "00:11:52", "end_timestamp": "00:12:34", "start_second": 712, "end_second": 754, "url": "https://www.youtube.com/watch?v=-9evrZnBorM&t=712s", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/-9evrZnBorM/maxresdefault.jpg"} {"video_id": "-9evrZnBorM", "text": "attention not only to the left but also to the right to everything right so why do these other models for example the OpenAI transformer only look to the left that's because somehow you need a task to train on right and most of the time especially if you want unsupervised training you're going to do something like language modeling and in language modeling what you have is a sentence A B C D and you're asking what comes next here alright so by the definition of the task you can only look to the left", "start_timestamp": "00:12:34", "end_timestamp": "00:13:17", "start_second": 754, "end_second": 797, "url": "https://www.youtube.com/watch?v=-9evrZnBorM&t=754s", "title": "BERT: Pre-training of Deep Bidirectional
Transformers for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/-9evrZnBorM/maxresdefault.jpg"} {"video_id": "-9evrZnBorM", "text": "that's just how the task works so it makes sense that these other models kind of do this because that's how they pre-train right but BERT has a different pre-training because they have to look to the left and the right and the other thing is what you want to use the model for so the good thing if you go left to right is you can use the model for generating language the same thing if you have A B C D and you ask and the model is trying to produce the next character", "start_timestamp": "00:13:17", "end_timestamp": "00:13:56", "start_second": 797, "end_second": 836, "url": "https://www.youtube.com/watch?v=-9evrZnBorM&t=797s", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/-9evrZnBorM/maxresdefault.jpg"} {"video_id": "-9evrZnBorM", "text": "only looking to the left right then you can say what's the next character the model says E and then we can feed the same thing into the model and say okay what's now the next character what's now the next character and so on so it's pretty useful if you only look to the left you can actually use the model for generating language which is something you can't do with BERT or it's not really obvious how to do it I know people are investigating into producing entire", "start_timestamp": "00:13:56", "end_timestamp": "00:14:32", "start_second": 836, "end_second": 872, "url": "https://www.youtube.com/watch?v=-9evrZnBorM&t=836s", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/-9evrZnBorM/maxresdefault.jpg"} {"video_id": "-9evrZnBorM", "text": "sequences with BERT but as yet it's not super clear how
to do this with this model that being said the model is pretty good at pretty much everything else so let's jump into how they train they train let's see here they train using basically masked language modeling so let me actually go into that first masked language modeling what they do is they basically replace some words by the mask token alright here if you just look at kind", "start_timestamp": "00:14:32", "end_timestamp": "00:15:23", "start_second": 872, "end_second": 923, "url": "https://www.youtube.com/watch?v=-9evrZnBorM&t=872s", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/-9evrZnBorM/maxresdefault.jpg"} {"video_id": "-9evrZnBorM", "text": "of the top sentence here the man went to [MASK] store right don't worry about the SEP and so on just this the man went to [MASK] store and the model is simply asked to predict what's here which word is there so it needs to incorporate information from the right and from the left to do this so that's basically how you train it they simply drop out some of the words some of the time and they have different techniques so you can clearly tell a lot of work has gone into kind of fine-tuning everything in this model", "start_timestamp": "00:15:23", "end_timestamp": "00:16:04", "start_second": 923, "end_second": 964, "url": "https://www.youtube.com/watch?v=-9evrZnBorM&t=923s", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/-9evrZnBorM/maxresdefault.jpg"} {"video_id": "-9evrZnBorM", "text": "like how to train it and so on so let's say we don't always do this sometimes we do this other thing and sometimes we do that and there are several ways of biasing this model but basically you do this masked language modeling and then because they also want to evaluate on let's say entire
sequence tasks or tasks that span multiple sentences what they do is a second pre-training task at the same time as you can see here where they feed two sentences so that's the first sentence that's the second sentence they feed these two sentences as an input so", "start_timestamp": "00:16:04", "end_timestamp": "00:16:41", "start_second": 964, "end_second": 1001, "url": "https://www.youtube.com/watch?v=-9evrZnBorM&t=964s", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/-9evrZnBorM/maxresdefault.jpg"} {"video_id": "-9evrZnBorM", "text": "at first they have this token and these separate the sentences and then they ask the model to predict a label IsNext and IsNext is true if the second sentence follows the first so if it's like a logical continuation and the way you do this unsupervised is really easy you take a big giant corpus and you take a sentence for the first sentence and then 50% of the time you take the next sentence in the corpus and the label is true and 50% of the time you take some random sentence here you say for example the man [MASK] to", "start_timestamp": "00:16:41", "end_timestamp": "00:17:26", "start_second": 1001, "end_second": 1046, "url": "https://www.youtube.com/watch?v=-9evrZnBorM&t=1001s", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/-9evrZnBorM/maxresdefault.jpg"} {"video_id": "-9evrZnBorM", "text": "the store and the next sentence is penguin [MASK] are flightless birds and that's kind of a random sentence so the model is asked to predict well that's probably not the next sentence following this first sentence so you do these two tasks as pre-training and you can do this unsupervised you don't need supervised data for that you just need a corpus and they do this for a long time with a lot of data and the model itself is giant it has 24 I think of these
transformer layers so it's giant and then you kind of pre-train in this", "start_timestamp": "00:17:26", "end_timestamp": "00:18:10", "start_second": 1046, "end_second": 1090, "url": "https://www.youtube.com/watch?v=-9evrZnBorM&t=1046s", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/-9evrZnBorM/maxresdefault.jpg"} {"video_id": "-9evrZnBorM", "text": "model here's an illustration of some extra things so this is the input up here so the first token is this CLS token which is kind of the start token and then this is the first sentence then the SEP is the separator of the two sentences then this is the second sentence and then again a SEP I'll look at these hashtags in a second but first they say okay first we have the token embeddings so they kind of start with the original concept of word vectors at the very basis because you", "start_timestamp": "00:18:10", "end_timestamp": "00:18:56", "start_second": 1090, "end_second": 1136, "url": "https://www.youtube.com/watch?v=-9evrZnBorM&t=1090s", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/-9evrZnBorM/maxresdefault.jpg"} {"video_id": "-9evrZnBorM", "text": "need to start with actually going into a vector space to use these models but they then kind of transform these through the transformer layers they also use segment embeddings and segment embeddings as you can see here are simply kind of a binary label EA being the label for the first sentence and EB being the label for the second sentence so just so the model can differentiate which one's the first and which one's the second because it's kind of hard to learn for a transformer architecture that the tokens kind of", "start_timestamp": "00:18:56", "end_timestamp": "00:19:35", "start_second": 1136, "end_second": 1175, "url":
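The two unsupervised pre-training objectives described above can be sketched as data construction: mask some words and keep the originals as targets, and sample the second sentence as the true next one 50% of the time. The flat masking rate and single-[MASK] replacement are simplifications of BERT's actual recipe, which as the transcript notes mixes several techniques:

```python
import random

def mask_tokens(tokens, rate=0.15, rng=random.Random(1)):
    """Masked LM: replace a fraction of tokens with [MASK]; the
    original tokens become the prediction targets."""
    out, targets = [], []
    for tok in tokens:
        if rng.random() < rate:
            out.append("[MASK]")
            targets.append(tok)      # the model must recover this
        else:
            out.append(tok)
            targets.append(None)     # not predicted
    return out, targets

def next_sentence_pair(corpus, i, rng=random.Random(0)):
    """IsNext: 50% of the time the true next sentence, 50% a random one."""
    first = corpus[i]
    if rng.random() < 0.5:
        return first, corpus[i + 1], True
    return first, rng.choice(corpus), False

corpus = [["the", "man", "went", "to", "the", "store"],
          ["he", "bought", "a", "gallon", "of", "milk"],
          ["penguins", "are", "flightless", "birds"]]
masked, targets = mask_tokens(corpus[0])
a, b, is_next = next_sentence_pair(corpus, 0)
```

Both labelings come for free from a plain corpus, which is why no supervised data is needed.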
"https://www.youtube.com/watch?v=-9evrZnBorM&t=1136s", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/-9evrZnBorM/maxresdefault.jpg"} {"video_id": "-9evrZnBorM", "text": "separate the sentences so you want to help it and the last thing is positional embeddings and we've also already talked about these in attention is all you need since it's a transformer the model doesn't go step by step so it's kind of hard for the model to tell how far things are apart from each other whether two tokens are neighbors or really far apart and these positional embeddings kind of help the model decide if two tokens are close", "start_timestamp": "00:19:35", "end_timestamp": "00:20:11", "start_second": 1175, "end_second": 1211, "url": "https://www.youtube.com/watch?v=-9evrZnBorM&t=1175s", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/-9evrZnBorM/maxresdefault.jpg"} {"video_id": "-9evrZnBorM", "text": "to each other in the input whether they're just neighbors or if they are actually really far apart all right so this is how the first input is constructed out of these embeddings and then it's fed through these transformer layers as we saw with the masked language modeling task and the IsNext task I want to quickly get to these hashtags and what they mean so the input here is separated into word pieces so-called WordPieces and what that is is in language processing tasks you have kind of a choice you have", "start_timestamp": "00:20:11", "end_timestamp": "00:20:54", "start_second": 1211, "end_second": 1254, "url": "https://www.youtube.com/watch?v=-9evrZnBorM&t=1211s", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "thumbnail":
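The input construction just described (token embedding plus segment embedding EA/EB plus position embedding, summed per token) can be sketched with toy dimensions; the vocabulary size and embedding width here are made up:

```python
import numpy as np

rng = np.random.default_rng(2)
vocab, n_segments, max_len, dim = 100, 2, 16, 8
token_emb = rng.normal(size=(vocab, dim))
segment_emb = rng.normal(size=(n_segments, dim))   # E_A / E_B
position_emb = rng.normal(size=(max_len, dim))

def bert_input(token_ids, segment_ids):
    """Input representation: sum of token, segment, and position embeddings."""
    positions = np.arange(len(token_ids))
    return (token_emb[token_ids]
            + segment_emb[segment_ids]
            + position_emb[positions])

x = bert_input([1, 5, 9, 2], [0, 0, 1, 1])   # two tokens per "sentence"
```

All three tables are learned; the segment and position parts are what let the model tell the two sentences and the token distances apart.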
"https://i.ytimg.com/vi/-9evrZnBorM/maxresdefault.jpg"} {"video_id": "-9evrZnBorM", "text": "you have a choice of how to tokenize your input so let's look at a sentence here subscribe to PewDiePie so this is a sentence and the sentence is rather let's say word-wise complicated so a language model will have a problem with this so first you need to tokenize this sentence alright so what most people do is they say okay here are the word boundaries we're going to tokenize this into three segments first is subscribe to PewDiePie okay so three things and each of these now needs a word vector associated with it now", "start_timestamp": "00:20:54", "end_timestamp": "00:21:45", "start_second": 1254, "end_second": 1305, "url": "https://www.youtube.com/watch?v=-9evrZnBorM&t=1254s", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/-9evrZnBorM/maxresdefault.jpg"} {"video_id": "-9evrZnBorM", "text": "the thing is the word vectors let's assume you have them pre-trained or something in any case you need a big table a big big table and this goes down here where for each word a the to I you you have a vector associated with it right so you need to keep this in your model and as you know English has a lot of words so this table is gonna be really big and the problem is how do you make this table right okay you could make it kind of dynamically and so on but in general you're gonna create this table with all the words you know and", "start_timestamp": "00:21:45", "end_timestamp": "00:22:37", "start_second": 1305, "end_second": 1357, "url": "https://www.youtube.com/watch?v=-9evrZnBorM&t=1305s", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/-9evrZnBorM/maxresdefault.jpg"} {"video_id": "-9evrZnBorM", "text": "that's going to be too big because English has so many words and then you can say alright
we'll only take the top whatever is used in 90% of the language which turns out to be kind of Pareto-distributed so it turns out to be like 5 percent of the words are used in 90 percent of the language so you just take these but then you're gonna have the problem okay here to is not a problem to is used super often so we're gonna have it at the very top somewhere and we're gonna go back to it subscribe is already not so common right", "start_timestamp": "00:22:37", "end_timestamp": "00:23:18", "start_second": 1357, "end_second": 1398, "url": "https://www.youtube.com/watch?v=-9evrZnBorM&t=1357s", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/-9evrZnBorM/maxresdefault.jpg"} {"video_id": "-9evrZnBorM", "text": "so maybe you have a word for it somewhere down but then PewDiePie is a name so that's not even a word it's just so what people usually do is they have this out-of-vocabulary token and then they have a vector associated somewhere here with the out-of-vocabulary token which is a whatever I don't know what it is I just don't have it in my vocabulary and the model kind of deals with that that's not really ideal especially if you", "start_timestamp": "00:23:18", "end_timestamp": "00:23:57", "start_second": 1398, "end_second": 1437, "url": "https://www.youtube.com/watch?v=-9evrZnBorM&t=1398s", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/-9evrZnBorM/maxresdefault.jpg"} {"video_id": "-9evrZnBorM", "text": "don't want to generate language also your model tends to generate out-of-vocabulary tokens if you allow that if you don't allow that you have a problem during training so it's all kind of messy what's the alternative the alternative is to go character level so let's look at
character level in character level you say all right my words are obviously made of characters so I'm just gonna split at each character right here the white space can be a character too so I'm gonna split at each character and then I'm simply going to have one", "start_timestamp": "00:23:57", "end_timestamp": "00:24:36", "start_second": 1437, "end_second": 1476, "url": "https://www.youtube.com/watch?v=-9evrZnBorM&t=1437s", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/-9evrZnBorM/maxresdefault.jpg"} {"video_id": "-9evrZnBorM", "text": "vector for each character and there's only like 26 of those so I can keep 26 vectors but this tends to be rather problematic because a character by itself having a meaning that can be encapsulated by a vector is kind of shady because a character by itself usually doesn't have a meaning so what's the solution here the solution is to go in between the solution is to say well let's actually go for word pieces and you can kind of think of them as syllables", "start_timestamp": "00:24:36", "end_timestamp": "00:25:16", "start_second": 1476, "end_second": 1516, "url": "https://www.youtube.com/watch?v=-9evrZnBorM&t=1476s", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/-9evrZnBorM/maxresdefault.jpg"} {"video_id": "-9evrZnBorM", "text": "but you can make them in a way that you have a fixed-size vocabulary say okay I have 4,000 entry places in my big table I can afford a 4,000-size table so first of all for each character A B C D E and so on I'm going to have a vector but that's only 26 so I have 3,000-some left I'm going to also have the most common words now a is already here but maybe I can have to and from and so the most common words they also get
there and then for the other things I'm going to split the words maybe into sub scribe", "start_timestamp": "00:25:16", "end_timestamp": "00:26:04", "start_second": 1516, "end_second": 1564, "url": "https://www.youtube.com/watch?v=-9evrZnBorM&t=1516s", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/-9evrZnBorM/maxresdefault.jpg"} {"video_id": "-9evrZnBorM", "text": "right so these are two syllables and sub can be kind of a prefix to many things and I only need one so I have sub here I only need one vector for that and then the rest scribe is by the way also a word so I can have that but if scribe weren't in my vocabulary I could divide scribe up into characters and then describe it with those so basically I can mix and match here sub I have that and then scribe if I don't have it and I don't have any of the pieces I can just use", "start_timestamp": "00:26:04", "end_timestamp": "00:26:44", "start_second": 1564, "end_second": 1604, "url": "https://www.youtube.com/watch?v=-9evrZnBorM&t=1564s", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/-9evrZnBorM/maxresdefault.jpg"} {"video_id": "-9evrZnBorM", "text": "the characters so this would be sub and then s c r i b e so these would be the tokens that I work with now as my input and these tags here so this is what would happen to PewDiePie you could simply split along each character so this is basically kind of an interpolation between the token model and the character model and it's really neat and it usually works quite well as I said the hashtag sign here simply means that these two were originally one word and now this", "start_timestamp": "00:26:44", "end_timestamp": "00:27:35", "start_second": 1604, "end_second": 1655, "url":
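The mix-and-match splitting described here can be sketched as greedy longest-match-first word-piece tokenization against a toy vocabulary; real WordPiece vocabularies are learned from a corpus, so these entries are hypothetical:

```python
def wordpiece(word, vocab):
    """Greedy longest-match-first split of one word into word pieces;
    continuation pieces carry the ## prefix."""
    pieces, start = [], 0
    while start < len(word):
        end = len(word)
        while end > start:
            piece = word[start:end]
            if start > 0:
                piece = "##" + piece          # not at the word start
            if piece in vocab:
                pieces.append(piece)
                start = end
                break
            end -= 1                          # try a shorter piece
        else:
            return ["[UNK]"]                  # nothing matched at all
    return pieces

# toy vocabulary: a few common words/affixes plus every single character
vocab = ({"play", "##ing", "sub", "##scribe", "to"}
         | set("abcdefghijklmnopqrstuvwxyz")
         | {"##" + c for c in "abcdefghijklmnopqrstuvwxyz"})
```

With this vocabulary, subscribe splits into sub plus ##scribe, and a name with no matching pieces falls all the way back to single characters.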
"https://www.youtube.com/watch?v=-9evrZnBorM&t=1604s", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/-9evrZnBorM/maxresdefault.jpg"} {"video_id": "-9evrZnBorM", "text": "in here is just a word piece token this is a really good example of where word pieces come in because play by itself is a word and playing instead of having its own vector for that I can divide it into play which already has a meaning and presumably playing and play would have similar meanings so it makes sense to have play as a token singled out here and then ing as a suffix also makes sense to have a token for that in my table and then as I said we have these two tokens here and that probably", "start_timestamp": "00:27:35", "end_timestamp": "00:28:11", "start_second": 1655, "end_second": 1691, "url": "https://www.youtube.com/watch?v=-9evrZnBorM&t=1655s", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/-9evrZnBorM/maxresdefault.jpg"} {"video_id": "-9evrZnBorM", "text": "already gives me more information than simply having the word playing right by the way you should subscribe to PewDiePie just FYI alright let's go on so we do WordPiece tokenization we do the masked language model we do the next sentence prediction pre-training what do we have now we have a model that can really well predict some masked words now how do we use it they evaluate on I believe it's 11 different tasks I don't know exactly how many it is it is a lot with the same model so this pre-trained", "start_timestamp": "00:28:11", "end_timestamp": "00:29:05", "start_second": 1691, "end_second": 1745, "url": "https://www.youtube.com/watch?v=-9evrZnBorM&t=1691s", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "thumbnail":
"https://i.ytimg.com/vi/-9evrZnBorM/maxresdefault.jpg"} {"video_id": "-9evrZnBorM", "text": "model they claim can then be fine-tuned to do all of these tasks and it goes like state-of-the-art on every one it's crazy so how do they fine-tune it the easiest tasks are the so-called sequence-level tasks where you basically have the sequence and you're about to predict one class label for the entire sequence so here we have the sentence pair classification tasks for example the task we saw before the IsNext task but there are more sophisticated tasks that you need kind of supervised data for and so with", "start_timestamp": "00:29:05", "end_timestamp": "00:29:49", "start_second": 1745, "end_second": 1789, "url": "https://www.youtube.com/watch?v=-9evrZnBorM&t=1745s", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/-9evrZnBorM/maxresdefault.jpg"} {"video_id": "-9evrZnBorM", "text": "supervised data you'd have a class label that you train on so what you do is let's look at one of them MNLI they had it up here nope here multi-genre natural language inference crowd-sourced entailment classification task so given a pair of sentences the goal is to predict whether the second sentence is an entailment contradiction or neutral with respect to the first one all right two sentences and you're about to predict which one of these three labels it is so you put the two sentences here BERT can", "start_timestamp": "00:29:49", "end_timestamp": "00:30:31", "start_second": 1789, "end_second": 1831, "url": "https://www.youtube.com/watch?v=-9evrZnBorM&t=1789s", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/-9evrZnBorM/maxresdefault.jpg"} {"video_id": "-9evrZnBorM", "text": "already take two sentences as an input as we saw right the embeddings are the A and B embeddings and
the positional embeddings are left out of the picture here but they would be added to it and these would be the embeddings for it and then you pass this through the BERT model and this is the final layer and what they do is they simply take the final embedding for this first one corresponding to this start token and they simply put a single layer of classification so basically a logistic regression on it", "start_timestamp": "00:30:31", "end_timestamp": "00:31:14", "start_second": 1831, "end_second": 1874, "url": "https://www.youtube.com/watch?v=-9evrZnBorM&t=1831s", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/-9evrZnBorM/maxresdefault.jpg"} {"video_id": "-9evrZnBorM", "text": "and that's how they then get a class label so let's say this just gives you here a hidden vector of 512 dimensions and you have three labels to output here 1 2 3 you simply need a matrix that's 512 by 3 in size and these are the weights that you would then have to train in addition BERT itself is pre-trained and you now only have to learn these weights of course they also kind of fine-tune the entire BERT but that's really fine-tuning the only thing you have to learn from scratch is", "start_timestamp": "00:31:14", "end_timestamp": "00:32:02", "start_second": 1874, "end_second": 1922, "url": "https://www.youtube.com/watch?v=-9evrZnBorM&t=1874s", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/-9evrZnBorM/maxresdefault.jpg"} {"video_id": "-9evrZnBorM", "text": "these weights here first of all it's pretty neat because you can be very quick at learning new tasks because you simply start from the pre-trained BERT and then you go and learn a single classification layer on top and astonishingly this works extremely well for these
tasks a bit of a more challenging task is this here SQuAD is a question-answering task and we're gonna jump down here where they explain the task so you have an input question and then for", "start_timestamp": "00:32:02", "end_timestamp": "00:32:47", "start_second": 1922, "end_second": 1967, "url": "https://www.youtube.com/watch?v=-9evrZnBorM&t=1922s", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/-9evrZnBorM/maxresdefault.jpg"} {"video_id": "-9evrZnBorM", "text": "question is where do water droplets collide with ice crystals to form precipitation and you have an input paragraph which is kind of a paragraph from a Wikipedia page and you know that the answer is somewhere in this paragraph right the data set is constructed such that the answer is in the paragraph so the paragraph here reads precipitation forms as smaller droplets coalesce via collision with other raindrops or ice crystals within a cloud so the question is where do water droplets collide to form precipitation the answer", "start_timestamp": "00:32:47", "end_timestamp": "00:33:29", "start_second": 1967, "end_second": 2009, "url": "https://www.youtube.com/watch?v=-9evrZnBorM&t=1967s", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/-9evrZnBorM/maxresdefault.jpg"} {"video_id": "-9evrZnBorM", "text": "here is within a cloud so that's this thing here so usually what SQuAD models do is they predict the span they predict where is the start of the answer and where is the end of the answer that's also what BERT is kind of trained to do so in order to do this what you do is again you already have the ability to input two sequences so we've trained with two sentences but here they say well our first sequence is going to be the question our second sequence is going to be the 
entire paragraph from Wikipedia", "start_timestamp": "00:33:29", "end_timestamp": "00:34:07", "start_second": 2009, "end_second": 2047, "url": "https://www.youtube.com/watch?v=-9evrZnBorM&t=2009s", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/-9evrZnBorM/maxresdefault.jpg"} {"video_id": "-9evrZnBorM", "text": "and then for the output of each token remember there's as many outputs as there's inputs because the transformer will always transform to the same length of sequence for each token in the output we classify is this token the start token or is this token the end token or is this token none at all now what they do effectively is that each output here is a vector and as we said at the beginning when finding out which one's the subject now here we have two queries namely query", "start_timestamp": "00:34:07", "end_timestamp": "00:34:55", "start_second": 2047, "end_second": 2095, "url": "https://www.youtube.com/watch?v=-9evrZnBorM&t=2047s", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/-9evrZnBorM/maxresdefault.jpg"} {"video_id": "-9evrZnBorM", "text": "one which is is this the start let's call it query S and query E which is is this the end token so these are two queries and I'm going to just compute the inner product of each query with each of these outputs right over my sequence here and this is gonna give me a distribution so for the start maybe this token is not much and this token is a lot and so on over the tokens and for the end not so probable not so probable very probable not so probable so what I get from these inner products", "start_timestamp": "00:34:55", "end_timestamp": "00:35:41", "start_second": 2095, "end_second": 2141, "url": 
"https://www.youtube.com/watch?v=-9evrZnBorM&t=2095s", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/-9evrZnBorM/maxresdefault.jpg"} {"video_id": "-9evrZnBorM", "text": "is a distribution over which one is the start and which one is the end ok this one's probably the start and this one's probably the end so that's how you predict the span and again what you have to ultimately learn is these queries here and so not that much then there is named entity recognition in named entity recognition you have a sentence and you're supposed to recognize named entities like up here we saw subscribe to PewDiePie and the named entity would be PewDiePie right this is a name and you're supposed to recognize that this is a name and", "start_timestamp": "00:35:41", "end_timestamp": "00:36:33", "start_second": 2141, "end_second": 2193, "url": "https://www.youtube.com/watch?v=-9evrZnBorM&t=2141s", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/-9evrZnBorM/maxresdefault.jpg"} {"video_id": "-9evrZnBorM", "text": "they do it the same way that they do SQuAD basically or a similar way sorry basically for each of the outputs here they simply classify whether or not it's part of a named entity so what they have to do is simply train that you also have different labels for which kind of entity this is this is like a person and this is no entity so if you have ten of the labels then for each thing you would classify it into one of ten classes so you need a classifier of input size versus number", "start_timestamp": "00:36:33", "end_timestamp": "00:37:22", "start_second": 2193, "end_second": 2242, "url": "https://www.youtube.com/watch?v=-9evrZnBorM&t=2193s", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "thumbnail": 
"https://i.ytimg.com/vi/-9evrZnBorM/maxresdefault.jpg"} {"video_id": "-9evrZnBorM", "text": "of classes that's all you have to train in addition to fine-tuning BERT itself alright so they kind of evaluate on all of these tasks they get super duper numbers on all of them here BERT large wins on pretty much everything and this model is big just saying and they trained it on TPUs which are available in kind of Google cloud infrastructure so they've trained it on a lot of data so in a way it's kind of expected that you would outperform but it's very surprising that you outperform everyone else by this much and they've", "start_timestamp": "00:37:22", "end_timestamp": "00:38:17", "start_second": 2242, "end_second": 2297, "url": "https://www.youtube.com/watch?v=-9evrZnBorM&t=2242s", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/-9evrZnBorM/maxresdefault.jpg"} {"video_id": "-9evrZnBorM", "text": "done a lot of kind of ablation studies where they show that it's really due to the fact that they do this left and right context they take into account the left and right context of a given token when doing the attention that's why it's better so here for example they compare the BERT base model and they say okay what if we don't do the NSP the next sentence prediction task then you can see the numbers already kind of drop on these tasks and what if we then additionally do only left-to-right", "start_timestamp": "00:38:17", "end_timestamp": "00:39:02", "start_second": 2297, "end_second": 2342, "url": "https://www.youtube.com/watch?v=-9evrZnBorM&t=2297s", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/-9evrZnBorM/maxresdefault.jpg"} {"video_id": "-9evrZnBorM", "text": "training and the numbers drop really seriously again you see sometimes here for example 
you see a pretty serious drop in the number also here so there really seems to be real value in doing this kind of left and right context attention so it's not just about the model size and the amount of data that's basically what they show here and this is really cool that the paper actually shows this because usually people have an idea and they throw a lot more resources at it and they're better and you never know why and this is pretty", "start_timestamp": "00:39:02", "end_timestamp": "00:39:41", "start_second": 2342, "end_second": 2381, "url": "https://www.youtube.com/watch?v=-9evrZnBorM&t=2342s", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "thumbnail": "https://i.ytimg.com/vi/-9evrZnBorM/maxresdefault.jpg"} {"video_id": "0O1UXKh-Yck", "text": "i am sapna sharma and i am here along with ragini to present a very interesting paper published by six google researchers in may 2019 the researchers are david berthelot nicholas carlini ian goodfellow avital oliver nicolas papernot and colin raffel the title of the paper is mixmatch a holistic approach to semi-supervised learning so next slide please so we will be covering the background and purpose to see why mixmatch was required the key terms to understand the algorithm", "start_timestamp": "00:00:00", "end_timestamp": "00:00:57", "start_second": 0, "end_second": 57, "url": "https://www.youtube.com/watch?v=0O1UXKh-Yck&t=0s", "title": "MixMatch: A Holistic Approach to Semi-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/0O1UXKh-Yck/maxresdefault.jpg"} {"video_id": "0O1UXKh-Yck", "text": "we will be giving a brief description of the algorithm and the results of the experiments performed by the authors so we are all aware of the three main classes of machine learning that is the supervised machine learning the unsupervised machine learning and the semi-supervised machine learning while 
the supervised machine learning needs the ground truth that is the labeled data to build a model the unsupervised machine learning predicts unlabeled data using clustering techniques now the major", "start_timestamp": "00:00:57", "end_timestamp": "00:01:42", "start_second": 57, "end_second": 102, "url": "https://www.youtube.com/watch?v=0O1UXKh-Yck&t=57s", "title": "MixMatch: A Holistic Approach to Semi-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/0O1UXKh-Yck/maxresdefault.jpg"} {"video_id": "0O1UXKh-Yck", "text": "concern for most data scientists is the labeled data as we ourselves face the same problem while building a model for wound tissue classification it is very difficult to get an expert to label the whole set of data thus the scarcity of labeled data is the major constraint for supervised machine learning now here is where the semi-supervised machine learning plays a major role which takes the advantages of both supervised machine learning and unsupervised machine learning to give us labeled data and according to the authors of the paper", "start_timestamp": "00:01:42", "end_timestamp": "00:02:29", "start_second": 102, "end_second": 149, "url": "https://www.youtube.com/watch?v=0O1UXKh-Yck&t=102s", "title": "MixMatch: A Holistic Approach to Semi-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/0O1UXKh-Yck/maxresdefault.jpg"} {"video_id": "0O1UXKh-Yck", "text": "mixmatch claims to have developed a technique to label the unlabeled data with much better accuracy than the present day semi-supervised techniques as per the abstract of the paper mixmatch unifies the current dominant approaches used in semi-supervised learning to produce a new algorithm that works by guessing low entropy labels for data augmented unlabeled examples and mixing labeled and unlabeled data using mixup so before going to the actual algorithm let us just have a look at the key terms which will be used to", "start_timestamp": "00:02:29", "end_timestamp": 
"00:03:14", "start_second": 149, "end_second": 194, "url": "https://www.youtube.com/watch?v=0O1UXKh-Yck&t=149s", "title": "MixMatch: A Holistic Approach to Semi-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/0O1UXKh-Yck/maxresdefault.jpg"} {"video_id": "0O1UXKh-Yck", "text": "understand the algorithm the consistency regularization it applies uh data augmentation to semi-supervised learning by leveraging the idea that a classifier should output the same class distribution for an unlabeled example even after it has been augmented in other words labels should not change when noise is added a model called the pi model is used mixmatch utilizes a form of consistency regularization through the use of standard data augmentation for images such as random horizontal flips and crops now the entropy minimization", "start_timestamp": "00:03:14", "end_timestamp": "00:04:02", "start_second": 194, "end_second": 242, "url": "https://www.youtube.com/watch?v=0O1UXKh-Yck&t=194s", "title": "MixMatch: A Holistic Approach to Semi-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/0O1UXKh-Yck/maxresdefault.jpg"} {"video_id": "0O1UXKh-Yck", "text": "it basically means that we need to reduce the randomness in the prediction of the unlabeled data or the classifier decision boundary should not pass through the high density region of the marginal data distribution this is done by outputting low entropy predictions on the unlabeled data so mixmatch also implicitly achieves entropy minimization by adding a loss function and using a sharpening function on the target distribution for unlabeled data the traditional regularization is again a method applied to avoid overfitting", "start_timestamp": "00:04:02", "end_timestamp": "00:04:52", "start_second": 242, "end_second": 292, "url": "https://www.youtube.com/watch?v=0O1UXKh-Yck&t=242s", "title": "MixMatch: A Holistic Approach to Semi-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/0O1UXKh-Yck/maxresdefault.jpg"} 
{"video_id": "0O1UXKh-Yck", "text": "of the model we have two types of regularization l1 or the lasso regression and l2 or the ridge regression the ridge regression adds the squared magnitude of the coefficient as a penalty to the loss function and mixmatch uses the squared or the l2 loss on predictions and guessed labels and lastly the mean teacher to overcome the problem of inconsistency in using the exponential moving average of label predictions on each training set on large data a mean teacher a method that averages model weights instead of label predictions is used", "start_timestamp": "00:04:52", "end_timestamp": "00:05:42", "start_second": 292, "end_second": 342, "url": "https://www.youtube.com/watch?v=0O1UXKh-Yck&t=292s", "title": "MixMatch: A Holistic Approach to Semi-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/0O1UXKh-Yck/maxresdefault.jpg"} {"video_id": "0O1UXKh-Yck", "text": "mean teacher improves the test accuracy and enables training with fewer labels so i will be giving a brief description of the steps involved in mixmatch as the four points are like the data augmentation label guessing entropy regularization and mixup so in data augmentation it is a common approach to compensate for the scarcity of the labeled data and data augmentation is done by applying transformations on the input data points such that the label remains unchanged data augmentation is done both on", "start_timestamp": "00:05:42", "end_timestamp": "00:06:34", "start_second": 342, "end_second": 394, "url": "https://www.youtube.com/watch?v=0O1UXKh-Yck&t=342s", "title": "MixMatch: A Holistic Approach to Semi-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/0O1UXKh-Yck/maxresdefault.jpg"} {"video_id": "0O1UXKh-Yck", "text": "labeled and unlabeled data the individual augmentations are used for generating a guessed label we'll be seeing more about it label guessing for each unlabeled example mixmatch produces a guess for the example's label using the model's 
prediction this guess is later used in the unsupervised loss term the entropy regularization to enforce the fact that the classifier decision boundary should not pass through the high density region of the marginal data distribution the classifier outputs low entropy predictions on unlabeled data this is done by adding", "start_timestamp": "00:06:34", "end_timestamp": "00:07:20", "start_second": 394, "end_second": 440, "url": "https://www.youtube.com/watch?v=0O1UXKh-Yck&t=394s", "title": "MixMatch: A Holistic Approach to Semi-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/0O1UXKh-Yck/maxresdefault.jpg"} {"video_id": "0O1UXKh-Yck", "text": "a loss term which minimizes the entropy for the unlabeled data mixmatch applies a sharpening function to reduce the entropy of the label distribution and this is done by adjusting the temperature of the categorical distribution now the last is the mixup a mixup is a recently proposed method for training deep neural networks where additional samples are generated during training by convexly combining random pairs of images and their associated labels by doing so mixup regularizes the neural network to favor simple linear behavior", "start_timestamp": "00:07:20", "end_timestamp": "00:08:10", "start_second": 440, "end_second": 490, "url": "https://www.youtube.com/watch?v=0O1UXKh-Yck&t=440s", "title": "MixMatch: A Holistic Approach to Semi-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/0O1UXKh-Yck/maxresdefault.jpg"} {"video_id": "0O1UXKh-Yck", "text": "in between training examples mixup also reduces the memorization of corrupt labels and increases robustness to adversarial examples mixmatch uses mixup with a slight modification as the final step of its algorithm yes okay brilliant so uh thank you sapna for um giving a brief introduction about the paper before i get into the algorithm i wanted to just stop for a bit and ask if there are any questions anybody has so far okay i guess not so i start from um talking about 
more on the algorithm bits where if you see this is an image taken", "start_timestamp": "00:08:10", "end_timestamp": "00:09:26", "start_second": 490, "end_second": 566, "url": "https://www.youtube.com/watch?v=0O1UXKh-Yck&t=490s", "title": "MixMatch: A Holistic Approach to Semi-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/0O1UXKh-Yck/maxresdefault.jpg"} {"video_id": "0O1UXKh-Yck", "text": "from the paper where it says that the algorithm basically augments the unlabeled image and tries and classifies it based on the number of augmentations and then draws an average to guess the label of this unlabeled image that it has after it has this average getting predicted or the guessed label getting predicted that guessed label gets sharpened uh what sharpening does is it basically moves the line of prediction or the line where the decisions are made away from the higher density association of the data points", "start_timestamp": "00:09:26", "end_timestamp": "00:10:12", "start_second": 566, "end_second": 612, "url": "https://www.youtube.com/watch?v=0O1UXKh-Yck&t=566s", "title": "MixMatch: A Holistic Approach to Semi-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/0O1UXKh-Yck/maxresdefault.jpg"} {"video_id": "0O1UXKh-Yck", "text": "uh because that makes it easier to give a consistent prediction and also increases the confidence of the predicted label as you see in label guessing we only guess the label of the unlabeled data however we do augmentations on both labeled and unlabeled data to come to this um label guessing wherein the label is guessed based on whatever is in the labeled data as well as in the unlabeled data but transformations are mostly done on the unlabeled data as many times as required but according to what the authors have seen", "start_timestamp": "00:10:12", "end_timestamp": "00:11:03", "start_second": 612, "end_second": 663, "url": "https://www.youtube.com/watch?v=0O1UXKh-Yck&t=612s", "title": "MixMatch: A Holistic Approach to 
Semi-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/0O1UXKh-Yck/maxresdefault.jpg"} {"video_id": "0O1UXKh-Yck", "text": "and gotten the results uh k that is the number of augmentations uh is two which gave a result that did outperform all the standard state-of-the-art methods after this the sharpening formula or the equation that they've used where they do have another tuning parameter called t which is the temperature required to sharpen the image prediction or the label that is guessed based on the average prediction that is obtained in the previous step they take the decision boundary away and make the prediction more confident", "start_timestamp": "00:11:03", "end_timestamp": "00:11:50", "start_second": 663, "end_second": 710, "url": "https://www.youtube.com/watch?v=0O1UXKh-Yck&t=663s", "title": "MixMatch: A Holistic Approach to Semi-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/0O1UXKh-Yck/maxresdefault.jpg"} {"video_id": "0O1UXKh-Yck", "text": "so that when the loss is calculated on the unlabeled data it is not affecting the final prediction after sharpening what remains is two sets of data one is the augmented unlabeled data with the sharpened predictions and the labeled data which already has its own labels so what is done is a mixup based on this parameter tuning which is called alpha so what mixup basically does is let's say uh you have an image of a tiger and a cheetah and you want to see how much of one image corresponds to a tiger and how much one image", "start_timestamp": "00:11:50", "end_timestamp": "00:12:43", "start_second": 710, "end_second": 763, "url": "https://www.youtube.com/watch?v=0O1UXKh-Yck&t=710s", "title": "MixMatch: A Holistic Approach to Semi-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/0O1UXKh-Yck/maxresdefault.jpg"} {"video_id": "0O1UXKh-Yck", "text": "corresponds to cheetah so what you do is you mix these up together saying 80 20 and have a new label saying that that image 
is now classified as eighty percent of a tiger and twenty percent of a cheetah and that mixup is this alpha value which is again tuned based on your data sets and availability of the number of data or images that are collected i forgot to mention the sharpening temperature value that the authors have taken as a constant for their experiments it's 0.5 yeah i think it's 0.5 after the first", "start_timestamp": "00:12:43", "end_timestamp": "00:13:31", "start_second": 763, "end_second": 811, "url": "https://www.youtube.com/watch?v=0O1UXKh-Yck&t=763s", "title": "MixMatch: A Holistic Approach to Semi-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/0O1UXKh-Yck/maxresdefault.jpg"} {"video_id": "0O1UXKh-Yck", "text": "pattern let me just quickly go back and see um i think it's in the next slide so we can look yeah it's 0.5 so what they've tuned it to uh to sharpen the prediction then they tune in the alpha parameter to have a proper mixup of the labeled and unlabeled data and they form a big bunch of mixed data set which has unlabeled as well as labeled data set and i think we've been just talking too much of labeled and unlabeled this is the most important factor of a semi-supervised learning method wherein", "start_timestamp": "00:13:31", "end_timestamp": "00:14:19", "start_second": 811, "end_second": 859, "url": "https://www.youtube.com/watch?v=0O1UXKh-Yck&t=811s", "title": "MixMatch: A Holistic Approach to Semi-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/0O1UXKh-Yck/maxresdefault.jpg"} {"video_id": "0O1UXKh-Yck", "text": "you have your training set that comprises of labeled as well as unlabeled data the reason being it increases the chances of getting more labeled data which in turn will be fed to your model to have a larger training set than what was initially available so all this is getting done to ensure that your training set increases in number than what you have earlier the last step in the algorithm is 
calculating the loss because we have a labeled set the first loss that gets calculated is the cross entropy loss for supervised", "start_timestamp": "00:14:19", "end_timestamp": "00:15:01", "start_second": 859, "end_second": 901, "url": "https://www.youtube.com/watch?v=0O1UXKh-Yck&t=859s", "title": "MixMatch: A Holistic Approach to Semi-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/0O1UXKh-Yck/maxresdefault.jpg"} {"video_id": "0O1UXKh-Yck", "text": "learning and the second one as sapna mentioned is the l2 loss for the unlabeled learning i would stop here and ask if there were any questions okay i go forward so this slide explains what i just talked about but in a more um direct way and what all they have tuned in so the key factors to keep in mind for this algorithm to perform better or worse is tuning these hyperparameters the number of augmentations k the sharpening temperature t the parameter for mixup that's alpha and the weight of the unsupervised", "start_timestamp": "00:15:01", "end_timestamp": "00:15:59", "start_second": 901, "end_second": 959, "url": "https://www.youtube.com/watch?v=0O1UXKh-Yck&t=901s", "title": "MixMatch: A Holistic Approach to Semi-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/0O1UXKh-Yck/maxresdefault.jpg"} {"video_id": "0O1UXKh-Yck", "text": "consistency loss lambda which is a hundred in this case um the authors performed two sets of experiments one was a normal semi-supervised experiment set up where they compared it to the state of the art uh semi-supervised learning models and um the other one is um the method in which they cut out all the additional um transformations or sharpening or um any extra effort that they've put in to get into the efficiency or the accuracy level that the model performs to show which uh bit in the algorithm performed the best", "start_timestamp": "00:15:59", "end_timestamp": "00:16:49", "start_second": 959, "end_second": 1009, "url": 
"https://www.youtube.com/watch?v=0O1UXKh-Yck&t=959s", "title": "MixMatch: A Holistic Approach to Semi-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/0O1UXKh-Yck/maxresdefault.jpg"} {"video_id": "0O1UXKh-Yck", "text": "to get in the desired results of the desired accuracy at the end so for their semi-supervised experiment set up uh they used a wide resnet model with 28 layers and a growth factor of two the data sets were the standard data sets cifar-10 cifar-100 the svhn which is the street view house numbers and the stl-10 uh the models to compare were the mean teacher as sapna explained in her brief introduction about what mean teacher does it is one of the semi-supervised learning methods wherein the labels are based on the exponential", "start_timestamp": "00:16:49", "end_timestamp": "00:17:27", "start_second": 1009, "end_second": 1047, "url": "https://www.youtube.com/watch?v=0O1UXKh-Yck&t=1009s", "title": "MixMatch: A Holistic Approach to Semi-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/0O1UXKh-Yck/maxresdefault.jpg"} {"video_id": "0O1UXKh-Yck", "text": "moving average a virtual adversarial training again a semi-supervised method and the pseudo labeling again a semi-supervised standard model so you see from the results where the model that they used initially was the cifar-10 data set a supervised training of 50 000 examples was trained to give the said accuracy but in comparison mixmatch used just 250 labels to give the desired output similarly they did it with svhn again with 73 257 examples with no unlabeled data at all and they got in the said", "start_timestamp": "00:17:27", "end_timestamp": "00:18:21", "start_second": 1047, "end_second": 1101, "url": "https://www.youtube.com/watch?v=0O1UXKh-Yck&t=1047s", "title": "MixMatch: A Holistic Approach to Semi-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/0O1UXKh-Yck/maxresdefault.jpg"} {"video_id": "0O1UXKh-Yck", "text": "with labels of 
250 again um it gave the said efficiency so what i mean by 250 labels and 73 257 examples is that the entire data set was used for training purposes for this particular supervised uh experiment out of the 73 257 only 250 labels were used to train the remainder of the unlabeled data set to get in the performance that it shows in the results and this is the ablation study set up where i said that they talked about adding or removing additional components that they used to come to this level to show which", "start_timestamp": "00:18:21", "end_timestamp": "00:19:17", "start_second": 1101, "end_second": 1157, "url": "https://www.youtube.com/watch?v=0O1UXKh-Yck&t=1101s", "title": "MixMatch: A Holistic Approach to Semi-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/0O1UXKh-Yck/maxresdefault.jpg"} {"video_id": "0O1UXKh-Yck", "text": "of the exact step in the algorithm performed better than the rest so you see the first one is about the mixmatch where it's 100 percent mixmatch and it gave an error rate of 11 percent for 250 labels and an error rate of six percent which is quite good for 4000 labels they removed the distribution averaging or the guessed labels and set the number of transformations to one it gave a 17 percent error rate again with no sharpening it gave 27 percent they um did mixup with labeled data only they did it with unlabeled", "start_timestamp": "00:19:17", "end_timestamp": "00:20:04", "start_second": 1157, "end_second": 1204, "url": "https://www.youtube.com/watch?v=0O1UXKh-Yck&t=1157s", "title": "MixMatch: A Holistic Approach to Semi-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/0O1UXKh-Yck/maxresdefault.jpg"} {"video_id": "0O1UXKh-Yck", "text": "data mixup and you see in this that the mixup is benefiting the performance of this model um going to the end uh concluding what we have understood from the paper or the basic purpose of what the authors wanted to showcase using all the components that they have in 
creating the algorithm it is seen that the hyperparameters that they've used the augmentations the sharpening the mixup and the loss that they've calculated seem to have contributed quite well in the performance of the model", "start_timestamp": "00:20:04", "end_timestamp": "00:20:48", "start_second": 1204, "end_second": 1248, "url": "https://www.youtube.com/watch?v=0O1UXKh-Yck&t=1204s", "title": "MixMatch: A Holistic Approach to Semi-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/0O1UXKh-Yck/maxresdefault.jpg"} {"video_id": "0O1UXKh-Yck", "text": "but out of that as you've seen in the previous slide the mixup seems to be the most important factor contributing to this a mixup again is a tuning hyperparameter and if you tune it further you probably might get better results if you don't tune it as much you might get worse results so these were the positives that we could see in the paper however there is a bit of a negative that comes out through this algorithm that the time and cost needed to generate all the transformations the mixup", "start_timestamp": "00:20:48", "end_timestamp": "00:21:37", "start_second": 1248, "end_second": 1297, "url": "https://www.youtube.com/watch?v=0O1UXKh-Yck&t=1248s", "title": "MixMatch: A Holistic Approach to Semi-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/0O1UXKh-Yck/maxresdefault.jpg"} {"video_id": "0O1UXKh-Yck", "text": "and have a neural network set up on it does take a lot of time and it also requires additional gpus to be set in which perhaps could be seen as an additional overhead but it still does perform a lot better also it needs a lot less data to start to train in terms of having any expert coming in labeling the training set saying that yes this is an a label this is a b label so you see that this method does showcase that it is one of the better ways to go forward in the semi-supervised space for further reading i have 
listed a", "start_timestamp": "00:21:37", "end_timestamp": "00:22:27", "start_second": 1297, "end_second": 1347, "url": "https://www.youtube.com/watch?v=0O1UXKh-Yck&t=1297s", "title": "MixMatch: A Holistic Approach to Semi-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/0O1UXKh-Yck/maxresdefault.jpg"} {"video_id": "0O1UXKh-Yck", "text": "couple of uh papers to go ahead and understand where the thought for building up this model came through um and i think this is the last slide if you have any questions i should be glad to answer them um did you try to use the implementation that they provided online yes i did try it i tried it on the wound data set and i could share my uh code base with you to give it a try so the results were really good did it improve uh well we don't have enough of um data for the wound data sets so i would say it", "start_timestamp": "00:22:27", "end_timestamp": "00:23:22", "start_second": 1347, "end_second": 1402, "url": "https://www.youtube.com/watch?v=0O1UXKh-Yck&t=1347s", "title": "MixMatch: A Holistic Approach to Semi-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/0O1UXKh-Yck/maxresdefault.jpg"} {"video_id": "0O1UXKh-Yck", "text": "did perform good uh i'm not really sure whether it will perform better uh because the data was quite less but i do see that it will go ahead and do a better job gotcha awesome thanks any further questions i don't have a question but i was wondering if you could sure um the wound data that you tried to run it on was that one type of wound you used or which category of wound did you use all of them it's a semi-supervised learning space so labels are given for all and then i took in half of that as an unlabeled set and not have labels for", "start_timestamp": "00:23:22", "end_timestamp": "00:24:32", "start_second": 1402, "end_second": 1472, "url": "https://www.youtube.com/watch?v=0O1UXKh-Yck&t=1402s", "title": "MixMatch: A Holistic Approach to Semi-Supervised 
Learning", "thumbnail": "https://i.ytimg.com/vi/0O1UXKh-Yck/maxresdefault.jpg"} {"video_id": "0O1UXKh-Yck", "text": "it okay could you maybe share that on the channel yes appreciate it sure thank you so yeah i have a question uh so what are the other like i'm not quite like uh quite familiar with semi-supervised vanny algorithms so what is the like what's the state of the art between before this algorithm what what what did they compare their girlfriends to in this paper um so he didn't quite understand your question so so before so when they introduce these algorithms so they have they should compare it to something that how were things done before this uh mix", "start_timestamp": "00:24:32", "end_timestamp": "00:25:30", "start_second": 1472, "end_second": 1530, "url": "https://www.youtube.com/watch?v=0O1UXKh-Yck&t=1472s", "title": "MixMatch: A Holistic Approach to Semi-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/0O1UXKh-Yck/maxresdefault.jpg"} {"video_id": "0O1UXKh-Yck", "text": "of algorithms for semi-supervised so they did us uh if you see in here um uh they've used in a supervised method wherein they used all the 50 000 examples that were labeled so they needed labels for 50 000 items or entities and they had to train based on that but with the mix match coming in they reduced the number of training set or labeled set to 250. 
if you know what i mean so let me go back and i'll explain a bit of the semi-supervised learning space so the semi-supervised learning space needs both a bit of a label", "start_timestamp": "00:25:30", "end_timestamp": "00:26:16", "start_second": 1530, "end_second": 1576, "url": "https://www.youtube.com/watch?v=0O1UXKh-Yck&t=1530s", "title": "MixMatch: A Holistic Approach to Semi-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/0O1UXKh-Yck/maxresdefault.jpg"} {"video_id": "0O1UXKh-Yck", "text": "set which is a training set and a lot of unlabeled data because what it does is take a little bit of the labeled set and a bit of the unlabeled set and form a mix of labeled and unlabeled and the reason being that it tries to label all the unlabeled examples that it has in this mix so that the end training sample has got more no i know i know the idea behind it but i'm not sure i was just asking what was the state of the art before i mean before this mix of algorithms for semi-supervised it was a simple", "start_timestamp": "00:26:16", "end_timestamp": "00:27:07", "start_second": 1576, "end_second": 1627, "url": "https://www.youtube.com/watch?v=0O1UXKh-Yck&t=1576s", "title": "MixMatch: A Holistic Approach to Semi-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/0O1UXKh-Yck/maxresdefault.jpg"} {"video_id": "0O1UXKh-Yck", "text": "semi-supervised approach wherein you have iterations where you take in from your unlabeled set perform some basic transformations or have some sharpening done for the confidence of your label predictions say the prediction probability or maybe have co-training methods that have two classifiers learning on the same view etc and then give your prediction so mix match as such introduces all of these methods together they didn't stop at just using simple transformations or simple decision-boundary-making algorithms", "start_timestamp": "00:27:07", "end_timestamp": "00:27:55",
"start_second": 1627, "end_second": 1675, "url": "https://www.youtube.com/watch?v=0O1UXKh-Yck&t=1627s", "title": "MixMatch: A Holistic Approach to Semi-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/0O1UXKh-Yck/maxresdefault.jpg"} {"video_id": "0O1UXKh-Yck", "text": "or a mixed match they this is a combination of all three together so state of art didn't have this the novelty of this paper is mixing all of these uh semi-supervised learning approaches together in one to come up with uh their holistic approach so that's why the name okay so basically though like uh those uh pseudo label and v80 and uh was it these are also semi supervised approaches right yes okay uh i have a question so suppose we take three examples right extending what you said cheetahs tigers and we add a", "start_timestamp": "00:27:55", "end_timestamp": "00:28:50", "start_second": 1675, "end_second": 1730, "url": "https://www.youtube.com/watch?v=0O1UXKh-Yck&t=1675s", "title": "MixMatch: A Holistic Approach to Semi-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/0O1UXKh-Yck/maxresdefault.jpg"} {"video_id": "0O1UXKh-Yck", "text": "third one leopards um and uh so cheetah and tiger are in the 250 labels that are used and leopards are in the 4000. 
does that mean all the cheetahs and all the tigers in your data set are labeled or are some of them unlabeled some of them could be unlabeled or some of them could be labeled it can be both it all depends on the percentage of unlabeled data that you have thank you hi i'm curious about one thing you say that you tested with the wound data set did you try to run it against the fastai platform yes yes with fastai", "start_timestamp": "00:28:50", "end_timestamp": "00:29:57", "start_second": 1730, "end_second": 1797, "url": "https://www.youtube.com/watch?v=0O1UXKh-Yck&t=1730s", "title": "MixMatch: A Holistic Approach to Semi-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/0O1UXKh-Yck/maxresdefault.jpg"} {"video_id": "0O1UXKh-Yck", "text": "yeah and how is their performance with this method compared to fastai for the training set i did get about 64 to 69 percent accuracy on the training set initially and with this it did bump up to about 70 but i wasn't sure whether it was the right thing that i was doing so i'm still playing around with it but i know where the tuning needs to be done so i should be able to get a better accuracy on that this is on the training set i've not yet gone on to", "start_timestamp": "00:29:57", "end_timestamp": "00:30:37", "start_second": 1797, "end_second": 1837, "url": "https://www.youtube.com/watch?v=0O1UXKh-Yck&t=1797s", "title": "MixMatch: A Holistic Approach to Semi-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/0O1UXKh-Yck/maxresdefault.jpg"} {"video_id": "0O1UXKh-Yck", "text": "testing the model on the test data they'd probably be higher than that yeah yeah okay thank you any further questions should we say that's it then okay thank you very much if you have any questions please get back to me i should be glad to answer them and i will share the link to my notebook i am still a little not well versed in notebooks i write it in
a prehistoric language in an editor and things like that but i will move on to notebooks soon and i'll post it there so thank you very much for attending", "start_timestamp": "00:30:37", "end_timestamp": "00:32:22", "start_second": 1837, "end_second": 1942, "url": "https://www.youtube.com/watch?v=0O1UXKh-Yck&t=1837s", "title": "MixMatch: A Holistic Approach to Semi-Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/0O1UXKh-Yck/maxresdefault.jpg"} {"video_id": "5cySIwg49RI", "text": "[Music] thanks for watching Henry AI Labs this video will present the semi-weak supervised learning framework presented by researchers at Facebook's AI research lab this framework is a really interesting extension to their previous work on weak supervision such as using the hashtags on Instagram images as a weakly supervised signal to pretrain imagenet classification models this research is going to extend this idea to integrate semi-supervised learning as well as weakly supervised learning and then introduce a lot of other", "start_timestamp": "00:00:00", "end_timestamp": "00:00:27", "start_second": 0, "end_second": 27, "url": "https://www.youtube.com/watch?v=5cySIwg49RI&t=0s", "title": "Semi-Weak Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/5cySIwg49RI/maxresdefault.jpg"} {"video_id": "5cySIwg49RI", "text": "interesting ideas like incorporating model distillation into this framework and looking at the class imbalance evident in these unlabeled data sets this video will present the research paper billion-scale semi-supervised learning for image classification from researchers at Facebook AI research this animation from Facebook's blog post on billion-scale semi-supervised learning shows the idea of their semi-supervised training framework before integrating weak supervision so in this case their take on semi-supervised learning is different", "start_timestamp": "00:00:27", "end_timestamp": "00:00:51", "start_second": 27, "end_second": 51, "url":
"https://www.youtube.com/watch?v=5cySIwg49RI&t=27s", "title": "Semi-Weak Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/5cySIwg49RI/maxresdefault.jpg"} {"video_id": "5cySIwg49RI", "text": "from other definitions of semi-supervised learning such as rotating an image and then predicting the rotation angle or something like word to vector wave defect where you mask out certain parts of the sequence and then train the model to predict the missing part of the sequence this idea of semi-supervised learning is to have a label data set such as the image net data set train a large capacity model like a res NEX 101 32 by 48 group convolutions res next architecture and then use this massive high capacity", "start_timestamp": "00:00:51", "end_timestamp": "00:01:19", "start_second": 51, "end_second": 79, "url": "https://www.youtube.com/watch?v=5cySIwg49RI&t=51s", "title": "Semi-Weak Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/5cySIwg49RI/maxresdefault.jpg"} {"video_id": "5cySIwg49RI", "text": "model to predict the labels on an unlabeled data set and then you would use these unlabeled the softmax distribution of these predictions to pre train the target model ie as in model distillation such as what powers models like hugging faces distill burped then you'll fine tune the model that's been trained with model distillation on the label data set and this is your new model so one of the interesting things we already see about this is the novel use of model distillation as semi-supervised learning it's not really", "start_timestamp": "00:01:19", "end_timestamp": "00:01:45", "start_second": 79, "end_second": 105, "url": "https://www.youtube.com/watch?v=5cySIwg49RI&t=79s", "title": "Semi-Weak Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/5cySIwg49RI/maxresdefault.jpg"} {"video_id": "5cySIwg49RI", "text": "common to use these two terms together semi-supervised learning and model distillation also interesting lis is we can see this kind of model 
compression that arises from this framework you can have a really high capacity teacher model like the ResNeXt-101 32x48d and then you can have a lower capacity more manageable probably faster inference time lower storage cost model like the ResNet-50 that you could deploy on mobile and IoT and these kinds of things this animation shows the extension from the semi-supervised training framework", "start_timestamp": "00:01:45", "end_timestamp": "00:02:12", "start_second": 105, "end_second": 132, "url": "https://www.youtube.com/watch?v=5cySIwg49RI&t=105s", "title": "Semi-Weak Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/5cySIwg49RI/maxresdefault.jpg"} {"video_id": "5cySIwg49RI", "text": "to semi-weakly supervised training so in this case instead of having some massive unlabeled data set we have a weakly supervised data set so instead of just having a collection of images we have a weak label such as the hashtags on Instagram images and so the thing with the weakly supervised hashtags on Instagram images is that they're really subjective they're noisy and not really as precisely labeled as say the data from imagenet so in this model we're going to pre-train the teacher model same idea of having some larger", "start_timestamp": "00:02:12", "end_timestamp": "00:02:37", "start_second": 132, "end_second": 157, "url": "https://www.youtube.com/watch?v=5cySIwg49RI&t=132s", "title": "Semi-Weak Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/5cySIwg49RI/maxresdefault.jpg"} {"video_id": "5cySIwg49RI", "text": "capacity teacher model smaller capacity student network so we're gonna pre-train it with the weakly supervised data set fine-tune it with imagenet then we're gonna use the fine-tuned model after having been pre-trained to predict the softmax distribution over the weakly supervised data set and then we use this model distillation knowledge distillation in order to train our student network we're gonna fine-tune the student network and
then we have our trained model some of the interesting issues that the research paper raises is", "start_timestamp": "00:02:37", "end_timestamp": "00:03:00", "start_second": 157, "end_second": 180, "url": "https://www.youtube.com/watch?v=5cySIwg49RI&t=157s", "title": "Semi-Weak Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/5cySIwg49RI/maxresdefault.jpg"} {"video_id": "5cySIwg49RI", "text": "this idea of class imbalance in unlabeled weakly supervised data sets so one characteristic of machine learning models and you know deep learning convolutional image classifiers is that class imbalance can really destroy the performance so for example if you're training a cat versus dog image classifier and 80% of your training data is cats and 20% is dogs your trained model is going to want to predict cats it's going to be biased towards the imbalanced data so in these weakly supervised data sets such", "start_timestamp": "00:03:00", "end_timestamp": "00:03:26", "start_second": 180, "end_second": 206, "url": "https://www.youtube.com/watch?v=5cySIwg49RI&t=180s", "title": "Semi-Weak Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/5cySIwg49RI/maxresdefault.jpg"} {"video_id": "5cySIwg49RI", "text": "as hashtags on Instagram images and then trying to transfer that into imagenet classification there's going to be like a long-tail distribution where you're not gonna have as many of these really specific imagenet classes contained in this data set so another interesting idea is just incorporating model distillation into the semi-supervised learning this teacher-student model compression and then this framework is gonna achieve 81.2% imagenet top-1 accuracy with a ResNet-50 and they're gonna scale this", "start_timestamp": "00:03:26", "end_timestamp": "00:03:52", "start_second": 206, "end_second": 232, "url": "https://www.youtube.com/watch?v=5cySIwg49RI&t=206s", "title": "Semi-Weak Supervised Learning", "thumbnail":
"https://i.ytimg.com/vi/5cySIwg49RI/maxresdefault.jpg"} {"video_id": "5cySIwg49RI", "text": "up to the res next 101 32 X 16 D with the student network capacity and they're gonna achieve 80 4.8% and this is up from eighty four point two from their previous research on doing a lot of label engineering for the weekly supervised Instagram dataset and then in that previous study they had achieved eighty five point four but it is a larger capacity bottle at the 48 D and they probably didn't test the 48 D because you know it's expensive and time-consuming to train these kinds of models this research from Facebook's and", "start_timestamp": "00:03:52", "end_timestamp": "00:04:19", "start_second": 232, "end_second": 259, "url": "https://www.youtube.com/watch?v=5cySIwg49RI&t=232s", "title": "Semi-Weak Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/5cySIwg49RI/maxresdefault.jpg"} {"video_id": "5cySIwg49RI", "text": "research lab is in line with a lot of their other work such as using weekly supervised pre training for video action recognition using over 65 million images like from Instagram as well as using billions of images in their weekly supervised pre-training of an image net image classifier in this case they do achieve a slightly higher performance but they do have larger capacity models and also interestingly is that in this case they are manually doing the you know kind of removing some of the noise from the week supervised data set", "start_timestamp": "00:04:19", "end_timestamp": "00:04:44", "start_second": 259, "end_second": 284, "url": "https://www.youtube.com/watch?v=5cySIwg49RI&t=259s", "title": "Semi-Weak Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/5cySIwg49RI/maxresdefault.jpg"} {"video_id": "5cySIwg49RI", "text": "whereas this framework is gonna present an automatic way automated way of doing this an interesting characteristic of the newest semi weak supervised training framework is they're going to use an explicit algorithm to 
balance the classes in the predicted distribution from the weakly supervised data set so this teacher model has been trained on the imagenet classification task but the weakly supervised data set probably isn't as balanced as imagenet classification it's probably heavily skewed towards some classes more", "start_timestamp": "00:04:44", "end_timestamp": "00:05:07", "start_second": 284, "end_second": 307, "url": "https://www.youtube.com/watch?v=5cySIwg49RI&t=284s", "title": "Semi-Weak Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/5cySIwg49RI/maxresdefault.jpg"} {"video_id": "5cySIwg49RI", "text": "than others such as dogs and things like that compared to these really specific things like i don't know a tiger shark and things like this so this visualization shows the top-k scoring of examples so as the teacher model predicts a distribution of class labels over the unlabeled data or the weakly labeled data the examples are going to be ranked according to their class probability and then they're going to be balanced in this way so that each class has an even number of training samples and in this", "start_timestamp": "00:05:07", "end_timestamp": "00:05:34", "start_second": 307, "end_second": 334, "url": "https://www.youtube.com/watch?v=5cySIwg49RI&t=307s", "title": "Semi-Weak Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/5cySIwg49RI/maxresdefault.jpg"} {"video_id": "5cySIwg49RI", "text": "visualization you see how at rank 1000 the example is very close to the image class like the leaf beetle looks like a beetle whereas by rank 10000 or 16000 it's not really a beetle anymore but the teacher model has given some probability to beetle when predicting the distribution for that image another interesting issue with this framework that's raised in the paper is the idea of inference time with model distillation so in this case we're predicting over a billion unlabeled or weakly labeled images with our teacher",
"start_timestamp": "00:05:34", "end_timestamp": "00:06:01", "start_second": 334, "end_second": 361, "url": "https://www.youtube.com/watch?v=5cySIwg49RI&t=334s", "title": "Semi-Weak Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/5cySIwg49RI/maxresdefault.jpg"} {"video_id": "5cySIwg49RI", "text": "model so we want to have fast inference time for this prediction is much more important in this case than it is for typical examples of model distillation where the data sets aren't that large when you have billions of images you want to make sure the teacher model has fast inference I think with the rising popularity of knowledge distillation model distillation techniques such as hugging faces disco Bert and now Facebook semi weekly supervised training paradigm we're gonna see these kinds of inference accelerators like in videos", "start_timestamp": "00:06:01", "end_timestamp": "00:06:24", "start_second": 361, "end_second": 384, "url": "https://www.youtube.com/watch?v=5cySIwg49RI&t=361s", "title": "Semi-Weak Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/5cySIwg49RI/maxresdefault.jpg"} {"video_id": "5cySIwg49RI", "text": "tensor Artie accelerator to be more and more important because usually these frameworks are developed for inference when the model has been deployed but now we're seeing the inference be a part of the training loop as well in this teacher-student Maalik model distillation paradigm from their paper the researchers at Facebook give these six guideline recommendations for large-scale semi-supervised learning so the first idea is really interesting this teacher-student model distillation paradigm also really interestingly and", "start_timestamp": "00:06:24", "end_timestamp": "00:06:47", "start_second": 384, "end_second": 407, "url": "https://www.youtube.com/watch?v=5cySIwg49RI&t=384s", "title": "Semi-Weak Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/5cySIwg49RI/maxresdefault.jpg"} {"video_id": "5cySIwg49RI", "text": 
"uniquely in this paper is that they're going to test model distillation when the student and the teacher have the same architecture or the same capacity the second idea is to fine tune the model with true labels only this is a pretty intuitive idea the weekly supervised label dataset has a ton of noise in it compared to the imagenet data set or you know other more specific label data sets a third idea is that large-scale unlabeled data is key to this performance naturally the key driver behind this algorithm is that", "start_timestamp": "00:06:47", "end_timestamp": "00:07:12", "start_second": 407, "end_second": 432, "url": "https://www.youtube.com/watch?v=5cySIwg49RI&t=407s", "title": "Semi-Weak Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/5cySIwg49RI/maxresdefault.jpg"} {"video_id": "5cySIwg49RI", "text": "they have a billion images from Instagram that they're using to train this model the fourth idea is really interesting that they use a large number of training iterations for their pre training with the weekly supervised learning compared to you know more pre-training iterations compared to normal supervised learning the fifth idea is a novel contribution to this paper the idea of having a balanced distribution for inferred labels so when you're doing the model distillation you want you don't want to have class and", "start_timestamp": "00:07:12", "end_timestamp": "00:07:37", "start_second": 432, "end_second": 457, "url": "https://www.youtube.com/watch?v=5cySIwg49RI&t=432s", "title": "Semi-Weak Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/5cySIwg49RI/maxresdefault.jpg"} {"video_id": "5cySIwg49RI", "text": "balance in the distribution of these labels and the sixth idea is that pre training the high capacity teacher model week supervision further improves the results the idea of adding the week supervised to make this the semi week supervised learning framework now we'll get into some of the results of their research report you can 
check out their repository on github semi-supervised imagenet 1k models where you have the pre-trained models that you can load with torch hub and then they also present some of the results you see the", "start_timestamp": "00:07:37", "end_timestamp": "00:08:01", "start_second": 457, "end_second": 481, "url": "https://www.youtube.com/watch?v=5cySIwg49RI&t=457s", "title": "Semi-Weak Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/5cySIwg49RI/maxresdefault.jpg"} {"video_id": "5cySIwg49RI", "text": "semi-supervised learning framework different model architectures from ResNet-18 and 50 up to the ResNeXt group convolution architectures and then you see up to the 84.8% accuracy when using the semi-weakly supervised learning framework with 193 million parameters on the ResNeXt-101 32x16d architecture the first set of results they present shows the success of the semi-supervised learning framework with different student models so first we're looking at the ResNet-18 the ResNet-50 and then higher versions of the", "start_timestamp": "00:08:01", "end_timestamp": "00:08:28", "start_second": 481, "end_second": 508, "url": "https://www.youtube.com/watch?v=5cySIwg49RI&t=481s", "title": "Semi-Weak Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/5cySIwg49RI/maxresdefault.jpg"} {"video_id": "5cySIwg49RI", "text": "ResNeXts the 50 and 101 higher capacity ResNeXt variations so we see that the fine-tuned semi-supervised learning framework is always outperforming the fully supervised baseline when you just train the student model on imagenet classification then they present this idea of varying the complexity of the teacher model and showing how increasing the capacity of the teacher model increases the accuracy of the student model we see the gains are increasing every time as we scale up the capacity of the teacher model while holding the", "start_timestamp": "00:08:28", "end_timestamp": "00:08:55", "start_second": 508, "end_second": 535,
"url": "https://www.youtube.com/watch?v=5cySIwg49RI&t=508s", "title": "Semi-Weak Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/5cySIwg49RI/maxresdefault.jpg"} {"video_id": "5cySIwg49RI", "text": "student model with constant this plot shows the results when the teacher and the student model have the same architectural capacity interestingly we still see these gains when we're using the same capacity for each model this plot shows how the top one accuracy changes as a function of the unlabeled data site data set size used as the you know the unlabeled or the weekly supervised data in this pipeline so we see that the performance continues to increase as a data set gets larger and larger following the recommendation from", "start_timestamp": "00:08:55", "end_timestamp": "00:09:22", "start_second": 535, "end_second": 562, "url": "https://www.youtube.com/watch?v=5cySIwg49RI&t=535s", "title": "Semi-Weak Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/5cySIwg49RI/maxresdefault.jpg"} {"video_id": "5cySIwg49RI", "text": "their guideline that having massive unlabeled data sets is key to this framework this plot shows how the accuracy improves as a function of the number of pre training iterations so as stated in their recommendations they use a much larger amount of pre training epochs than supervised learning epochs really interesting Lee is there showing four billion training iterations it achieves the highest accuracy in this plot then they show the results of increasing the K parameter so the K parameter shown here is this idea of", "start_timestamp": "00:09:22", "end_timestamp": "00:09:48", "start_second": 562, "end_second": 588, "url": "https://www.youtube.com/watch?v=5cySIwg49RI&t=562s", "title": "Semi-Weak Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/5cySIwg49RI/maxresdefault.jpg"} {"video_id": "5cySIwg49RI", "text": "scoring the class label distributions when you're doing the model distillation from the teacher Network so 
basically the idea is if you increase the K from say 8K to 16K and then you're looking at a specific class such as leaf beetle as you get towards the end of the 8K and then especially from the top eight thousand and one to the top sixteen thousand the images are going to look less and less like beetles they've just been assigned some probability as beetle and now they're a part of the", "start_timestamp": "00:09:48", "end_timestamp": "00:10:15", "start_second": 588, "end_second": 615, "url": "https://www.youtube.com/watch?v=5cySIwg49RI&t=588s", "title": "Semi-Weak Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/5cySIwg49RI/maxresdefault.jpg"} {"video_id": "5cySIwg49RI", "text": "balanced data set because you've increased this K parameter so they show how there is a limiting effect to how large you increase the K because naturally as you increase the K past a certain threshold you're making your knowledge distillation data set for your student network really imbalanced and deep learning machine learning these kinds of decision boundary models do not respond well to class imbalance although they don't show things like random oversampling a lot of the techniques commonly used to overcome class", "start_timestamp": "00:10:15", "end_timestamp": "00:10:43", "start_second": 615, "end_second": 643, "url": "https://www.youtube.com/watch?v=5cySIwg49RI&t=615s", "title": "Semi-Weak Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/5cySIwg49RI/maxresdefault.jpg"} {"video_id": "5cySIwg49RI", "text": "imbalance this table shows the evidence of the balancing of the distribution with these ablation studies such as balancing the dataset versus leaving it unbalanced showing a 0.8 percent accuracy improvement which is very significant for imagenet classification and then also the idea of using the Instagram tags versus ranking the list of the predicted distributions and then comparing all these
performances to supervised learning this table is showing the big highlights of the paper you see they achieve 81.2% accuracy with", "start_timestamp": "00:10:43", "end_timestamp": "00:11:10", "start_second": 643, "end_second": 670, "url": "https://www.youtube.com/watch?v=5cySIwg49RI&t=643s", "title": "Semi-Weak Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/5cySIwg49RI/maxresdefault.jpg"} {"video_id": "5cySIwg49RI", "text": "the ResNet-50 architecture and how this is state of the art compared to previous works trying to achieve weakly supervised learning at the same model capacity then you can see the head-to-head comparison with their previous work on you know label engineering the weakly supervised learning and you do see how the accuracy starts to saturate at the higher model capacities of the ResNeXt architecture another really interesting characteristic of this framework is its success on transfer learning so when they transfer this pre-trained model", "start_timestamp": "00:11:10", "end_timestamp": "00:11:36", "start_second": 670, "end_second": 696, "url": "https://www.youtube.com/watch?v=5cySIwg49RI&t=670s", "title": "Semi-Weak Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/5cySIwg49RI/maxresdefault.jpg"} {"video_id": "5cySIwg49RI", "text": "from imagenet to the bird image classification task they achieve a really high level of transfer learning performance compared to previous approaches this chart shows the difference between fine-tuning just the fully connected layer at the end of the network compared to the full network and then the performance achieved after doing this they also test this model on video classification with the deepmind kinetics data set and they show a significant improvement achieving seventy five point nine percent accuracy", "start_timestamp": "00:11:36", "end_timestamp": "00:12:00", "start_second": 696, "end_second": 720, "url": "https://www.youtube.com/watch?v=5cySIwg49RI&t=696s", "title": "Semi-Weak
Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/5cySIwg49RI/maxresdefault.jpg"} {"video_id": "5cySIwg49RI", "text": "using this technique compared to the previous research achieving seventy four point eight percent accuracy thanks for watching this presentation of semi weak supervised learning from facebook's AI research lab this research paper has presented a really interesting framework for semi-supervised learning and weekly supervised learning and integrating this model distillation paradigm there are some really interesting ideas presented in this paper such as the importance of how they having a balanced class distribution for the model distillation", "start_timestamp": "00:12:00", "end_timestamp": "00:12:23", "start_second": 720, "end_second": 743, "url": "https://www.youtube.com/watch?v=5cySIwg49RI&t=720s", "title": "Semi-Weak Supervised Learning", "thumbnail": "https://i.ytimg.com/vi/5cySIwg49RI/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "eighteen secrets that lie hidden in your subconscious mind as I continue to do the work of going deep within my own subconscious mind which is really a net result of the training that I put together which was released earlier this year on programming your subconscious mind I recognize that there's far more to what goes on in the subconscious mind and how we create reality based on what's on our subconscious mind so much so that I've been making a lot of videos lately on this topic and what I've done in this video is I've pulled 18 key", "start_timestamp": "00:00:00", "end_timestamp": "00:00:37", "start_second": 0, "end_second": 37, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=0s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "distinctions that we're going to discuss from the perspectives of neville goddard and napoleon hill who I 
believe are masters at working with the subconscious mind and these 18 distinctions are reflections insights perspectives that I've gathered as a result of working with their information and as a result of working with what I had put together in my subconscious mind training program which by the way made a huge difference in my life because I think I might have mentioned this a few times but in order to test the", "start_timestamp": "00:00:37", "end_timestamp": "00:01:10", "start_second": 37, "end_second": 70, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=37s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "effectiveness of reprogramming my subconscious mind around that time I decided to apply it towards attracting an ideal relationship and by using the exact principles in that program I found myself in my ideal relationship in fact my girlfriend is in front of me right now as we are recording this she's playing a huge integral role in what I do and together we create amazing content for sharing and I also want to add that on the list of qualities that I look for she meets every single one of them because I realized one very", "start_timestamp": "00:01:10", "end_timestamp": "00:01:51", "start_second": 70, "end_second": 111, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=70s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "important thing the vision is the identity and I made a video on this last video I did talking about the imagination being the identity and you can say the imagination and the vision is the identity both the same and also different as far as discussions go we say imagination and vision but the bottom line is this who you
are who you're destined to be is your true vision and you can create it by cleansing disempowering thoughts and elements in your subconscious mind which you have learned through five sensory", "start_timestamp": "00:01:51", "end_timestamp": "00:02:25", "start_second": 111, "end_second": 145, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=111s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "based input and meaning associated to it meaning that you have created that you've associated to the five sensory input data or meaning that you have learned from others so my goal in this video is to assist you on your journey to materialize your dreams to bring forth your vision I made a number of videos helping you uncover your vision and I'll continue to do so and in this video we're talking about releasing elements in your subconscious mind working with really deep elements in your subconscious mind to bring forth", "start_timestamp": "00:02:25", "end_timestamp": "00:02:56", "start_second": 145, "end_second": 176, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=145s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "that what you truly desire because your vision is yours and you have the power within you to bring forth your vision by working with the power of your subconscious mind neville goddard says so when you know what you want remain faithful to that assumption and the assumption though at the moment is denied by your senses and denied by reason if you persist in it it will harden into fact which brings us to our first point what gets hardened into fact controls automatic behaviors of the individual if you believe that you are", "start_timestamp": 
"00:02:56", "end_timestamp": "00:03:35", "start_second": 176, "end_second": 215, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=176s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "not worth it as far as your vision goes if you don't believe fully within you that the vision is your birthright it is who you are then you have elements that are within your subconscious mind that are being projected outwards and materialized into form through various different ways but primarily or it could even be secondary through your behaviors the way you carry yourself the way you connect with people the way you navigate reality the way you handle different people environment and circumstances in your life reveals to", "start_timestamp": "00:03:35", "end_timestamp": "00:04:14", "start_second": 215, "end_second": 254, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=215s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "you as to what you are assuming to be true in your subconscious mind which a lot of times can be hardened into fact now when it's hardened into fact what you'll notice is that in the external you will find all kinds of evidence to support that fact now if you believed in another assumption the polar opposite of that assumption and that was hardened into fact then you're gonna find plenty of evidence to prove to you that that assumption is true by the facts that you will find in the external world now what does this mean this means that the", "start_timestamp": "00:04:14", "end_timestamp": "00:04:52", "start_second": 254, "end_second": 292, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=254s", "title": "18 Secrets That Lie Hidden In Your 
Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "source is the thoughts within and the thoughts within are mostly subconscious we have a conscious mind we've got a subconscious mind and the subconscious mind is responsible for creating our reality and the subconscious mind has to become on board with our vision and the way we do this is by impressing our vision through our imagination or subconscious mind reconditioning or taking in information via our five senses that are in alignment and congruent to our vision till that hardens into fact and behaviors thoughts", "start_timestamp": "00:04:52", "end_timestamp": "00:05:30", "start_second": 292, "end_second": 330, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=292s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "and actions and whatever you do projects outwards interacts with the external world the environment people and circumstance to bring forth your vision number two what gets hardened into fact removes from an individual's consciousness that what is not related without the use of mental or emotional force what I mean by this if you believe reality to be a certain way and if that has hardened into fact you will find yourself surrounded by people environment circumstance and information that supports it and what will be", "start_timestamp": "00:05:30", "end_timestamp": "00:06:09", "start_second": 330, "end_second": 369, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=330s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "excluded from your consciousness is that what will not be 
related to that vision you won't be able to even see it it'll be like a blind spot now this can work to our advantage I'm not talking about being indifferent I'm talking about being in the spirit of harmony your true vision comes from who you really are your soul expression see I believe we have our conscious our subconscious and the super conscious the super conscious is the universal mind it is the one mind the single mind in which all individual expressions from that one", "start_timestamp": "00:06:09", "end_timestamp": "00:06:42", "start_second": 369, "end_second": 402, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=369s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "line contained within that one mind is our true vision and when we uncover that vision in our imagination and we honor that vision what gets included in our consciousness is that what is related to the vision which includes being in the spirit of harmony with all people environment and circumstance and what gets excluded from the consciousness is that what is inharmonious in thoughts feelings emotions and behaviors that create inharmony in the external world but the truth still remains that the source is within it is from mostly the", "start_timestamp": "00:06:42", "end_timestamp": "00:07:22", "start_second": 402, "end_second": 442, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=402s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "subconscious so as you observe what's within your consciousness with your awareness not what distracts you that what your attention goes on it reveals to you what's within yourself and what is revealed to you can be worked on if it's related to
your vision then you can encourage that if it's not related to your vision if it's an emotion that creates turmoil within which reflects outwards and materialized into form as turmoil with people environment circumstance then you can change that within by adjusting your subconscious", "start_timestamp": "00:07:22", "end_timestamp": "00:07:57", "start_second": 442, "end_second": 477, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=442s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "mind but the bottom line is this is that as a person continuously does the work on themselves the environment the people and circumstance the external world becomes more harmonious number three faithful assumption is what levels up a person's thought beyond limited thinking now what Neville is saying is remain faithful to that assumption even if it's denied by your five senses five senses the sixth sense is your vision it is your connection to the superconscious it is your conversation with infinite intelligence which we're going to talk", "start_timestamp": "00:07:57", "end_timestamp": "00:08:35", "start_second": 477, "end_second": 515, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=477s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "about in a moment the five senses are earth based senses and there are empowering thoughts and feelings via those senses and there are some disempowering ones that are belonging on this earth based experience the disempowering ones are fear doubt and indecision because for many thousands of years we had a certain level of consciousness and it wasn't high enough to help us realize that abundance is the natural order of things when we
are in harmony with the lovers within which I call the connection between the conscious the", "start_timestamp": "00:08:35", "end_timestamp": "00:09:10", "start_second": 515, "end_second": 550, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=515s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "subconscious and the super conscious harmony within but we're talking about here in this video is becoming harmonized between the conscious the subconscious and superconscious and even if you don't believe in the super conscious even if you don't believe that it exists then we can at least believe because it'll do wonders for you and then in the process you'll uncover this super conscious and you don't really have to be living a harmonious life it's the relationship between the conscious and the subconscious the conscious being", "start_timestamp": "00:09:10", "end_timestamp": "00:09:40", "start_second": 550, "end_second": 580, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=550s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "the Guardian the director the decision-maker on where you decide to go and where your attention goes on and guarding what goes into your subconscious and the subconscious being the creative force that expresses outwards to materialize into form that what gets inputted into the subconscious will be impressed upon the subconscious and the subconscious will bring it forth so by remaining faithful to that assumption you are working with the imagination which is the sixth sense so you are transcending the five senses you're", "start_timestamp": "00:09:40", "end_timestamp": "00:10:12", "start_second": 580, "end_second": 612, "url": 
"https://www.youtube.com/watch?v=6cGNcgzWZT8&t=580s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "saying this is what I want impressed on the subconscious mind that what is in my imagination and the subconscious mind will bring it forth whether it's from the imagination or whether it's from the five senses now what is in the imagination which comes from your vision is a higher level thinking a lot of times you'll find that it transcends the thinking beyond what you have learned from the five senses and as a result of honoring that imagination that vision that assumption of the wish fulfilled in the imagination your thinking will start", "start_timestamp": "00:10:12", "end_timestamp": "00:10:51", "start_second": 612, "end_second": 651, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=612s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "to elevate what you'll notice is that you will think different thoughts that are more empowered or more empowering to yourself your self-esteem your self-confidence will go up towards others people environment and circumstance and these thoughts that you have will become an assumption that will harden into fact because you will honor the thoughts the thoughts will be projected outwards and materializing to form and you will see it and you will assume it to be fact at a subconscious level and you will see it with the repetition", "start_timestamp": "00:10:51", "end_timestamp": "00:11:26", "start_second": 651, "end_second": 686, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=651s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": 
"https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "rematerializing over and over again into form he says so if I assume that I am I don't have to I don't need any evidence to support it I assume that I am and what well I name it and having given it a name giving it form given it definition remaining in it I resurrect and if it takes a thousand men to aid the birth of the state a thousand men will play their parts and I don't have to go out and look for them what is he talking about see there is one imagination really one mind and we are individual expressions of that mind the", "start_timestamp": "00:11:26", "end_timestamp": "00:12:10", "start_second": 686, "end_second": 730, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=686s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "super conscious mind is the universal mind and you as an individual are made up of a conscious and a subconscious mind when you're dealing with something like faith and honoring that imagination living in the imagination you are working by sending a faith-based message over to the super conscious mind and the super conscious mind the universal mind is connected to the subconscious mind of you and everybody else and that information goes over to them via the subconscious mind and they will have hunches and inspirations to be in", "start_timestamp": "00:12:10", "end_timestamp": "00:12:46", "start_second": 730, "end_second": 766, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=730s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "connection with you now this is something that I experience on a regular basis a lot of times I get emails 
from individuals that say I felt I had to reach out to you and you have the answer to this question that I'm looking for and they asked me the question and it happened to be the very thing that I figured out last week and we are all connected via this invisible link the question is do we believe it's possible do we honor the connection how do you establish the connection you establish the connection through higher vibration thoughts", "start_timestamp": "00:12:46", "end_timestamp": "00:13:20", "start_second": 766, "end_second": 800, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=766s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "leveling up your thoughts thoughts that are in the spirit of harmony what is the spirit of harmony well looking look at the spirit of harmony from multiple perspectives a good relationship between the conscious the subconscious and the superconscious number one which I call the lovers within and number two a relationship and realization that your vision doesn't take away your true vision doesn't take away from the world from divine from evolution it contributes to it win for you win for those that you deal with and win for", "start_timestamp": "00:13:20", "end_timestamp": "00:13:56", "start_second": 800, "end_second": 836, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=800s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "divine or evolution that's the true vision now this might not have been happening for thousands of years but our consciousness is going up and this is going to continue to increase and we will see more and more living in the spirit of harmony look around and you'll see that there's far more people 
living in the spirit of harmony when they embrace this philosophy a lot of my students including myself have realized that by releasing that what does not serve me that was within my subconscious mind which I've taken in", "start_timestamp": "00:13:56", "end_timestamp": "00:14:27", "start_second": 836, "end_second": 867, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=836s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "beyond five senses and given meaning to that was disempowering by releasing that programming I allow myself to go into the superconscious and get my vision and express accordingly why because fears start to go away fears break the connection between the conscious the subconscious and superconscious so number four we are individuals made up of conscious and subconscious mind connected to the super conscious mind otherwise known as the universal mind and when you work in harmony in the imagination by encouraging positive uplifting nurturing", "start_timestamp": "00:14:27", "end_timestamp": "00:15:07", "start_second": 867, "end_second": 907, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=867s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "spirit of harmony thoughts that information will go up to the super conscious and the super conscious will work with it that's how you work with the universal mind that's how you communicate with the universal mind the universal mind will not bring scarcity based thinking into the subconscious mind of others it will not be allowed to go forth that kind of information is from the five sensory-based input here on earth it does not exist in that level of vibration that level of
consciousness that is the superconscious", "start_timestamp": "00:15:07", "end_timestamp": "00:15:40", "start_second": 907, "end_second": 940, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=907s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "which is the creative source of everything spirit of harmony brings forth number five those that are in the spirit of harmony now the beautiful thing is that when you work with this when you cleanse your subconscious you'll start to think more spirit of harmony based thoughts and you will notice that you'll attract the people that are also in spirit of harmony and it'll keep increasing it's almost like when they show up into your life you could say I was expecting you and they will say to you well I was expecting to", "start_timestamp": "00:15:40", "end_timestamp": "00:16:12", "start_second": 940, "end_second": 972, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=940s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "deal with you and here's what I would like to do and you would say to them here is what I like to do and you will find it to be in harmony in harmony in the spirit of harmony it is what is known as a true mastermind true mastermind see in my opinion and working with this through realization of experiencing this a mastermind is when two minds come together two or more come together and they create something that is greater than they could have done by themselves so they're working on a creative solution to a problem and as", "start_timestamp": "00:16:12", "end_timestamp": "00:16:44", "start_second": 972, "end_second": 1004, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=972s", "title": 
"18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "a result of the two minds working in sync they come up with an idea a concept a way of solving the problem a way of going about doing things that is greater then they are actually combining together to be in the spirit of harmony and as a result of that infinite intelligence shows up and creates the third mind so when you cleanse your subconscious mind you'll notice that you'll be in more of a greater degree of spirit of harmony within yourself between the conscious the subconscious and superconscious and you will attract", "start_timestamp": "00:16:44", "end_timestamp": "00:17:16", "start_second": 1004, "end_second": 1036, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=1004s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "those that are also like that because you don't attract what you want you attract who you are and part of this journey of cleansing the subconscious mind is becoming who we are being one with the vision the vision being the true identity number six your brain knows your imagination as reality see this is very interesting if you with consistency and persistency from that place imagine believe have faith in that what you hold in your imagination your body will start to act different your thoughts will be different your posture will be different the way", "start_timestamp": "00:17:16", "end_timestamp": "00:17:59", "start_second": 1036, "end_second": 1079, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=1036s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": 
"6cGNcgzWZT8", "text": "you communicate will be different I know because I've been working with this a lot of people ask me how did I improve my communication skills how can I get on a microphone and just keep expressing like this so precisely well it's because harmoniously within I've developed a really good relationship between my conscious mind my subconscious mind in my super conscious mind I am not overly in my head yet I am consciously aware my subconscious mind is a repository of a lot of experience that I've gathered in my life", "start_timestamp": "00:17:59", "end_timestamp": "00:18:29", "start_second": 1079, "end_second": 1109, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=1079s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "in business entrepreneurship on these topics which expresses through me and as that connection and facilitation happens the superconscious gets involved and projects outwards now what happens is my brain is working there's no doubt about it it is involved in the equation and all of these behaviors this way that I'm communicating and everything that I am is a net result of what I imagined myself to be and for many years I would imagine myself being this way and that applies for everything else and as a result of that by behaviors my actions", "start_timestamp": "00:18:29", "end_timestamp": "00:19:11", "start_second": 1109, "end_second": 1151, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=1109s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "my words how I communicate expresses accordingly the training happens in the mind and the visualization now this is not to say that you can't do the hands-on training like 
practice public speaking but you can absolutely work in your imagination to become a better communicator your imagination can be made so vivid that you can't tell the difference between a real act and an imaginary act and if you've done it right you'll notice your behaviors have changed he says so if you precede your visit by an imaginal act they will see you as you", "start_timestamp": "00:19:11", "end_timestamp": "00:19:48", "start_second": 1151, "end_second": 1188, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=1151s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "see yourself if you walk in knowing that you are no good they will see you exactly that way but if you walk in the assumption that things are as you desire them to be they are going to see you that way and this is life so while in the presence of another practice putting no label on them to allow infinite intelligence to express out from you and materialize them into form in the spirit of harmony see what happens is we have our imaginal act we believe ourselves to be a certain way and that's either a conscious imaginal act or it's a", "start_timestamp": "00:19:48", "end_timestamp": "00:20:29", "start_second": 1188, "end_second": 1229, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=1188s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "subconscious from past programming but regardless we have that in our imagination and what is in our imagination impresses on the subconscious mind and projects outwards to materialize into form of how we interpret people environment and circumstance see all people are the same we just believed them to be different now that might sound like a bold 
statement but practice it imagine in your mind how you want to be received imagine in your mind how you want to receive others and when you show up in the presence of others release the", "start_timestamp": "00:20:29", "end_timestamp": "00:21:06", "start_second": 1229, "end_second": 1266, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=1229s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "conscious judging release the categorising allow the subconscious to express that what was impressed on the subconscious and allow the superconscious to also do its thing and you will start to notice that they will be that if you imagine them to be a certain way a positive way and you show up and they're not that way there is programming that's in the subconscious mind that's still projecting outwards to materialize them into form why because all is actually one mind if you believe them to be their greatest self that what is actually", "start_timestamp": "00:21:06", "end_timestamp": "00:21:43", "start_second": 1266, "end_second": 1303, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=1266s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "their true vision then that message will go into the super conscious and the super conscious will communicate to their subconscious and they will start behaving that way around you their behaviors will be different around you what you might even notice is they behave totally different to you versus others they'll be more harmonious around you they'll be more pleasant around you they'll be more gracious around you now I'll speak from experience on this because everybody that I'm around with now presents themself in this
higher", "start_timestamp": "00:21:43", "end_timestamp": "00:22:14", "start_second": 1303, "end_second": 1334, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=1303s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "level version of themselves whereas in a past time in my life I would walk around with scarcity thinking and negative programming in my subconscious mind and the world would seem really dark it would seem really harsh and people would respond to me in kind of this dark energy and I would think it was them failing to realize that I was rematerializing them or projecting them outwards into form and attracting those that want to play that theater with me but after I realized that I'm the cause and I started working on this", "start_timestamp": "00:22:14", "end_timestamp": "00:22:48", "start_second": 1334, "end_second": 1368, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=1334s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "by working with my subconscious mind by doing my imaginal acts by cleansing my subconscious mind using subconscious audios and other modalities what I then realized is that by assuming them to be a certain way they show up that way even if it's just for that moment they are that way and I've watched people transform in front of me people who I haven't seen in years who were a certain way but now they're this different way number eight mood materializes into people environment and circumstance to reveal our mood within how we believe reality", "start_timestamp": "00:22:48", "end_timestamp": "00:23:30", "start_second": 1368, "end_second": 1410, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=1368s", "title": 
"18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "to be will be reflected in the external world so that's one of the deep secrets of the subconscious mind a deep understanding and grasp of this gives you an enormous amount of power now we're taking this down into a nuanced level of emotion mood how you feel they will feel if you're angry they will be angry in front of you I know it seems kind of far out like how could it be that way but just try and remember actually remember a time in your life when you had an amazing uplifting powerful source based mood", "start_timestamp": "00:23:30", "end_timestamp": "00:24:06", "start_second": 1410, "end_second": 1446, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=1410s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "abundance and you were doing your thing whatever it is you were out and about notice how harmonious the world was to that mood and I gave you my own personal example a time in my life where I felt a lot of darkness within negativity within I experienced it without in the external world now I've seen this happen many times in my life a really high peak positive state reflected outwards surrounded as far as the senses can see by people environment and circumstances that are harmonious to that and I've seen it in darker times in my life again", "start_timestamp": "00:24:06", "end_timestamp": "00:24:39", "start_second": 1446, "end_second": 1479, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=1446s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8",
"text": "surrounded by people environment and circumstance to reflect it accordingly I was the source within I am the source within my mood materializes into people environment circumstance to reveal what is going on within me in a way they're helping me they're helping you telling you what's inside of you when you adjust that within yourself via the subconscious it starts to change because see we're not trying to consciously will and force this the subconscious mind projects outwards and creates into form that's it and it does so", "start_timestamp": "00:24:39", "end_timestamp": "00:25:13", "start_second": 1479, "end_second": 1513, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=1479s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "automatically we don't even have to think about it consciously it's always happening all the time and it can be programmed in our imagination and I like a dual process working with the imagination the imaginal act and journaling taking notes of what is revealed to me in the external world as I go about reality doing whatever I do every day for me it's the world of entrepreneurship it's one of the best worlds to reveal about myself because it's a world filled with higher level of challenges that I choose to rise up to", "start_timestamp": "00:25:13", "end_timestamp": "00:25:45", "start_second": 1513, "end_second": 1545, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=1513s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "to reveal more about myself and as I rise up to those challenges I create more success more returns and this has been the process I've been following for years and I will continue to
do so and each time I noticed something that is inharmonious revealed to me in the external world I will write that down I will take a note of it and I will address it in my imagination or my subconscious mind audios affirmations now unlocking the power of affirmations is very important I got a specific process and I created a video about that", "start_timestamp": "00:25:45", "end_timestamp": "00:26:18", "start_second": 1545, "end_second": 1578, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=1545s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "and I'll put a link in the description thoughts become things and affirmations in the external world which may be assumed as fact the origin is still thought and thus that assumed fact no matter how valid it may appear can and will be changed by thought now by the way these quotes that I'm reading here the actual 18 elements are pulled from my Instagram and every day I post to my Instagram so I recommend you go over there and follow me and consume this information because these are thoughts that I have as a result of working with", "start_timestamp": "00:26:18", "end_timestamp": "00:26:51", "start_second": 1578, "end_second": 1611, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=1578s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "my own subconscious mind and working with my clients and helping them in areas of their life whether it be entrepreneurship and business and personal development working with a lot of the identity elements in the subconscious mind so I took them and infused them in here they were my inspiration so thoughts become things and affirmations in the external world which may
be assumed as fact what is being revealed to you in the external world because it's there because you can pick it up within your five senses you", "start_timestamp": "00:26:51", "end_timestamp": "00:27:21", "start_second": 1611, "end_second": 1641, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=1611s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "can taste it you can smell it you can feel it you can see it you can hear it it's easily assumed as fact because you see it you experience it via the five senses and what is experienced via the five senses can be assumed as fact but that's a choice when you realize that that was a projection from the subconscious materialized in the external world and you begin to question that and say wait a second I can now choose whether to assume that to be fact if I change the cause within what happens here's a circumstance here's a person", "start_timestamp": "00:27:21", "end_timestamp": "00:27:54", "start_second": 1641, "end_second": 1674, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=1641s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "here's an environment here's how I feel about it here's the chain of events leading up to it here's what happened what if I change the cause within about this what if I change the way I look at it what happens well when you change the way you look at things the things you look at change try it take a current circumstance take your current problem that you have and shift your perspective around see one of the things that I've learned on this journey of entrepreneurship is I have learned to be able to see opportunity where others can", "start_timestamp": "00:27:54",
"end_timestamp": "00:28:29", "start_second": 1674, "end_second": 1709, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=1674s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "only see problems and roadblocks and the impossible I've trained myself to do it now I've been on this journey for 10 years 10 years as a full-time entrepreneur the repetition of being exposed to so many problems in my businesses in my clients businesses and working as a consultant and coach working with many entrepreneurial organizations and companies of all different levels I've been exposed to so many problems and issues and solutions to that that my subconscious has found so many ways to solve problems to the point of", "start_timestamp": "00:28:29", "end_timestamp": "00:29:01", "start_second": 1709, "end_second": 1741, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=1709s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "repetition where someone who is starting this journey brings to me their scope of problems and I can easily see solutions easily see the solutions what does that mean it means the solutions exist now these are valid viable solutions these are concrete solutions not things that I'm just making up but the difference between that entrepreneur and me is a perspective they don't see it as an opportunity they see it as a problem they see it as an impossibility the moment you see it as an impossibility it becomes so what you believe as far as", "start_timestamp": "00:29:01", "end_timestamp": "00:29:37", "start_second": 1741, "end_second": 1777, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=1741s", "title": "18 Secrets That Lie Hidden In Your
Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "reality goes will be projected outside of you and materialized into form so one of the things we have to always remember is that the word impossible can limit us so Napoleon Hill once said in Think and Grow Rich one of my favorite quotes he said a great many years ago I purchased a fine dictionary the first thing I did with it was to turn to the word impossible and neatly clip it out of the book it would not be an unwise thing for you to do now why would he suggest something like this because the moment we have this impossibility idea show up", "start_timestamp": "00:29:37", "end_timestamp": "00:30:09", "start_second": 1777, "end_second": 1809, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=1777s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "it is reflected in the external world based on what we're thinking about dealing with a person an environment or circumstance it will be reflected in the external world as fact of impossibility but don't just take my word for it think about your own life you have experienced this and that is a beautiful thing I'm not talking about things that you don't know about you've already experienced these things I'm talking about reflect upon a time in your life where others saw a situation as an impossibility and you saw it as a", "start_timestamp": "00:30:09", "end_timestamp": "00:30:41", "start_second": 1809, "end_second": 1841, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=1809s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "possibility
and you were the one who figured it out you learned how to create that where others could not see it it was because you first believed in your mind that it was possible and that projected out and reflected and materialized as the possibility that door showed up and you held the key with the thought the thought is the key that opens up the door so Napoleon Hill talks about the concept of the sixth sense and I refer to this as the superconscious the infinite intelligence the universal mind now some might agree", "start_timestamp": "00:30:41", "end_timestamp": "00:31:16", "start_second": 1841, "end_second": 1876, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=1841s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "with this and some might not agree with it I have a lot of reference experiences personally that I work with and I've talked to many others who work with this and I agree with what he says here he says the sixth sense is the portion of the subconscious mind which has been referred to as the creative imagination it has also been referred to as the receiving set through which ideas plans and thoughts flash into the mind the flashes are sometimes called hunches or inspirations the sixth sense defies description it cannot be described to a", "start_timestamp": "00:31:16", "end_timestamp": "00:31:46", "start_second": 1876, "end_second": 1906, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=1876s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "person who has not mastered the other principles in the philosophy because such a person has no knowledge and no experience with which the sixth sense may be compared understanding of the sixth
sense comes only by meditating through mind development from within now this is why I've been putting out a lot of videos about developing your mind in the Kybalion we say all is mind the all is mind and the universe is mental you have the solution to everything via your mind within the subconscious within the superconscious within infinite", "start_timestamp": "00:31:46", "end_timestamp": "00:32:19", "start_second": 1906, "end_second": 1939, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=1906s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "intelligence we are the sixth sense we are the connection to the subconscious mind of other individuals via the mastermind again the mastermind individuals coming together in the spirit of harmony I'm not talking about groupthink where people come together and try to be right rather than focusing on what is right they focus on who is right that would be groupthink that would be egoic we're looking to believe in the possibility of the solution and saying as a collective coming together in a", "start_timestamp": "00:32:19", "end_timestamp": "00:32:50", "start_second": 1939, "end_second": 1970, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=1939s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "mastermind we will figure it out together that thought goes into the minds of all individuals impresses itself in the subconscious mind goes to the superconscious mind and is brought forth as the idea the hunch the inspiration to solve the problem or create the solution whatever it is now he says the sixth sense defies description it cannot be described to a person who has
not mastered the other principles of the philosophy and I'll speak from experience because I've been reading Think and Grow Rich since 2004", "start_timestamp": "00:32:50", "end_timestamp": "00:33:19", "start_second": 1970, "end_second": 1999, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=1970s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "and I've used it many times to create multiple definite chief aims and I've achieved every single one of them and I have one right now and I will achieve it and each time I achieve my definite chief aim I realize the power of the sixth sense how so I'll tell you exactly how because during the time in between generating the definite chief aim and the manifestation of the definite chief aim I had no idea how it was gonna be brought forth but I will tell you this ideas hunches and inspiration showed up that redefined all", "start_timestamp": "00:33:19", "end_timestamp": "00:33:53", "start_second": 1999, "end_second": 2033, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=1999s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "the things that I've ever learned logic and reason even and I followed those hunches and inspirations in some way somehow it was brought forth and Steve Jobs said you cannot connect the dots looking forward you can only connect them looking backwards he even talks about trusting your intuition that's why he really recommends the book by Yogananda and so do I the Autobiography of a Yogi you read a book like that and you will automatically realize the power that you have within you and it is through the", "start_timestamp": "00:33:53", "end_timestamp": "00:34:22",
"start_second": 2033, "end_second": 2062, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=2033s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "repetition of achieving your definite chief aims that you build a deeper connection with the sixth sense perhaps that's why few people when you look at the grand scheme of things have a deep connection with the sixth sense maybe they have not honored the voice within and created a result of the definite chief aim which is a fragment of their true identity see I believe that the definite chief aim is part of a grand vision I believe that the definite chief aim is part of your true vision by nurturing the definite chief aim that", "start_timestamp": "00:34:22", "end_timestamp": "00:34:57", "start_second": 2062, "end_second": 2097, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=2062s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "you have right now and bringing it forth you have a greater sense of understanding and awareness of your overall mission and purpose in life one of the things that I found to be true is that by following the definite chief aim I realized that all is in harmony to contribute to the definite chief aim and that each element of the definite chief aim or each definite chief aim when completed builds my connection with the sixth sense and then reveals to me my next definite chief aim and each definite chief aim it's like a little", "start_timestamp": "00:34:57", "end_timestamp": "00:35:35", "start_second": 2097, "end_second": 2135, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=2097s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon
Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "pixelated vision and each pixel shows what the definite chief aim accomplished to reveal to me what my purpose is here on this planet and that's why I made that video the last video I made when I said your imagination is your vision it is your identity because I've realized that every time I achieved a definite chief aim more and more of my mission my life purpose is revealed to me now I believe life has an interesting pattern when you accomplish something infinite intelligence the sixth sense shows up with a", "start_timestamp": "00:35:35", "end_timestamp": "00:36:10", "start_second": 2135, "end_second": 2170, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=2135s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "greater responsibility for you and this responsibility is to bring it forth heaven on earth the Divine is always looking to manifest on earth that's why it's said as above so below and it is through this process that it's manifested you are the one that brings it forth it is through you and I and everyone here that the vision of where we are going divine evolution is being brought forth if you look back humans have evolved and it is a net result of paying attention and honoring the vision each individual that has contributed had a definite", "start_timestamp": "00:36:10", "end_timestamp": "00:36:46", "start_second": 2170, "end_second": 2206, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=2170s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "chief aim and by constantly identifying and honoring and seeing
the definite chief aim all the way till completion you uncover more of your vision and you start to understand the sixth sense even more so number 10 all is part of the super conscious mind impressing the subconscious mind is done through the imagination so when you get your vision from the super conscious mind your definite chief aim you are to follow it all the way till completion and you can work with bringing forth your definite chief aim via your", "start_timestamp": "00:36:46", "end_timestamp": "00:37:18", "start_second": 2206, "end_second": 2238, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=2206s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "imagination it is via your imagination that the conscious the subconscious and the super conscious work in harmony the love within the subconscious mind is impressed and brought forth the conscious mind continues to facilitate guard and nurture the subconscious mind and the ways that define or defy possibility are figured out by the super conscious number 11 optimal behaviors manifest automatically because we continuously support and encourage our imagination we also realize that our five senses are there to support our", "start_timestamp": "00:37:18", "end_timestamp": "00:37:55", "start_second": 2238, "end_second": 2275, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=2238s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "sixth sense the imagination so for many years we could have been living in a world where we were run by our five senses and you could live that way if you want but when you work with this philosophy when you work through this process of identifying your vision
and making this a continuum it's a continuous journey you realize that the five senses are here to support the vision of the sixth sense information pours through you like a conduit via the sixth sense through your imagination into the subconscious mind impresses the", "start_timestamp": "00:37:55", "end_timestamp": "00:38:28", "start_second": 2275, "end_second": 2308, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=2275s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "subconscious mind the subconscious mind projects outwards and materializes into form that information is picked up via the five senses and interpreted by the conscious mind and given meaning to tune the subconscious mind like an instrument to be in alignment with the vision and then send specific instructions back to the superconscious to repeat the process again and again over and over again that's the relationship of the lovers within but the primary living is through the imagination the sixth sense now that is", "start_timestamp": "00:38:28", "end_timestamp": "00:39:08", "start_second": 2308, "end_second": 2348, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=2308s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "a person that is projecting outwards releasing that is a person that is said to bring the goods to the table who arrives with a full cup they're not looking to take because all is within they are looking to express and share and contribute and that's how a mastermind is created and that's how you get the abundance consciousness and you can feel when you're around a person like that and my goal in this video is to help you unlock that because we all
have that potentiality within us and it continuously gets", "start_timestamp": "00:39:08", "end_timestamp": "00:39:43", "start_second": 2348, "end_second": 2383, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=2348s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "released more and more by doing the work by cleansing the subconscious mind from that which breaks the connection between the conscious and the subconscious and superconscious and when the superconscious is expressed through the subconscious in the external world that's where you get beautiful works of art that's where you get beautiful information insights that's where you get breakthroughs in technology breakthroughs in medical science breakthroughs in everything because you're working with", "start_timestamp": "00:39:43", "end_timestamp": "00:40:15", "start_second": 2383, "end_second": 2415, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=2383s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "the spirit of harmony because you're working with imagination because you're releasing fears doubts and indecision around what is your purpose and your vision see Napoleon Hill said this and I'm a huge fan of this in that chapter of outwitting the six ghosts of fear he said this and I'll pull it up here he said indecision is the seedling of fear remember this as you read indecision crystallizes into doubt and the two blend and become fear see in the last video when I talked about mental chemistry I recommend you watch that video and watch", "start_timestamp": "00:40:15", "end_timestamp": "00:40:49", "start_second": 2415, "end_second": 2449, "url":
"https://www.youtube.com/watch?v=6cGNcgzWZT8&t=2415s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "it again and again and again because what I was really getting at over there is you don't want to wait till you feel fear you want to catch it at an indecision and doubt level before it crystallizes into fear you have the awareness and the power to capture it at that point because as soon as fear hits you that's when you risk drifting that's when you break the connection between the lovers within that's when you get in your head about it that's when you're gonna act from a scarcity perspective that is when you're not working in harmony to release", "start_timestamp": "00:40:49", "end_timestamp": "00:41:19", "start_second": 2449, "end_second": 2479, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=2449s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "what's on your subconscious mind to help you solve the problem so catch it at an indecision and doubt level the moment you feel indecision you trust that the answers are within you you release it and you ask the subconscious mind you say subconscious mind you are the source of creation of all that exists because you are connected with the superconscious mind you are also a repository of the experiences I have had in my life and I realize consciously that you have far more experiences than I could consciously think of at this", "start_timestamp": "00:41:19", "end_timestamp": "00:41:55", "start_second": 2479, "end_second": 2515, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=2479s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail":
"https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "moment you have the answers within you present to me the hunch the inspiration and I will act upon it with calm faith and it will be revealed to you to the extent that you have the connection and the relationship between the conscious and subconscious and super conscious it will be revealed to you right then and there and the key is take action on it speed of implementation if you get that hunch you have to act upon it right away because if you don't act upon it right away more doubt will set in more indecision will set in when you act upon", "start_timestamp": "00:41:55", "end_timestamp": "00:42:32", "start_second": 2515, "end_second": 2552, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=2515s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "it you move everything forward and it might not necessarily directly produce the result but in some way somehow it will be brought forth because you're moving forward the next idea will show up the next thing and when you connect the dots looking backwards then it will make sense to you there are infinite ways number 12 infinite ways of bringing forth your vision found by listening to your own inner voice the inner voice speaks from the super conscious mind which is the source of all that exists and your subconscious and even if you don't
the subconscious is a storehouse of so many reference experiences that you have had in your life and there's tons of them you have to believe and have faith and confidence in yourself that's right recommend watching the video I did on self confidence formula by Napoleon Hill when a person has self-confidence and faith in themselves they tap into their subconscious and their subconscious expresses you'll notice that people who have a high", "start_timestamp": "00:43:09", "end_timestamp": "00:43:37", "start_second": 2589, "end_second": 2617, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=2589s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "degree of real core confidence are more likely in flow more often they are not in their head they're not stuck an inharmonious in the relationship between the conscious and the subconscious you have to trust that exists within you and you have an inner voice and it can speak to you for me I build a connection with my inner voice to the point my inner voice speaks to conversations but for you it might be a hunch and inspiration but the more you honor it the clear that inner voice becomes the more pronounced it becomes and again", "start_timestamp": "00:43:37", "end_timestamp": "00:44:12", "start_second": 2617, "end_second": 2652, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=2617s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "that's why Steve Jobs said don't let the noise of other people's opinions drown out your own inner voice because all is one mind the inner voice is connected to the inner voice of everybody else The Sixth Sense probably is the medium of contact between the finite man or finite mind 
of man and infinite intelligence and for this reason it is a mixture of both the mental and the spiritual it is believed to be the point in which the mind of man connects to the universal mind all well-being is materialized through", "start_timestamp": "00:44:12", "end_timestamp": "00:44:48", "start_second": 2652, "end_second": 2688, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=2652s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "alignment with consciousness rising and experienced through our five senses guided by our sixth now this will make sense to you when you have raised your consciousness and I recommend studying the work of David Hawkins Letting Go and Power Versus Force and the levels of consciousness when you work on releasing the lower consciousness level based thinking of fear anger resentment hatred Envy you move into a place of bliss joy unconditional love acceptance real reason will understand you seek to understand and then what happens is that", "start_timestamp": "00:44:48", "end_timestamp": "00:45:23", "start_second": 2688, "end_second": 2723, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=2688s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "those experiences those thoughts project outwards via your subconscious mind and materialize into form and as we said guided by the sixth sense the voice with it lifting up your thoughts to a higher degree no matter what you find as far as the statement it can always be elevated to a higher level number 14 the most powerful television a great quote that I put together on my Instagram that came to me as a result of a conversation I had with my superconscious the
most powerful television is our own imagination in which you create worlds", "start_timestamp": "00:45:23", "end_timestamp": "00:46:01", "start_second": 2723, "end_second": 2761, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=2723s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "where you are the hero and all supporting you to bring forth your well-being into this planet with such vivid detail that it rewires your neurology in your brain to automatically generate actions to bring it forth you do not need anyone else to tell you what your vision has to be a lot of us are programmed by the information that we're consuming and the question we have to ask consciously and it's where the conscious mind has to step in and say is this information programming our subconscious mind for abundance for wellbeing for joy", "start_timestamp": "00:46:01", "end_timestamp": "00:46:32", "start_second": 2761, "end_second": 2792, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=2761s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "for peace or is it not and we have a choice right then and there and we always have to remember this the imagination in our mind can be the greatest source of entertainment joy happiness bliss inspiration in your own mind and you control it and you have access to it and you can cultivate it and you can build a relationship with it and just like how you consume information that information goes into your subconscious mind and projects outwards to materialize into form so will what will be in your imagination it", "start_timestamp": "00:46:32", "end_timestamp": "00:47:08", "start_second": 2792, "end_second": 2828, "url": 
"https://www.youtube.com/watch?v=6cGNcgzWZT8&t=2792s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "will do the same thing the question is do you want to have control of that what programs your subconscious mind so Napoleon Hill says before you can put any portion of this philosophy into successful use you must be prepared to receive it the preparation is not difficult it begins with study analysis and understanding of the three enemies which you shall have to clear out these are indecision doubt and fear the sixth sense will never function while these three negatives or any of them remain in your mind the members of this unholy", "start_timestamp": "00:47:08", "end_timestamp": "00:47:42", "start_second": 2828, "end_second": 2862, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=2828s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "trio are closely related where one is found the other two are close at hand so this is one of the biggest insights that I'm going to share with you what I've found with working with the subconscious mind you can categorize all subconscious mind reprogramming to the removal of indecision doubt and fear which are $0.05 based input data and meaning gave not the data but the meaning so it's you've heard this said before it's not what happens due to how you respond to it it's not what happens to you it's the meaning you give to it and that's why I", "start_timestamp": "00:47:42", "end_timestamp": "00:48:19", "start_second": 2862, "end_second": 2899, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=2862s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": 
"https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "recommend man's search for meaning and watch the video I did last on mental chemistry which is exactly about this transmutation now when fear doubt and indecision sets in your subconscious mind they will begin to project outwards to materializing to form to reflect fear doubt and indecision and fear when it's projected outwards in the external world cripples us because it shows up and it's scary because it's revealing what's within which is scary and it can cause us to remain there now there's always a way out we'll always find a way out but", "start_timestamp": "00:48:19", "end_timestamp": "00:48:58", "start_second": 2899, "end_second": 2938, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=2899s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "the key is this if you want to continue to unravel that what is within your vision as you do the work to remove it at an early stage fears which is a net result of indecision and doubt number 15 the discipline of taking rapid action on an idea that you're curious about will not only give you empirical data but train your ability to trust yourself and act in spite of doubt fear and hesitation while cultivating wisdom and decisiveness ok discipline of taking rapid action will cut through the dis indecision and doubt right then and", "start_timestamp": "00:48:58", "end_timestamp": "00:49:33", "start_second": 2938, "end_second": 2973, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=2938s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "there it's a habit see overthinking is going to lead to fear that's it when you 
start overthinking you have initially embraced a slight seedling of indecision and doubt that's why right then and there we could ask the subconscious on what we need to do and take the action right then and there and if you look at anybody that has created success one of my favorite programs that I was ever part of is called Get Altitude by Eben Pagan and he said in the beginning he said one of the commonalities that he has found he's a very extensive", "start_timestamp": "00:49:33", "end_timestamp": "00:50:09", "start_second": 2973, "end_second": 3009, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=2973s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "researcher and a lot of his information I hold true as foundations and fundamentals for building success especially in entrepreneurship he said the one commonality the one thing that successful entrepreneurs have in common is speed of implementation and I did that discussion on opportunity by Eben Pagan I recommend you watch it he said get version one out in the marketplace as fast as possible and then again get it up to version three as fast as possible out into the marketplace take the action", "start_timestamp": "00:50:09", "end_timestamp": "00:50:42", "start_second": 3009, "end_second": 3042, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=3009s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "now this is a practice it's a way of living when you get an idea to act upon it right away you are cultivating decisiveness and you're getting data you cannot get data from the external world where everything is optimization you cannot get data from the external world unless there's
some action taken in the external world otherwise you get stuck in your head in theory the beautiful thing is the more you do this with repetition the more you honor your inner voice because you'll start to see that some way somehow it works out number 16 the", "start_timestamp": "00:50:42", "end_timestamp": "00:51:15", "start_second": 3042, "end_second": 3075, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=3042s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "sixth sense is a conversation with infinite intelligence that shows up when you are in a vibrational match in your imagination which is clear of thoughts containing fears doubts and indecision so by taking rapid action by honoring your inner voice by trusting it and not being afraid of the six basic fears as a result of thinking so what happens fear of poverty fear of criticism fear of ill health fear of loss of love of someone fear of old age fear of death holds somebody back from taking action that's because in the earlier stages they", "start_timestamp": "00:51:15", "end_timestamp": "00:51:48", "start_second": 3075, "end_second": 3108, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=3075s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "encouraged indecision and doubt based thoughts that became fear in those areas they encouraged them to the point it became fear and you don't want it to get to that point and if you want to build a connection to the superconscious mind you have to do this work you have to release those fears you have to release the early stages of it then take action because you have to get into the vibrational match of the success which is a really high vibration which
is a confidence based vibration a person that believes in themselves has confidence", "start_timestamp": "00:51:48", "end_timestamp": "00:52:24", "start_second": 3108, "end_second": 3144, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=3108s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "creates success look at anybody that has produced any results that you admire it doesn't necessarily have to be money or finance whatever you will find within them a high degree of confidence and whether they can articulate what it is I'm talking about right here and a lot of them I've talked to have never articulated it before and I've shared this information with them and they say Wow you are able to articulate what I have been feeling and so they might not necessarily be able to explain what we're talking about here", "start_timestamp": "00:52:24", "end_timestamp": "00:52:56", "start_second": 3144, "end_second": 3176, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=3144s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "but they are experiencing this in some shape or form but the key is this they get access to their subconscious and superconscious by releasing the fears doubts and indecision thus that's why I put huge emphasis when it comes to programming the subconscious mind one of the big elements that we want to keep into consideration is noting where the fears doubts and indecision show up and work on releasing them number 17 flow brings forth results in harmony with our vision because the feeling experienced while in", "start_timestamp": "00:52:56", "end_timestamp": "00:53:30", "start_second": 3176, "end_second": 3210, "url":
"https://www.youtube.com/watch?v=6cGNcgzWZT8&t=3176s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "flow is the mental alchemy that is projected outwards to materializes form it is then experienced back through our five senses as validation of the feeling of flow within flow is where we want to be at flow is where challenge meets skill it is where your being and living a purposeful life it is when you're progressively moving forward towards the realization of your definite chief aim it is when you're living your vision creating your vision and it will be brought forth and then you move on to the next one and the next one so watch", "start_timestamp": "00:53:30", "end_timestamp": "00:54:01", "start_second": 3210, "end_second": 3241, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=3210s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "the video I did on mental chemistry watch the discussion I did on flow by me nitric sent behind flow is a very important element to keep into consideration when a person is in flow they're also very light-hearted when a person is not in flow they're stuck in their head they're angry that's because they have given in to fear fear brings more forth those lower levels of the emotional scale the level that you want to get at is the higher levels of the emotional scale okay watch the emotional scale discussions that are probably on", "start_timestamp": "00:54:01", "end_timestamp": "00:54:33", "start_second": 3241, "end_second": 3273, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=3241s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": 
"https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "YouTube on Esther and Jerry Hicks maybe I'll do a discussion on that the emotional guidance scale they even talk about it they said being in those higher vibrations to higher levels of the emotional scale will allow you to connect to infinite intelligence to your voice within the superconscious and that is experience when you're in flow when you experience challenges while you're in flow you're able to overcome it because the idea has come from within we have the subconscious there's a harmonious relationship between the", "start_timestamp": "00:54:33", "end_timestamp": "00:55:02", "start_second": 3273, "end_second": 3302, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=3273s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "conscious and the subconscious and I'll even add to it and say the superconscious as well and number 13 always remember this you are the conduit that streams infinite intelligence we all have access to infinite intelligence it is accessible to all of us and the link is within it is within you just like any conduit energy flows from side one side to another within its casing fully perfect protected from external contaminants to smoothly carry out its meaningful journey in all its purity all the way to its destination you are being", "start_timestamp": "00:55:02", "end_timestamp": "00:55:40", "start_second": 3302, "end_second": 3340, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=3302s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "guided you are protected you are being supported on your vision and you have a choice to honor that by 
taking inventory of what is within your subconscious mind that denies it and releasing that and as you release it you'll find yourself more in flow flow is the energy that goes through the conduit it is where you are on purpose now this doesn't mean you have to be in flow all the time but it does mean this the more you embrace flow the more you honor your flow the more you honor these things that we're talking about the more you'll find that", "start_timestamp": "00:55:40", "end_timestamp": "00:56:25", "start_second": 3340, "end_second": 3385, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=3340s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "6cGNcgzWZT8", "text": "you're living purposefully the more you'll find you'll be able to come up with and solve problems in creative ways never even thought of before because you have a very harmonious relationship between the conscious subconscious and superconscious and the energy flows through you protected you will be guided your sixth sense will warn you you will have a heightened degree of sensitivity for vibes and energy you will learn to trust and honor what you feel and I'll speak from experience I've had this many times I've", "start_timestamp": "00:56:25", "end_timestamp": "00:57:02", "start_second": 3385, "end_second": 3422, "url": "https://www.youtube.com/watch?v=6cGNcgzWZT8&t=3385s", "title": "18 Secrets That Lie Hidden In Your Subconscious Mind (Neville Goddard, Napoleon Hill)", "thumbnail": "https://i.ytimg.com/vi/6cGNcgzWZT8/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "well uh hello and welcome to this uh richard m karp distinguished lecture uh my name is peter bartlett i'm the associate director of the simons institute for the theory of computing uh thanks for joining us we established the richard m karp series to celebrate the role of simons
institute founding director dick karp in establishing the field of theoretical computer science formulating central problems and contributing amazing results in the areas of computational complexity and algorithms the series features visionary leaders in", "start_timestamp": "00:00:00", "end_timestamp": "00:00:36", "start_second": 0, "end_second": 36, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=0s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "tcs and is geared towards a broad scientific audience we're grateful to the many contributors to the richard m karp fund who've made this series possible so i'm delighted to welcome our speaker today lenka zdeborová lenka is a researcher at cnrs working in the institute of theoretical physics at cea uh paris-saclay she has a background in physics and is famous for the application of methods of statistical physics to problems in machine learning and signal processing in inference and optimization lenka is the recipient of the cnrs bronze medal", "start_timestamp": "00:00:36", "end_timestamp": "00:01:11", "start_second": 36, "end_second": 71, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=36s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "in 2014 the philippe meyer prize in theoretical physics in 2016 and the irène joliot-curie prize in 2018.
the talk today is entitled insights on gradient-based algorithms in high-dimensional learning so please join me in welcoming lenka zdeborová thank you peter and i will share my screen so that you see the slides i prefer it and i'm really really honored to be giving this lecture especially given the influence that you know being part of one of the programs at the simons institute four years ago it had on my career and i enjoyed it so", "start_timestamp": "00:01:11", "end_timestamp": "00:01:53", "start_second": 71, "end_second": 113, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=71s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "immensely and it's amazing what the simons institute is doing so first thing i should do is to correct my affiliation so it's only a second seminar i'm giving and a third week i'm spending at my new affiliation that is epfl so not anymore in france but in a neighboring country switzerland and i will be telling you about work that i you know i have recently did a lecture in this simons institute bootcamp for the program of this semester where kind of a lot of the works that seemed like a statistical physics voodoo", "start_timestamp": "00:01:53", "end_timestamp": "00:02:30", "start_second": 113, "end_second": 150, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=113s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "maybe 20 years ago actually have been established rigorously and part of the program is about it and it's pretty and it's very exciting so for this very special lecture i decided to go back to results from physics where most of it is not established rigorously and is waiting for the mathematical inputs and works and that's something that was
going on in the past two years with the list of collaborators that i give here the main among them are the two students highlighted in blue stefano sarao mannelli and", "start_timestamp": "00:02:30", "end_timestamp": "00:03:05", "start_second": 150, "end_second": 185, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=150s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "francesca and stefano is among the panelists so if you have some clarification questions or questions you can he's able to answer them even during the talk without interrupting it so please don't hesitate so this is the list of six papers from the past two years on which this talk is based and the talk will be about gradient descent based algorithms or stochastic gradient descent based algorithms that you know pictorially are the workhorse of machine learning that is really everywhere these days so they are really worth", "start_timestamp": "00:03:05", "end_timestamp": "00:03:38", "start_second": 185, "end_second": 218, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=185s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "understanding and studying in more detail in particular in deep learning we have the empirical observation that local or even global minima with bad generalization error actually do exist there are many kind of works uh going towards showing like something like that empirically one of them that i like quite a bit is this paper by dimitris achlioptas and his collaborators where he starts by interpolating and fitting random labels in the neural network and then he puts back the real labels little by little", "start_timestamp": "00:03:38", "end_timestamp": "00:04:14", "start_second": 218,
"end_second": 254, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=218s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "and he shows that the gradient descent actually doesn't go that far away from the point where it interpolated random labels and it generalizes pretty bad much worse than it would if you just initialize it randomly so that really tells us something notable about how this optimization landscape looks like and we really need to understand how comes that the gradient-based algorithms initialize randomly are able to avoid the bad minimum and so the goal here you know it's it's pretty much clear these days that this", "start_timestamp": "00:04:14", "end_timestamp": "00:04:46", "start_second": 254, "end_second": 286, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=254s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "will this cannot happen just by studying the landscape that it really what matters is you know even the initialization so what matters is the whole trajectory that the algorithm is taking so we want to understand the trajectory and these non-convex high dimensional problems and just two points to make to set the talk you know in practice the number of samples is limited so i don't want to be working in some limit where the number of samples is is unreasonably large and also constants do matter so i don't want to be talking", "start_timestamp": "00:04:46", "end_timestamp": "00:05:18", "start_second": 286, "end_second": 318, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=286s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", 
"text": "working with with some rates without log factors and with arbitrary constants so in order to be able to do something like that to keep in mind finite sample complexity and constants i need to make some simplification somewhere so for the purpose of you know the work that i'm describing in this talk this will be on the side of the data so i will not be assuming any kind of really generic data set i will be working with synthetic models for data for which we can say something so the first such model on which will be", "start_timestamp": "00:05:18", "end_timestamp": "00:05:55", "start_second": 318, "end_second": 355, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=318s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "the first say 20 minutes of the talk will be the spiked matrix tensor model which is the optimization you can think of optimizing the loss function that is written here it has two parts one so so the variable over which you are optimizing is the x that is living on an n-dimensional sphere and n will be large that will be the limit we will be interested in high dimensional and then the way the loss function depends on the x is through the matrix y that is created from some ground through x star plus a load of noise", "start_timestamp": "00:05:55", "end_timestamp": "00:06:32", "start_second": 355, "end_second": 392, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=355s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "so x star x star transpose more precisely and it also depends on this tensor t that has order p and is created by taking an outer product p times of the same vector x star and adding a lot of noise and then the goal of the you know interference problem 
here is to find back the vector x star by minimizing the loss function written over here so why this model so that also kind of sets again what i'm aiming to achieve so this model because it's high dimensional and non-convex that's kind of what makes a study of", "start_timestamp": "00:06:32", "end_timestamp": "00:07:11", "start_second": 392, "end_second": 431, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=392s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "gradient descent non-trivial it's an inference problem meaning that what we are interested in is a correlation with the ground truth signal x star we are not really interested in the optimization problem per se so this is similar to the machine learning with neural networks where we always solve it by optimization but we are really interested in the generalization error in something slightly different than the value of the loss function itself and the third and fourth point is that this model has interesting computational", "start_timestamp": "00:07:11", "end_timestamp": "00:07:44", "start_second": 431, "end_second": 464, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=431s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "properties and the dynamics of the gradient descent is solvable and this is something that i will show you to persuade you of that so the statistical physics must come at some point in and this is where it does so you just rewrite the same model with the variances of the two gaussian noises that i considered just rescaled a little differently the way we usually do in physics and then i take this loss function that was sum of two squares and developed the squares and realized that some terms just don't depend on the
some terms just don't depend on the", "start_timestamp": "00:07:44", "end_timestamp": "00:08:19", "start_second": 464, "end_second": 499, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=464s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "x over which i'm optimizing and some terms depend on it in a trivial way if the x lives on the sphere they're just equal to some constant so the only non-trivial term that matters is the term here i called h of x that if i look back in statistical physics is exactly the hamiltonian of something that is called the spherical mix b spin glass so those in the audience that know about spin glasses have seen this smile because that's the one one of those that is most often studied in the field of statistical physics of disordered", "start_timestamp": "00:08:19", "end_timestamp": "00:08:55", "start_second": 499, "end_second": 535, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=499s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "systems and so what we will be so we will be using that but what we'll be interested in is at one hand the you know when i say gradient-based algorithms i will be speaking at this in this part about mainly two one of them will be the launcher algorithm with the aim of actually estimating the ground through x star in a base optimal way which would be done by writing the posterior measure and computing its marginals and this corresponds to writing the boltzmann measure of the corresponding statistical physics problems and", "start_timestamp": "00:08:55", "end_timestamp": "00:09:33", "start_second": 535, "end_second": 573, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=535s", "title": "Insights on Gradient-Based Algorithms 
in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "sampling it at temperature one and that's exactly what the langevin algorithm aims at and the second estimator i will be looking at is the kind of more common one the maximum likelihood estimator that is computing the minimizer of that loss function or the ground state of the statistical physics model and that's what the gradient descent or flow aims at so just to get a little bit more familiar with this model if you listened to the bootcamp lecture you know we told you about a set of tools", "start_timestamp": "00:09:33", "end_timestamp": "00:10:08", "start_second": 573, "end_second": 608, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=573s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "that you can use to actually describe what's happening in a model such as this one from the point of view of information theory what is possible statistically and from the point of view of the approximate message passing algorithm which ends up being the best we know for this type of problem and this phase diagram summarizes what's going on so i will just explain it and then in this talk we are interested in what the gradient descent is doing so that will be the new part so on the axes here we have the variances of the noises delta 2 is the", "start_timestamp": "00:10:08", "end_timestamp": "00:10:41", "start_second": 608, "end_second": 641, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=608s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "noise added to the matrix and delta p is the noise added to the tensor so the bigger the noise the 
harder this inference problem will be and for instance if the delta p was infinity that would be effectively as if the tensor was not there it's not giving you any information so in that case you are in the case of spiked matrix factorization that is a problem widely studied in statistics and you know it has the bbp phase transition and that's precisely what the value delta 2 equal to 1 corresponds to so that's what", "start_timestamp": "00:10:41", "end_timestamp": "00:11:14", "start_second": 641, "end_second": 674, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=641s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "distinguishes the phase where the spike is impossible to recover from the easy phase where if you only had the matrix you recover the spike simply by spectral methods looking at the spectrum of the matrix then if the matrix was not there that is delta 2 would be infinity 1 over delta 2 would be 0 then you only have the spiked tensor model which you know information theoretically is solvable at some point highlighted with the red line here but even if the noise is smaller than that it's algorithmically hard and it's", "start_timestamp": "00:11:14", "end_timestamp": "00:11:50", "start_second": 674, "end_second": 710, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=674s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "also a problem that has been studied so in order to make the kind of computational question more interesting i mix them and if i mix them then you see what's going on there is this algorithmically hard phase appearing that we believe cannot be solved by any polynomial algorithm that's a conjecture and now what i want to be telling you about 
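As a concrete sketch of the observation model just described — the conventions below (rank-one spike on the sphere of radius sqrt(n), Gaussian noise of variances delta 2 and delta p on the matrix and order-3 tensor channels) are illustrative choices of mine, not necessarily the exact normalizations used on the slides:

```python
import numpy as np

rng = np.random.default_rng(0)
n, delta2, deltap = 30, 0.5, 2.0            # dimension and illustrative noise variances

# ground-truth spike on the sphere of radius sqrt(n)
x_star = rng.normal(size=n)
x_star *= np.sqrt(n) / np.linalg.norm(x_star)

# matrix channel: rank-one spike plus symmetric Gaussian noise of variance delta2
Y2 = np.outer(x_star, x_star) / np.sqrt(n) + np.sqrt(delta2) * rng.normal(size=(n, n))
Y2 = (Y2 + Y2.T) / 2

# tensor channel (p = 3): spike x* x* x* plus Gaussian noise of variance deltap
Yp = (np.einsum('i,j,k->ijk', x_star, x_star, x_star) / n
      + np.sqrt(deltap) * rng.normal(size=(n, n, n)))

# the larger delta2 or deltap, the weaker the signal carried by that channel
```

Inference then amounts to recovering x_star from the pair (Y2, Yp); the hamiltonian h of x mentioned earlier is the log-likelihood cost built from these two channels.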
is how gradient descent and langevin dynamics fit in this diagram you know do they do as good as the approximate message passing do they do worse and why so to define", "start_timestamp": "00:11:50", "end_timestamp": "00:12:24", "start_second": 710, "end_second": 744, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=710s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "what i mean more precisely by the langevin algorithm and the gradient flow it's simply a derivative so i will be working with the continuous time version here because that's the one that i know how to analyze so the time derivative of the x that's the variable over which i'm optimizing is simply equal to minus the gradient of the hamiltonian or the loss function plus a term that corresponds to weight decay or a spherical constraint as it would be called in physics plus noise that either is there and has a", "start_timestamp": "00:12:24", "end_timestamp": "00:12:59", "start_second": 744, "end_second": 779, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=744s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "covariance proportional to a constant that is called temperature in physics and if that constant t is equal to 1 then this is the langevin algorithm that is guaranteed at exponentially large times to sample the boltzmann measure and to solve the problem optimally but we will not be looking at exponentially large times because that's intractable our question will be what happens at tractable times so that will be constant or constant times logarithm of the dimension or something of that kind so that you know we can wait for such a long", "start_timestamp": "00:12:59", "end_timestamp": "00:13:32", "start_second": 
779, "end_second": 812, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=779s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "time and then if we simply don't put this additional noise there so this constant t is zero then this is the gradient flow so going to you know how that model is solvable so in statistical physics of disordered systems this work cited here is very well known it's basically the reference work that we have in physics to understand what's going on in materials such as structural glasses and it so happens that this work actually looked at a model very much related to the one we are studying here it's exactly the same one except", "start_timestamp": "00:13:32", "end_timestamp": "00:14:10", "start_second": 812, "end_second": 850, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=812s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "that it didn't have this ground truth vector x star so it's exactly the same loss function but the tensor and the matrix are created without this ground truth planted in but that's you know a complication of the model that can be worked out and you know this theory from this paper can be generalized and this is what we did i will not be going into details of the derivation that would be very lengthy but if you are interested in the details actually just two months ago there was a wonderful lecture by", "start_timestamp": "00:14:10", "end_timestamp": "00:14:46", "start_second": 850, "end_second": 886, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=850s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": 
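The two dynamics just described can be sketched in a few lines — here a discretized (Euler) version on a simplified, matrix-only spiked model; the step size, the spike strength lam, and the explicit re-projection onto the sphere are my own illustrative choices, not the exact scheme analyzed in the talk:

```python
import numpy as np

rng = np.random.default_rng(1)
n, lam = 50, 3.0                               # dimension and illustrative spike strength
x_star = rng.normal(size=n)
x_star *= np.sqrt(n) / np.linalg.norm(x_star)  # spike on the sphere of radius sqrt(n)

W = rng.normal(size=(n, n))
Y = (lam / n) * np.outer(x_star, x_star) + (W + W.T) / np.sqrt(2 * n)  # spiked matrix

def grad_loss(x):
    # gradient of the (matrix-only) loss h(x) = -x . Y . x / 2
    return -Y @ x

def run(x0, T, dt=0.01, steps=2000):
    """T = 1: discretized langevin dynamics; T = 0: discretized gradient flow."""
    x = x0.copy()
    for _ in range(steps):
        noise = np.sqrt(2 * T * dt) * rng.normal(size=n)  # temperature-T noise term
        x = x - dt * grad_loss(x) + noise
        x *= np.sqrt(n) / np.linalg.norm(x)               # enforce the spherical constraint
    return x

x0 = rng.normal(size=n)
x0 *= np.sqrt(n) / np.linalg.norm(x0)                     # random initialization
overlap_gf = abs(run(x0, T=0.0) @ x_star) / n             # gradient flow
overlap_lang = abs(run(x0, T=1.0) @ x_star) / n           # langevin at temperature one
```

With the spike this far above the spectral threshold, the T = 0 run behaves like projected power iteration and ends up well correlated with x_star; setting T = 1 adds the thermal noise that turns the same update into a sampler.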
"https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "my co-author francesco urbani that you can watch at the les houches website so this dynamical mean field theory that describes in a closed form what the gradient flow or the langevin algorithm i have the two versions here is doing is a set of equations that close on three parameters this function c of two times that is a correlation function this function c bar of one time that is a correlation between where the gradient flow is at a given time and the ground truth vector x star and a so called response function", "start_timestamp": "00:14:46", "end_timestamp": "00:15:23", "start_second": 886, "end_second": 923, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=886s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "r of again two times and in the limit when the size of the system goes to infinity these functions in the algorithm evolve following this set of pretty ugly looking equations but the kind of important thing here is that we started with a high dimensional problem the n that corresponds to the dimension was very large and the closed equations that we wrote are just on scalar variables these functions corresponding to two times but the dimension is not there anymore so we described the complicated high dimensional dynamics with an effective", "start_timestamp": "00:15:23", "end_timestamp": "00:16:00", "start_second": 923, "end_second": 960, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=923s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "set of equations that are just scalar equations and so since they are you know scalar they're also simple to solve so we can plug them 
in a computer program and solve them and yeah i will be going through several open problems during the talk so the first one of them is you know to prove that the dynamics gradient flow and langevin dynamics in this model indeed follow these equations and there has been related work in the past where you know this proof has been done but again for the version where there is not", "start_timestamp": "00:16:00", "end_timestamp": "00:16:34", "start_second": 960, "end_second": 994, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=960s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "the spike so the equations are not exactly the same so this you know is something where it is quite probably not so complicated to generalize these proofs to include the spike but it hasn't been done yet so i will not be talking about that instead i will be talking about what happens if we solve these equations what insight can we get about the behavior of this optimization problem so this is depicted here so as a function of the iteration time i am plotting the correlation with the ground truth", "start_timestamp": "00:16:34", "end_timestamp": "00:17:08", "start_second": 994, "end_second": 1028, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=994s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "and i start randomly so at the beginning it's zero and then it is growing eventually or not but here it is growing and depending on the value of the noise so there's the delta p here so a darker line here is larger delta p so larger noise is harder and indeed you are seeing that when it goes up the value at which it saturates is lower for the larger noise so that's intuitive it should be 
lower correlation because it's higher noise so it's a harder problem but what's not intuitive is that actually for larger noise the correlation the", "start_timestamp": "00:17:08", "end_timestamp": "00:17:45", "start_second": 1028, "end_second": 1065, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1028s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "good correlation with the ground truth is attained earlier whereas for smaller values of the noise it takes longer to find it so this is non-intuitive nevertheless this is what is happening here that's the property of the langevin algorithm in this problem and in the inset i'm just comparing to the very same lines for the approximate message passing algorithm which is another iterative but not gradient based algorithm that one behaves in the intuitive way the easier ones get there earlier but not for the langevin", "start_timestamp": "00:17:45", "end_timestamp": "00:18:19", "start_second": 1065, "end_second": 1099, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1065s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "so if i collect this information i can actually extrapolate the value of the noise at which the time to get a good correlation would diverge and if i plot in the phase diagram i showed before where this happens i actually get that the easy regime that is easy for the other algorithms say approximate message passing has actually a part the one that is colored orange green here that is hard for the langevin algorithm where the langevin algorithm you run it for a time that is proportional to the dimension maybe", "start_timestamp": "00:18:19", "end_timestamp": "00:18:56", "start_second": 1099, 
"end_second": 1136, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1099s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "with some polylog factors and it's still stuck at completely zero correlation with the ground truth and then if you are above this line where it is really only green then it reaches the optimal correlation so you can do exactly the same thing for the gradient flow and you will get you know another curve in this phase diagram which is a bit higher so the fact that it is higher is expected because this is a high dimensional problem with a lot of noise the maximum likelihood estimator here is not optimal the optimal one is the one", "start_timestamp": "00:18:56", "end_timestamp": "00:19:32", "start_second": 1136, "end_second": 1172, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1136s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "that samples the posterior so in a sense by running the gradient flow we are aiming to solve the wrong problem so no wonder that we do a bit worse so that's not surprising but you know it's a non-trivial curve in this diagram so can we explain it can we kind of understand intuitively where it comes from so kind of the popular explanation of why for some parameters the gradient flow would be working and why for others it will not be working will be kind of this cartoon with spurious local minima that either are", "start_timestamp": "00:19:32", "end_timestamp": "00:20:05", "start_second": 1172, "end_second": 1205, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1172s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} 
{"video_id": "rk7fIhCH8Gc", "text": "there or are not there if there are no spurious local minima then the gradient flow has basically no choice other than to go for the good one and if there are spurious local minima then it's a high dimensional problem there will typically be exponentially many of them so the intuition is that it will just fall into one of the exponentially many and not the good one so in this model the model is actually so kind of basic that we have access to actually counting exactly", "start_timestamp": "00:20:05", "end_timestamp": "00:20:37", "start_second": 1205, "end_second": 1237, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1205s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "how many minima there are at a given value of the energy of the loss and this is done by the so-called kac-rice approach and so here again i'm not giving the derivation just the resulting formula that is telling us you know that the entropy is as always for a number of something that is exponentially numerous the logarithm divided by the size of the system and it's the number of the local minima that have a given correlation with the ground truth the parameter m at a given value of the loss", "start_timestamp": "00:20:37", "end_timestamp": "00:21:13", "start_second": 1237, "end_second": 1273, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1237s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "corresponding to the matrix e2 and corresponding to the tensor e and again this is a result you know resulting from a series of works where these kinds of methods were developed so here what i'm showing you is the 
annealed entropy that is the expectation of the number of those minima but actually at zero correlation with the ground truth this is also the quenched one so it is also the expectation of the logarithm so we actually know when there are and when there are not spurious local minima and if we collect this from this formula", "start_timestamp": "00:21:13", "end_timestamp": "00:21:48", "start_second": 1273, "end_second": 1308, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1273s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "back to my phase diagram we are getting the purple line here so the purple line means that above it with high probability the only minimum that is there is the one that correlates with the signal and below it there are exponentially many spurious local minima not correlating with the signal and yet you see that these are not the same lines as the one starting from which the gradient flow is working so there is a region between the purple and the green line where there are exponentially many spurious local minima with no correlation to the signal yet", "start_timestamp": "00:21:48", "end_timestamp": "00:22:26", "start_second": 1308, "end_second": 1346, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1308s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "the gradient flow happily manages to ignore them and finds the good one so how is this possible so to understand how this is possible in this model we need to dig a little bit more into what is happening with the algorithm and we actually can look at the following plot that is showing us how the loss function the e on the y axis changes as we iterate as a function of the iteration time t and we find out that 
either for a high value of the noises the dynamics is stuck at some value of the loss that seems to be you know pretty flat", "start_timestamp": "00:22:26", "end_timestamp": "00:23:07", "start_second": 1346, "end_second": 1387, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1346s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "starting from some time about 100 here or it's actually stuck at that value but then somehow escapes from it and reaches good correlation with the signal that is the dashed line that's the magnetization and when we actually investigate whether that value at which it is stuck corresponds to something we find that yes it interestingly corresponds to the value of the loss that it would reach if the signal was not at all there if the x star was not in the model so just the non-planted model", "start_timestamp": "00:23:07", "end_timestamp": "00:23:43", "start_second": 1387, "end_second": 1423, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1387s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "and this is a value of energy that was studied a lot in physics that has a name that's the threshold energy and studying the non-planted system we actually can compute that value so here we make a hypothesis we say okay let's assume that the dynamics goes to this threshold energy and then what matters is whether the typical minima that lie at that energy not a lower one not a higher one exactly that one are stable or not towards the signal and that stability decides whether you stay there", "start_timestamp": "00:23:43", "end_timestamp": "00:24:22", "start_second": 1423, "end_second": 1462, "url": 
"https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1423s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "or whether you go to gain some correlation with the signal and what i'm saying in words here we can actually put into equations so here the first equation is where the threshold states are and the second equation telling us you know derived both from the kac-rice approach and directly also from the dynamical mean field theory but again the details are not shown here is the condition for the lowest eigenvalue of the corresponding hessian of the minima having an eigenvector that points towards the", "start_timestamp": "00:24:22", "end_timestamp": "00:24:58", "start_second": 1462, "end_second": 1498, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1462s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "signal or does not so if i put these two together i actually get the third expression here a conjecture for where the line is above which the gradient descent or langevin dynamics depending on the parameter t here will work and so this leads me to the following conjecture you know the conjecture is that gradient flow with random initialization finds the optimal correlation with the signal in time that is proportional to the input size which is n to the power p times some polynomial", "start_timestamp": "00:24:58", "end_timestamp": "00:25:36", "start_second": 1498, "end_second": 1536, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1498s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", 
"text": "of log n and if the noise is bigger than that then it does not so again open problem prove this conjecture and if i plot this expression into my phase diagram that is the blue line you see that it is perfectly agreeing with the points that i got previously by numerically solving this integro-differential dynamical mean field equation so this seems to be explaining whether or not the gradient flow works and i can do exactly the same", "start_timestamp": "00:25:36", "end_timestamp": "00:26:12", "start_second": 1536, "end_second": 1572, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1536s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "thing just plug t equal one instead of zero in the same expression and get the same result for the langevin algorithm that has an interesting point that actually the line that corresponds to this threshold reaches the line delta 2 equal one at delta p equal to two so there is a tricritical point so if delta p is bigger than two there is no langevin hard phase anymore but maybe let's go back to this popular explanation what was wrong with that you know absence or", "start_timestamp": "00:26:12", "end_timestamp": "00:26:50", "start_second": 1572, "end_second": 1610, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1572s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "presence of the spurious local minima at least in this particular model the correct explanation based on the landscape and kind of intuition about what the landscape looks like is the following one it's not the presence or absence of spurious local 
minima nor their number what it really is is the fact that the dynamics goes to the highest lying minima that happen to be the threshold states and what decides whether it finds the solution or not is whether these high-lying states have a negative direction in the hessian towards the", "start_timestamp": "00:26:50", "end_timestamp": "00:27:29", "start_second": 1610, "end_second": 1649, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1610s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "solution or not and if they do then the algorithm goes to the solution even though there still may be exponentially many spurious local minima at lower energy so they're not really spurious because the gradient flow just never ever sees them with probability that is one up to some exponentially small factor so here i should be at about the middle of the talk and i want to conclude about the spiked matrix-tensor model so i showed you you know this is i think the first time that we have a closed form conjecture for the threshold of what", "start_timestamp": "00:27:29", "end_timestamp": "00:28:04", "start_second": 1649, "end_second": 1684, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1649s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "a gradient-based algorithm is able to do including the constant in a high-dimensional non-convex inference problem and the question would be you know can we apply the same methodology to something that looks more like a supervised neural network a simple one we also show that the gradient flow is worse than the langevin algorithm that itself is expected but they are both worse than the approximate message passing there is quite a considerable gap so is there some generic kind 
of a strategy by which we can make them work as well as", "start_timestamp": "00:28:04", "end_timestamp": "00:28:38", "start_second": 1684, "end_second": 1718, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1684s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "the approximate message passing or at least closer so that would be a question related to the second point and the third point is i showed you that gradient flow sometimes works even when spurious local minima are present we showed that using the kac-rice approach but what about stochastic gradient descent so far i was only talking about gradient not stochastic so let's say a year ago i would have stopped here and said that the green questions would be open but today actually i do have an answer to", "start_timestamp": "00:28:38", "end_timestamp": "00:29:09", "start_second": 1718, "end_second": 1749, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1718s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "each of them so i will start with the first one so is the same methodology applicable to some simple neural networks in statistical physics when we kind of set up a model for data so that you can keep track of constants and not only rates and finite sample complexity the kind of popular model in which something like that can be done at least in simple neural networks is this teacher student setting so now i'm switching the model no more spiked matrix-tensor model in the talk now i'm going towards these teacher student neural networks", "start_timestamp": "00:29:09", "end_timestamp": "00:29:50", "start_second": 1749, "end_second": 1790, "url": 
"https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1749s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "where at the input i put iid data not only are the samples iid but also the components of every sample are iid so that's of course you know not real data real data do not look like that but that's part of the simplifying assumptions here then i take a neural network like for instance the one here i generate the weights of the neural network in some again random way i let this teacher neural network generate the labels y using those ground truth weights w star and then i hide the w stars i never show to the student network the w", "start_timestamp": "00:29:50", "end_timestamp": "00:30:26", "start_second": 1790, "end_second": 1826, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1790s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "stars i just show to the student network as traditionally the set of samples x and y and i will have n samples each sample will live in dimension p so before p was the order of the tensor now p will be the dimension till the end of the talk and then the student may or may not know the architecture of the teacher network i will actually be telling you about both cases in this talk and the question is what is the generalization error that the gradient descent is reaching depending on the number of samples that", "start_timestamp": "00:30:26", "end_timestamp": "00:31:00", "start_second": 1826, "end_second": 1860, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1826s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "it got from 
the teacher so this is a setting of neural networks that has been studied in physics for 30 years kind of the most common example would be this teacher student perceptron where the nonlinearity that the teacher is using is just a sign but with just a sign and no constraints on the weights w this becomes a convex optimization problem so today we are interested in intrinsically non-convex optimization problems so in order to make it more interesting and intrinsically non-convex we will actually be looking today at the phase", "start_timestamp": "00:31:00", "end_timestamp": "00:31:34", "start_second": 1860, "end_second": 1894, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1860s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "retrieval where the teacher instead of using a sign uses an absolute value of the scalar product of the samples and the ground truth weights or the teacher weights w star so the labels here will not be just binary we will be looking at this regression problem the data will be generated as is written here the input is gaussian the labels are obtained as the absolute value of the scalar product and the neural network then sees the set of samples and tries to regress the y on the x and so what do we know again", "start_timestamp": "00:31:34", "end_timestamp": "00:32:15", "start_second": 1894, "end_second": 1935, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1894s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "without yet talking about gradient descent what do we know about this problem information theoretically and in terms of the approximate message passing that also here is conjectured to be the best of the polynomial 
algorithms so here i am showing you the mean square error of recovering the w star which is you know very related to the generalization error just as in this plot as a function of the alpha which is the ratio between the number of samples and the dimension and both number of samples and dimension", "start_timestamp": "00:32:15", "end_timestamp": "00:32:49", "start_second": 1935, "end_second": 1969, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1935s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "are large i'm in the high dimensional limit and the ratio is some small constant here you know between 0.3 and 1.2 so information theoretically the generalization error can be zero as soon as you have more samples than the dimension in this problem that corresponds to the orange line now algorithmically you need slightly more samples than the dimension about 13 percent more for the approximate message passing to work and be able to generalize perfectly in this problem so now we will be looking at what", "start_timestamp": "00:32:49", "end_timestamp": "00:33:27", "start_second": 1969, "end_second": 2007, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1969s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "gradient descent does and how it compares to this so gradient descent on which loss function so corresponding to the phase retrieval the natural loss function is the one i write here that would correspond in a sense to a neural network with no hidden variable or one hidden variable with quadratic activation function so that's natural you know i just square the labels and instead of putting absolute value i put a square here
and then i'm looking at the performance of the gradient flow", "start_timestamp": "00:33:27", "end_timestamp": "00:33:58", "start_second": 2007, "end_second": 2038, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2007s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "so just to you know set up the stage of what is known and what we can expect so as i said ignoring gradient descent we know that starting from one the problem is solvable information theoretically and starting from 1.13 by some algorithm very adapted to this problem for the gradient flow what we know is this work that popped up here that rigorously shows that randomly initialized gradient descent will need the dimension times some polynomial of the log of the dimension samples in order to be able to solve the problem", "start_timestamp": "00:33:58", "end_timestamp": "00:34:37", "start_second": 2038, "end_second": 2077, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2038s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "so there is quite a big gap between 1.13 and an alpha that is some polynomial of the logarithm of the dimension so as physicists we always try to look numerically at what's actually going on so numerically if we are looking at what's the fraction of success of gradient descent in terms of solving this problem as the dimension is growing so here the capital n is actually what i call p the dimension so we are seeing that at alpha that is around six or seven it's already solving the problem almost always so can we understand", "start_timestamp": "00:34:37", "end_timestamp": "00:35:13", "start_second": 2077, "end_second": 2113, "url": 
"https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2077s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "that a bit more theoretically not just running the gradient decent that's of course nice but that's that's not just satisfactory so we take lessons from the spiked two plus piece per mole that i showed you and we kind of ask ours okay could it be could it be happening similarly as there could it be that the gradient flow first goes to the threshold states and then what matters is a kind of bvp-like transition of the hessian of these threshold states that drives the success versus failure and we just test this numerically", "start_timestamp": "00:35:13", "end_timestamp": "00:35:45", "start_second": 2113, "end_second": 2145, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2113s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "whether it looks like true and it actually does in the sense that if we look at the non-planted phase retrieval that would be the right hand side here that defines the value of the loss function that i call the threshold value and then if you look at the dynamics of the gradient flow in the planted version we see that it's quite possible that it's again going to the threshold and then away from it or not or staying stuck there so we again hypothesize that this is actually the mechanism and put it into equations this time the", "start_timestamp": "00:35:45", "end_timestamp": "00:36:18", "start_second": 2145, "end_second": 2178, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2145s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": 
"equations are slightly more complicated but we can still do it actually using a recent random matrix theory results from works of ulu and square collide collaborator lee and also the fact that the threshold states are marginal meaning that the lowest eigenvalue corresponding to them is stuck to zero and this if we combine it gives us an expression of what should be the threshold above which the gradient descent works as a function of this probability distribution of the true labels y and the labels y hat that the gradient", "start_timestamp": "00:36:18", "end_timestamp": "00:36:55", "start_second": 2178, "end_second": 2215, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2178s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "descent is currently estimating so that probability distribution is still something pretty not non-intuitive to capture but in the within the within the theory of one-step replica symmetry breaking that is again one of the methods coming from statistical physics we actually can estimate this probability distribution between the joint the true label and the label that is currently estimate currently estimated by the grading descent and this is shown in this picture i show so in the left hand side i actually show", "start_timestamp": "00:36:55", "end_timestamp": "00:37:30", "start_second": 2215, "end_second": 2250, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2215s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "the loss the value of the threshold energy of the threshold loss as it comes from simulations that is the purple line and as it comes from this one rsp theory they are not exactly equal here that's not the conjecture here is that this is not exact but 
they are close so we use this as an approximation on the right hand side i am showing again numerically obtained moments of the distribution on which the formula depends and the moment is computed from the one rsb theory and the agreement is pretty good so when we put", "start_timestamp": "00:37:30", "end_timestamp": "00:38:08", "start_second": 2250, "end_second": 2288, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2250s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "these things together you know ignoring the little differences this actually leads us to an estimation of the gradient descent threshold that is about 13.8 so if i put it back onto this axis i showed you that in the numerics this constant starting from which gradient descent is working looks like 7 and from the approximate theory we get something like 13.8 so we are not sure where the discrepancy comes from whether it is finite size effects and the numerics would actually converge to the 13.8 or whether it is the small difference", "start_timestamp": "00:38:08", "end_timestamp": "00:38:44", "start_second": 2288, "end_second": 2324, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2288s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "between the exact result and the one rsb approximation so both is possible but what is nevertheless clear is that it seems that it's a constant the polylog of p is not needed so here is another open problem prove that you know any constant times p is actually a sufficient number of samples for randomly initialized gradient descent to solve phase retrieval in time that is p times some polylog p so in the time the polylog p is not avoidable because otherwise you're just stuck at
kind of zero correlation but in the number of samples", "start_timestamp": "00:38:44", "end_timestamp": "00:39:24", "start_second": 2324, "end_second": 2364, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2324s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "the conjecture from our work is that it should be avoidable and that the true constant is somewhere around 7 or 13. but what about the gap between the performance of the approximate message passing and of the gradient descent there is you know still a big difference between say one and ten so can we somehow close that gap can we do something generic that would diminish that gap so that's the question for the next few slides and that corresponds to this you know when i was concluding about the spiked matrix model that was the second", "start_timestamp": "00:39:24", "end_timestamp": "00:40:03", "start_second": 2364, "end_second": 2403, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2364s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "point so that's the second point to which we are going and surprisingly or not we will do that by over parametrization so let's still look at the phase retrieval so the problem the regression problem we are trying to solve here is still the same so phase retrieval with random gaussian data and the teacher coming from a gaussian and generating the labels this didn't change but what changes now is the loss function so now the loss function that i will be considering doesn't correspond anymore to the simplest neural network with no", "start_timestamp": "00:40:03", "end_timestamp": "00:40:38", "start_second": 2403, "end_second": 2438, "url": 
"https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2403s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "hidden unit or one hit unit that's the same now the neural network will have m hidden units and i will be working in a regime where the number of hidden units is bigger than the dimension p so this is the over parametrized two-layer neural network i will be optimizing over the weights of the first layer this matrix w and the second layer will be fixed the weights of the second layer will be fixed to one over m or to one and i use the scaling one over m so i'm not really learning here the second layer but that", "start_timestamp": "00:40:38", "end_timestamp": "00:41:12", "start_second": 2438, "end_second": 2472, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2438s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "would know the conjecture kind of is that that wouldn't change much in the in the overall message of this so again i'm just running gradient flow on this loss function with a random initialization so how does this behave so this is a wide over parameterized two-layer neural network does this solve the phase you achieve or not and this is from a paper that that we that came up in june with a colleague from uiu eric vanden eiden and same student as stefano sarah mannelly where uh in two theorems we kind of answered", "start_timestamp": "00:41:12", "end_timestamp": "00:41:51", "start_second": 2472, "end_second": 2511, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2472s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "provide some answer to 
this question so the first theorem is purely geometric it is telling us that if you are looking at the loss function as i just defined it then if alpha is so alpha again was the ratio between the number of samples and the dimension so if alpha is smaller than 2 then this loss function has many spurious minima and if alpha is bigger than two then the probability that the only local minima that is there corresponds to the ground truth that would be this a star that is just the teacher vector times its transpose is", "start_timestamp": "00:41:51", "end_timestamp": "00:42:31", "start_second": 2511, "end_second": 2551, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2511s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "actually the only one so we believe that this is actually with probability one but what we could prove is this is only with positive probability but there is something clearly happening about the threshold alpha equal to two and this is purely geometric no gradient descent yet but when we put this together with our second theorem about the gradient descent that tells us that in terms of this parameter a that is the weight matrix times its transpose the gradient descent always goes to global minima then putting these two together actually", "start_timestamp": "00:42:31", "end_timestamp": "00:43:07", "start_second": 2551, "end_second": 2587, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2551s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "if there is only one global minimum corresponding to the ground truth with finite probability well then the gradient descent also goes there so this means that the gradient descent solves this problem by optimizing
this loss function corresponding to the over parameterized neural network starting from alpha equal to two and here is just a little plot that you know shows that just running gradient descent numerically on relatively small systems is pretty consistent with that result so if i put that back onto the", "start_timestamp": "00:43:07", "end_timestamp": "00:43:43", "start_second": 2587, "end_second": 2623, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2587s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "axis of alpha i obtain that by using an over parameterized neural network i can push down the threshold starting from which the gradient flow is working down to two so not yet to the 1.13 of amp but much lower than if i was not over parameterizing so the conclusion here is that over parameterized neural networks need fewer samples and this is a quantification of how much fewer samples in this particular model and the open problem would be and i really don't know the answer like is there a neural network architecture", "start_timestamp": "00:43:43", "end_timestamp": "00:44:21", "start_second": 2623, "end_second": 2661, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2623s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "maybe if you make it deeper or over parametrized differently for which the plain randomly initialized gradient descent would just need less than alpha equal 2 so less than 2p samples so i think that's an interesting kind of concrete question for this particular model and i might have time for the third point stop me if i don't but i wanted to mention the third point about what can we say about the stochastic gradient descent so far i was only talking about
gradient descent and or more precisely gradient flow because i was always considering the", "start_timestamp": "00:44:21", "end_timestamp": "00:44:58", "start_second": 2661, "end_second": 2698, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2661s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "continuous time version because that's the one that is easier to analyze so what about stochastic gradient descent so first of all a reminder of what's stochastic gradient descent it's you know the same thing but we are taking the samples one by one so when we say stochastic gradient descent in the literature we mean usually one of the two following things so either we mean the online stochastic gradient descent where each iteration uses a fresh sample and never uses a sample that was ever seen before and i like to call it online stochastic", "start_timestamp": "00:44:58", "end_timestamp": "00:45:33", "start_second": 2698, "end_second": 2733, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2698s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "gradient descent that one is simpler to analyze but it's also less interesting because it minimizes directly the population loss and there is no notion of the generalization gap the training and test errors are the same so a lot of the mysteries about how come the train error can be so much smaller than the test error that we are kind of asking in deep learning cannot really be answered by looking at the online stochastic gradient descent it's also not used in practice what's used in practice is multipass stochastic gradient descent", "start_timestamp": "00:45:33", "end_timestamp": "00:46:02", "start_second": 2733, "end_second": 2762, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2733s", 
"title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "where we use one or few samples at the time but we reuse the samples many times and this is much harder to analyze this has much less kind of existing theory but that's the one we want to look at because it's used in practice and it can access you know non-trivial generalization gap then so can we do that so first of all the first step above which i didn't even talk before because it was obvious how bright and descends in the limit of small learning rate becomes gradient flow for the stochastic gradient descent is", "start_timestamp": "00:46:02", "end_timestamp": "00:46:33", "start_second": 2762, "end_second": 2793, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2762s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "not so clear what is the limit of the infinitely small learning rate that actually is well defined so so i'm just explaining it on this slide so if i define stochastic gradient descent using this variable s of t that would be you know one for some samples and zero for some other samples so if i do what is usually done in stochastic gradient descent is that every at every time step i randomly choose who is in the batch and who is not in the batch well then this doesn't have a well-defined limit of the learning", "start_timestamp": "00:46:33", "end_timestamp": "00:47:06", "start_second": 2793, "end_second": 2826, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2793s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "rate going to zero it doesn't really have a gradient flow limit so this is 
not so nice for the dynamical mean field theory so we instead define a slightly different version of stochastic gradient descent we call persistent stochastic gradient descent where we as before have some fraction of samples that are in the batch but instead of reshuffling the batch at every time step randomly we actually decide at each time step whether we keep or not the sample in the batch and we keep that sample with some typical time that we call here", "start_timestamp": "00:47:06", "end_timestamp": "00:47:43", "start_second": 2826, "end_second": 2863, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2826s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "the persistence time following the rule that is written here so if we do it this way then we can take the limit of the learning rate to zero and it has actually a well-defined stochastic gradient flow limit so that is the dynamics that i will be analyzing on a model that here will be slightly different so it's not the phase retrieval the model on which we will be analyzing this is just a gaussian mixture a supervised learning of a gaussian mixture so in the two cluster case", "start_timestamp": "00:47:43", "end_timestamp": "00:48:20", "start_second": 2863, "end_second": 2900, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2863s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "that is on the left here i have two clusters one cluster is labels plus one cluster is labels minus and i'm trying to separate them so that's very simple i can just imagine there is some hyperplane in the middle separating them but the clusters are really noisy so i'm in a regime
where i will not be able to separate perfectly and yet this will even lead to a convex problem so that's more for kind of a comparison but the one that will be interestingly non-convex is the three cluster case where i have three gaussian clusters", "start_timestamp": "00:48:20", "end_timestamp": "00:48:54", "start_second": 2900, "end_second": 2934, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2900s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "two on the periphery and one in the middle but the two on the periphery they have the same label so this is a data set that is no longer linearly separable so actually to be able to have some meaningful learning the loss function that i will be using for these three clusters is actually kind of you know using the structure of the data set and i will be doing logistic regression but not directly on the data points but on the you know as specified here on this c mu okay but whether it is the two cluster case or", "start_timestamp": "00:48:54", "end_timestamp": "00:49:33", "start_second": 2934, "end_second": 2973, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2934s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "the three cluster case how do we describe the full trajectory of the gradient descent the complexity is really not so much helping us to describe the full trajectory so this will be again done with the dynamical mean field theory but this time with a little bit more advanced version of it because for the perceptron case the simple one that we used before is not quite working the equations do not close so simply but the spirit is the same here we start with this
high dimensional markovian dynamics of a strongly", "start_timestamp": "00:49:33", "end_timestamp": "00:50:08", "start_second": 2973, "end_second": 3008, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2973s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "correlated system and the dynamical mean field theory maps it into non-markovian dynamics so dynamics with memory but of one single degree of freedom so this is where we lose the high dimension and get a system that we can actually plug into a computer and analyze and here you know is that one dimensional system but this is a little you know i want to get to the results or conclude so i will not be explaining this in detail but again i was mentioning this lecture by pierre-francesco urbani at les houches where he actually derives this equation", "start_timestamp": "00:50:08", "end_timestamp": "00:50:46", "start_second": 3008, "end_second": 3046, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=3008s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "in detail that peter put on his video does that mean i should be concluding maybe sorry you have five minutes five minutes great that's fine so let me just like okay i will not be explaining every detail but let me just explain what this is so here the claim is that the dynamics of this classifier for the data set that is this high dimensional gaussian mixture behaves in the same way as the following scalar stochastic process for this variable h of t where t is just the iteration time that", "start_timestamp": "00:50:46", "end_timestamp": "00:51:28", "start_second": 3046, "end_second": 3088, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=3046s", "title": 
"Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "and the stochastic process actually has several types of noise so it has for instance a noise that corresponds to the regularization lambda which is just the ordinary reach legalization but it also has a noise that plays exactly the same role of the rich regularization but that came from the dynamics that is not there explicitly so this is some kind of implicit rich regularization that comes from this variable s of t that was the variable in the in the stochastic guiding descent that was deciding which sample is there and which sample is not", "start_timestamp": "00:51:28", "end_timestamp": "00:52:06", "start_second": 3088, "end_second": 3126, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=3088s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "there so that's that's kind of an interesting you know interpretation of what this stochastic model stochastic process actually is this is a interpretation of how kind of the implicit regularization might be coming out in these type of problems the second term is directly the noise coming from the stochastic grading descent because you might be saying that here i have batches that are still extensively big so maybe the noise doesn't matter so much so it actually does it's still explicitly there even in this effective", "start_timestamp": "00:52:06", "end_timestamp": "00:52:40", "start_second": 3126, "end_second": 3160, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=3126s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "dynamics and then there is a dynamical noise here 
that is just gaussian with some covariance matrix mc that is consistently computed you know via a set of closed equations and then there is some memory kernel here mr that also needs to be consistently computed from a set of equations so i put these in very small because you know these are kind of hard to grasp in just like one minute but it's again some work that comes from recent works in statistical physics and that can be directly adapted", "start_timestamp": "00:52:40", "end_timestamp": "00:53:18", "start_second": 3160, "end_second": 3198, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=3160s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "to this problem as we did in this paper here again an open problem would be of course a challenge once one goes in detail through what this scalar stochastic process is can we actually prove that it is equivalent to what the stochastic gradient flow is doing in this problem and you know once you compute numerically all these quantities from these equations then you can compute everything else including the training loss the test loss the generalization error the corresponding accuracies this", "start_timestamp": "00:53:18", "end_timestamp": "00:53:56", "start_second": 3198, "end_second": 3236, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=3198s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "is highlighted in this set of equations so with that what you can do is you can plot for instance a picture like that where i plot the generalization error and the training error in the inset the generalization error in the main part as a function of the time and now the points that's just running
the persistent stochastic gradient descent numerically on this data set so this is just a plain simulation of this simple neural network and the lines that's the result that i get from the dynamical mean field theory", "start_timestamp": "00:53:56", "end_timestamp": "00:54:30", "start_second": 3236, "end_second": 3270, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=3236s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "so it is you know you can see that it is describing the whole trajectory what is the generalization error at any given time for the persistent stochastic gradient descent so you see that at large times it is going somewhere but at intermediate times there would be some early stopping to do here and depending on the batch size and or on the persistence time it is not exactly the same curve and quite interestingly even the orange points actually would be the normal stochastic gradient descent and the line corresponding", "start_timestamp": "00:54:30", "end_timestamp": "00:55:05", "start_second": 3270, "end_second": 3305, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=3270s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "that is plotted there is in a sense not quite justified by our theory we just kind of ad hoc discretized the theory in the same way we would discretize the canonical stochastic gradient descent it still seems to work which is a bit puzzling to us we don't really know why it should but so we didn't maybe even need to do this persistent stochastic gradient descent for this theory to work or maybe yes maybe there is some small error that doesn't show up on you know this comparison with numerics", "start_timestamp": 
"00:55:05", "end_timestamp": "00:55:36", "start_second": 3305, "end_second": 3336, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=3305s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "so this we don't know yet and you can you know look at other things for instance this um this case of two clusters is quite interesting because if you don't do the stochastic gradient descent but fulbright guiding this and that would be this this picture that is of course just a special case of the equations that i just wrote it has this specular behavior that if you initialize so r is the variance at initialization if you initialize at zero with really small variance then after one iteration you reach the base optimal error", "start_timestamp": "00:55:36", "end_timestamp": "00:56:10", "start_second": 3336, "end_second": 3370, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=3336s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "and then the gradient descent is actually driving you away from it and training accuracy is growing you know this is a regime where you're interpolating the training accuracy goes to one but the test error is getting worse so actually after one iteration you were perfectly optimal but then the gradient is driving you away from the perfect generalization not perfectionist the optimal generalization point so that's a kind of specular property of this particular two cluster model that we that we discovered in this paper and", "start_timestamp": "00:56:10", "end_timestamp": "00:56:45", "start_second": 3370, "end_second": 3405, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=3370s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": 
"https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "and you can look at other things such as you know looking how different the dynamics is when you are changing the batch size and comparing how the dynamics changes when you're changing the ridge legalization of the loss and this is what's on these two pictures so i see that as i'm changing the batch size the time scale where i start to decrease is changing because the number of iterations i need with smaller batch size is bigger so this intuitive but otherwise if the curves look kind of comparable to the ones if i", "start_timestamp": "00:56:45", "end_timestamp": "00:57:21", "start_second": 3405, "end_second": 3441, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=3405s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "was adding more and more regularization in the sense that the blue ones that is just gradient descent no real backsides one is as if i was regularizing only a little in this case and regularizing a lot is actually corresponding to the smallest batch slice in this in this picture but you know that's that's just like observing how the how the curves look like so there's no like formal statement here so this was the last figure that i wanted to show you and just to conclude so you know i was telling you about this dynamical mean", "start_timestamp": "00:57:21", "end_timestamp": "00:57:58", "start_second": 3441, "end_second": 3478, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=3441s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "field theory and the results coming from it that is able to track the full trajectory of the grounding descent or the stochastic realities and for a 
range of our synthetic models for the data. and there are of course many directions in which we would want to extend this, including all the open problems that I stated; we would like to have more math and rigor in that, but also deduce more insights just by looking at what the dynamical mean-field equations are telling us as a function of all the hyperparameters and", "start_timestamp": "00:57:58", "end_timestamp": "00:58:33", "start_second": 3478, "end_second": 3513, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=3478s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "rk7fIhCH8Gc", "text": "the nature of the noise that I was describing in the equations. and we can look at other data models, these are not the only ones for which this can be written, and we can look at networks that actually have hidden variables, and at variants of the gradient descent and stochastic gradient descent that have momentum, for instance, etc. so this is hopefully to come, and I just flash back the list of papers from the beginning that I covered in this talk, and open for the discussion if there is still time for the", "start_timestamp": "00:58:33", "end_timestamp": "00:59:08", "start_second": 3513, "end_second": 3548, "url": "https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=3513s", "title": "Insights on Gradient-Based Algorithms in High-Dimensional Learning", "thumbnail": "https://i.ytimg.com/vi/rk7fIhCH8Gc/maxresdefault.jpg"} {"video_id": "1L83tM8nwHU", "text": "hi there, today we're looking at Manifold Mixup: Better Representations by Interpolating Hidden States, by Vikas Verma et al., a number of big names on this paper as you can see. I also saw this at ICML and I was intrigued by it. they propose manifold mixup, which is sort of a regularizer of neural networks, specifically of supervised learning, and it's actually a pretty simple
concept, and they show that it has some nice properties and outperforms other regularizers. so what's the problem? the problem is that if you look at this", "start_timestamp": "00:00:00", "end_timestamp": "00:00:44", "start_second": 0, "end_second": 44, "url": "https://www.youtube.com/watch?v=1L83tM8nwHU&t=0s", "title": "Manifold Mixup: Better Representations by Interpolating Hidden States", "thumbnail": "https://i.ytimg.com/vi/1L83tM8nwHU/maxresdefault.jpg"} {"video_id": "1L83tM8nwHU", "text": "spiral problem here, which is often used to show properties of neural networks. what you have are blue points, and the blue points are one class, and the red points are another class. you see the two classes here are in this kind of spiral pattern, and the data space is just two-dimensional, so you see here this is one class, this is the other class. this is pretty difficult for a model to learn, because of course the easy models would be linear classifiers, but there's no way to put a line through this such that one", "start_timestamp": "00:00:44", "end_timestamp": "00:01:22", "start_second": 44, "end_second": 82, "url": "https://www.youtube.com/watch?v=1L83tM8nwHU&t=44s", "title": "Manifold Mixup: Better Representations by Interpolating Hidden States", "thumbnail": "https://i.ytimg.com/vi/1L83tM8nwHU/maxresdefault.jpg"} {"video_id": "1L83tM8nwHU", "text": "class is mostly on one side. so neural networks, if you train them, will give you something like you see here: they will try to bound the regions with the red points off from the blue points, but then there are some weird things, like here is a weird thing and here is a weird thing. you'd imagine a correct model would actually classify this area as blue, but the neural network has no concept of, let's say, the spiral continuing; it simply sees: here's blue, here's blue, here's a bit of", "start_timestamp": "00:01:22", "end_timestamp": "00:02:00",
"start_second": 82, "end_second": 120, "url": "https://www.youtube.com/watch?v=1L83tM8nwHU&t=82s", "title": "Manifold Mixup: Better Representations by Interpolating Hidden States", "thumbnail": "https://i.ytimg.com/vi/1L83tM8nwHU/maxresdefault.jpg"} {"video_id": "1L83tM8nwHU", "text": "a gap in the training data so yeah it in this case it assigns a red class to it this is one problem that the decision boundaries are rather not say squiggly and irregular and the second one if you look at the actual colors full blue means very confident blue class full red means very confident red class and in between you kind of see going into the the white so if you look very closely I can actually zoom in more here if you look very closely you'll see that the blue gets lighter and lighter until it reaches white and from here the red goes", "start_timestamp": "00:02:00", "end_timestamp": "00:02:40", "start_second": 120, "end_second": 160, "url": "https://www.youtube.com/watch?v=1L83tM8nwHU&t=120s", "title": "Manifold Mixup: Better Representations by Interpolating Hidden States", "thumbnail": "https://i.ytimg.com/vi/1L83tM8nwHU/maxresdefault.jpg"} {"video_id": "1L83tM8nwHU", "text": "lighter and lighter until it reaches white and white means not confident white means like 5050 they see the that area of not confident is actually very small right if you consider a point here is actually still very confident that it's a blue point and the area of Nan confidence is very small even though maybe as as humans we would judge like a relatively large band in the middle to be not confident like if we get a point like this and the third problem is that you can see in multiple locations like here or here or here that the decision", "start_timestamp": "00:02:40", "end_timestamp": "00:03:23", "start_second": 160, "end_second": 203, "url": "https://www.youtube.com/watch?v=1L83tM8nwHU&t=160s", "title": "Manifold Mixup: Better Representations by Interpolating Hidden States", "thumbnail": 
"https://i.ytimg.com/vi/1L83tM8nwHU/maxresdefault.jpg"} {"video_id": "1L83tM8nwHU", "text": "boundary is very close to the data points unnecessarily close so especially if you look here the decision boundary could be much more optimally placed probably something like this right given the training data but the neural networks because the only C training data they they have no basically no incentive to do this all right one might think of you know something like a support vector machine that actually has an incentive to to put the decision boundary away from the from the training data but the neural networks currently", "start_timestamp": "00:03:23", "end_timestamp": "00:04:05", "start_second": 203, "end_second": 245, "url": "https://www.youtube.com/watch?v=1L83tM8nwHU&t=203s", "title": "Manifold Mixup: Better Representations by Interpolating Hidden States", "thumbnail": "https://i.ytimg.com/vi/1L83tM8nwHU/maxresdefault.jpg"} {"video_id": "1L83tM8nwHU", "text": "there are not SVM's they're basically logistic regressions and as such have no no incentive to do this so this these are the problems the other problems are this is the input space if you look at the hidden space so they build neural networks specifically they have like the 2d input and then that goes through a bunch of layers and then at one point there's a bottleneck layer was just two hidden nodes and then I guess that goes again and then it goes into a classifier so in this bottleneck layer they analyze the hidden", "start_timestamp": "00:04:05", "end_timestamp": "00:04:42", "start_second": 245, "end_second": 282, "url": "https://www.youtube.com/watch?v=1L83tM8nwHU&t=245s", "title": "Manifold Mixup: Better Representations by Interpolating Hidden States", "thumbnail": "https://i.ytimg.com/vi/1L83tM8nwHU/maxresdefault.jpg"} {"video_id": "1L83tM8nwHU", "text": "representations of the data points and in this case for this spiral dataset what happens is so in red you see again the red classes in blue the blue 
class. it's 2d so you can plot it. what it does is it bunches up the hidden representations; so it bunches them kind of up, it spreads them out in directions here, here, here, most are bunched up here, and it does these kind of weird arrangements here with the pockets of those, and of course the neural network is powerful enough such that it can actually separate", "start_timestamp": "00:04:42", "end_timestamp": "00:05:19", "start_second": 282, "end_second": 319, "url": "https://www.youtube.com/watch?v=1L83tM8nwHU&t=282s", "title": "Manifold Mixup: Better Representations by Interpolating Hidden States", "thumbnail": "https://i.ytimg.com/vi/1L83tM8nwHU/maxresdefault.jpg"} {"video_id": "1L83tM8nwHU", "text": "all of this from each other, but it's not ideal. and the black dots, they represent points in between, or points from the input space that are not part of the training data; they say they sample uniformly in the range of the input space. you see that the black dots are all over the place: some are confident blue, some are confident red, some are somewhere in between. what you would expect from a good model is that if you input something that's kind of in-between, or not really sure, not even part of the input distribution, it", "start_timestamp": "00:05:19", "end_timestamp": "00:05:54", "start_second": 319, "end_second": 354, "url": "https://www.youtube.com/watch?v=1L83tM8nwHU&t=319s", "title": "Manifold Mixup: Better Representations by Interpolating Hidden States", "thumbnail": "https://i.ytimg.com/vi/1L83tM8nwHU/maxresdefault.jpg"} {"video_id": "1L83tM8nwHU", "text": "assigns a low confidence to it, that it says: well, I'm not sure about this, this must be somewhere in the middle. so just to jump forward to the results: what does manifold mixup do, without knowing what it is? on the same data set it gives you a picture like this. you see the decision boundaries are much more smooth, the region of no confidence or of low
confidence, indicated by the light color, is much larger, and also the decision boundary, here we had specifically this data point, here you", "start_timestamp": "00:05:54", "end_timestamp": "00:06:32", "start_second": 354, "end_second": 392, "url": "https://www.youtube.com/watch?v=1L83tM8nwHU&t=354s", "title": "Manifold Mixup: Better Representations by Interpolating Hidden States", "thumbnail": "https://i.ytimg.com/vi/1L83tM8nwHU/maxresdefault.jpg"} {"video_id": "1L83tM8nwHU", "text": "see the decision boundary is pushed away. well, you could argue about that particular point, but the decision boundary is generally pushed away from the data points. you also see no more of these squiggles here; it doesn't happen in here. also, if you look at the hidden representations, the hidden representations now are spread out, the classes are bunched up, so not all the points are bunched up, but the points of individual classes are bunched up together, and the randomly sampled points are in the middle, as they should be: so", "start_timestamp": "00:06:32", "end_timestamp": "00:07:14", "start_second": 392, "end_second": 434, "url": "https://www.youtube.com/watch?v=1L83tM8nwHU&t=392s", "title": "Manifold Mixup: Better Representations by Interpolating Hidden States", "thumbnail": "https://i.ytimg.com/vi/1L83tM8nwHU/maxresdefault.jpg"} {"video_id": "1L83tM8nwHU", "text": "only confident red is down here, confident blue is up here, and everything in between is unconfident. and third, if you look at the singular value decomposition of the hidden layer, and that's kind of a measure of how spread out in the different dimensions a dataset is, you see that the manifold mixup here in green concentrates, or really lowers, the singular values of the lower indexes, so the first singular value is large, which means that there is a dominant direction in the data, and this is", "start_timestamp": "00:07:14", "end_timestamp": "00:08:01", "start_second": 434, "end_second":
481, "url": "https://www.youtube.com/watch?v=1L83tM8nwHU&t=434s", "title": "Manifold Mixup: Better Representations by Interpolating Hidden States", "thumbnail": "https://i.ytimg.com/vi/1L83tM8nwHU/maxresdefault.jpg"} {"video_id": "1L83tM8nwHU", "text": "done for each class separately as I understand it it puts a lot of weight on the first singular vector and then it pushes down the contributions of the other singular vector which means that the data set that is analyzed is is concentrated in two fewer directions of variance this is layer one and here is layer three means so you see it happens in both that the manifold makes up compared to the baseline model does this so now you might ask what is manifold mix-up it's actually a pretty pretty simple concept right here is another", "start_timestamp": "00:08:01", "end_timestamp": "00:08:45", "start_second": 481, "end_second": 525, "url": "https://www.youtube.com/watch?v=1L83tM8nwHU&t=481s", "title": "Manifold Mixup: Better Representations by Interpolating Hidden States", "thumbnail": "https://i.ytimg.com/vi/1L83tM8nwHU/maxresdefault.jpg"} {"video_id": "1L83tM8nwHU", "text": "comparing it to other kind of regularization techniques and showing that none of them really does this so manifold mix-up is this basically what you do is when you train a neural network you have input data and you take mini batches of input data specifically you take two mini batches x and y and X prime Y Prime all right and then what you do is if I have the draw the neural network here so here is the inputs like a picture of a cat [Music] it goes through layers right and then what you do is you say at some", "start_timestamp": "00:08:45", "end_timestamp": "00:09:32", "start_second": 525, "end_second": 572, "url": "https://www.youtube.com/watch?v=1L83tM8nwHU&t=525s", "title": "Manifold Mixup: Better Representations by Interpolating Hidden States", "thumbnail": "https://i.ytimg.com/vi/1L83tM8nwHU/maxresdefault.jpg"} {"video_id": "1L83tM8nwHU", 
"text": "particular you say stop stop right you take the representation up you and you do this with two different mini batches so here is this is cat one down back here is cat two dog that's a captain you pass it in right here you take it out here you pass it through the network and you take it out so you now have two different forward paths of two different mini batches and then you define a lambda and I guess they randomly sample a lambda in zero one right in the range of 0 1 so this is a mixing coefficient and then you mix you", "start_timestamp": "00:09:32", "end_timestamp": "00:10:22", "start_second": 572, "end_second": 622, "url": "https://www.youtube.com/watch?v=1L83tM8nwHU&t=572s", "title": "Manifold Mixup: Better Representations by Interpolating Hidden States", "thumbnail": "https://i.ytimg.com/vi/1L83tM8nwHU/maxresdefault.jpg"} {"video_id": "1L83tM8nwHU", "text": "say lambda times hidden representation of batch 1 plus 1 minus lambda of hidden representation of batch 2 and that is what you pass through the rest of the network right so basically you forward propagate to different batches until a certain layer here then you mix them with a random coefficient and then you pass it through the rest and then the only thing you also have to do is then at the end if you think of the labels of these two things you want to mix the labels in the same fashion so you want to mix lambda", "start_timestamp": "00:10:22", "end_timestamp": "00:11:09", "start_second": 622, "end_second": 669, "url": "https://www.youtube.com/watch?v=1L83tM8nwHU&t=622s", "title": "Manifold Mixup: Better Representations by Interpolating Hidden States", "thumbnail": "https://i.ytimg.com/vi/1L83tM8nwHU/maxresdefault.jpg"} {"video_id": "1L83tM8nwHU", "text": "times y of batch 1 plus 1 minus lambda of Y of batch 2 and and then this is your training signal for whatever comes out here right so it's it's um these are these are one hot labels so if it's class three its zero zero one zero zero and if Y 
two is class five it's zero zero zero zero one, and then you simply mix the two, and that becomes your training signal. so in a practical example, let's just have a mini-batch size of one, so just one sample: if this is cat and this is dog, you would pass them forward and you would mix, so in", "start_timestamp": "00:11:09", "end_timestamp": "00:11:57", "start_second": 669, "end_second": 717, "url": "https://www.youtube.com/watch?v=1L83tM8nwHU&t=669s", "title": "Manifold Mixup: Better Representations by Interpolating Hidden States", "thumbnail": "https://i.ytimg.com/vi/1L83tM8nwHU/maxresdefault.jpg"} {"video_id": "1L83tM8nwHU", "text": "the hidden representation it would kind of become a cat-dog, maybe you do it 50/50, but then you would also mix the labels of cat and dog 50/50 and tell the net: well, this is a mixture of 50% cat, 50% dog, and then you would train the network to predict that 50/50 mixture. so they do this. the question is, at which layer do you do this? and they simply, I think, for each mini-batch sample one hidden layer at random; they might have some weighting or something, but the way they describe it is they simply sample one layer per mini", "start_timestamp": "00:11:57", "end_timestamp": "00:12:36", "start_second": 717, "end_second": 756, "url": "https://www.youtube.com/watch?v=1L83tM8nwHU&t=717s", "title": "Manifold Mixup: Better Representations by Interpolating Hidden States", "thumbnail": "https://i.ytimg.com/vi/1L83tM8nwHU/maxresdefault.jpg"} {"video_id": "1L83tM8nwHU", "text": "batch and then do the mixing there, and then you can actually backprop through everything; everything is differentiable, this mixing is differentiable, so you can backprop through all of it. and there's even a kind of engineering trick to only use a single mini-batch, by mixing it with itself. so that's pretty neat. so this is manifold mixup; as you can see, that's kind of the description: you mix the hidden representations with
lambda and you mix the labels with the same lambda, and that will become your", "start_timestamp": "00:12:36", "end_timestamp": "00:13:07", "start_second": 756, "end_second": 787, "url": "https://www.youtube.com/watch?v=1L83tM8nwHU&t=756s", "title": "Manifold Mixup: Better Representations by Interpolating Hidden States", "thumbnail": "https://i.ytimg.com/vi/1L83tM8nwHU/maxresdefault.jpg"} {"video_id": "1L83tM8nwHU", "text": "actual training signal. all right, so they give some theory to it, that it flattens representations. specifically they say, under some conditions, namely if the network is large enough, so if the dimension of the hidden representation is of a certain size, then if you optimize this manifold mixup, like if you optimize over every lambda over the entire training data set, what you will end up with is actually a linear function of the input. this is not too surprising, because what you do is you mix linearly;", "start_timestamp": "00:13:07", "end_timestamp": "00:13:57", "start_second": 787, "end_second": 837, "url": "https://www.youtube.com/watch?v=1L83tM8nwHU&t=787s", "title": "Manifold Mixup: Better Representations by Interpolating Hidden States", "thumbnail": "https://i.ytimg.com/vi/1L83tM8nwHU/maxresdefault.jpg"} {"video_id": "1L83tM8nwHU", "text": "this mixture happens in a linear fashion, so if you not only optimize for the training set but optimize for every possible mixture of the training set, a linear mixture, your minimizer function will actually become a linear function. it's not surprising, but they have a formal proof of this, and they also have a proof that if certain assumptions are given, then if you apply the minimizers, the hidden representations will actually fall on a low-dimensional subspace, which is also", "start_timestamp": "00:13:57", "end_timestamp": "00:14:41", "start_second": 837, "end_second": 881, "url": "https://www.youtube.com/watch?v=1L83tM8nwHU&t=837s",
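The mixing step described in this transcript (mix two batches' hidden activations at a randomly sampled layer with a coefficient lambda, and mix their one-hot labels with the same lambda) can be sketched in a few lines. This is a minimal NumPy illustration of the idea, not the authors' implementation; the function name and the Beta(alpha, alpha) default for sampling lambda are assumptions for the sketch.

```python
import numpy as np

def manifold_mixup_step(h1, y1, h2, y2, alpha=2.0, rng=None):
    """Mix the hidden activations of two mini-batches at some layer,
    and mix their one-hot labels with the same coefficient.

    h1, h2: hidden activations at the sampled layer, shape (batch, features)
    y1, y2: one-hot labels, shape (batch, num_classes)
    alpha:  Beta-distribution parameter for sampling lambda (assumed default)
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)           # mixing coefficient in [0, 1]
    h_mix = lam * h1 + (1.0 - lam) * h2    # mixed hidden representation
    y_mix = lam * y1 + (1.0 - lam) * y2    # mixed (soft) training target
    return h_mix, y_mix, lam
```

With lam = 0.5 and one-hot cat/dog labels [1, 0] and [0, 1], y_mix is [0.5, 0.5], i.e. the "50% cat, 50% dog" target from the transcript; the mixed activations continue through the rest of the network and everything stays differentiable.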
"title": "Manifold Mixup: Better Representations by Interpolating Hidden States", "thumbnail": "https://i.ytimg.com/vi/1L83tM8nwHU/maxresdefault.jpg"} {"video_id": "1L83tM8nwHU", "text": "not surprising but it's kind of the theoretical an analogue to what they show with the singular value distribution that it basically suppresses low singular values that means the data set is much more into a single direction the hidden representations sorry all right so this the theory part is you can you can read it if you if you want to it's yeah it's it's - the results are to be expected I would say from what they do and the last thing they give a pictorial example of why none fold mixup flattened representations so", "start_timestamp": "00:14:41", "end_timestamp": "00:15:27", "start_second": 881, "end_second": 927, "url": "https://www.youtube.com/watch?v=1L83tM8nwHU&t=881s", "title": "Manifold Mixup: Better Representations by Interpolating Hidden States", "thumbnail": "https://i.ytimg.com/vi/1L83tM8nwHU/maxresdefault.jpg"} {"video_id": "1L83tM8nwHU", "text": "both of these things the fact that the minimizer's will become linear functions and the fact that the singular value spectrum is more concentrated on the first thing you'll evaluate a shion's are flattened and here is a pictorial representation so in this case what happens if you if you basically have these four data points a 1 a 2 B 1 and B 2 where a 1 and a 2 are blue class and B 1 and B 2 or red class and if you now look at an interpolation point between the two so if you look at this interpolation point between a 1 and B 2", "start_timestamp": "00:15:27", "end_timestamp": "00:16:20", "start_second": 927, "end_second": 980, "url": "https://www.youtube.com/watch?v=1L83tM8nwHU&t=927s", "title": "Manifold Mixup: Better Representations by Interpolating Hidden States", "thumbnail": "https://i.ytimg.com/vi/1L83tM8nwHU/maxresdefault.jpg"} {"video_id": "1L83tM8nwHU", "text": "what happens is that in this case this should be 
50/50 blue and red. but if you now look at a point where it's not interpolated 50/50, this one is very close to A2, in this case it probably should be more like 95 blue and 5 red. so they say here: well, if you use manifold mixup to learn the network, what you'll actually do is say, ok, actually this hidden representation needs to be pushed outward, and you will achieve something over here where any mixture of two points of the opposite classes will actually give you a 50/50, so all the", "start_timestamp": "00:16:20", "end_timestamp": "00:17:12", "start_second": 980, "end_second": 1032, "url": "https://www.youtube.com/watch?v=1L83tM8nwHU&t=980s", "title": "Manifold Mixup: Better Representations by Interpolating Hidden States", "thumbnail": "https://i.ytimg.com/vi/1L83tM8nwHU/maxresdefault.jpg"} {"video_id": "1L83tM8nwHU", "text": "midpoints here will give you a 50/50 mixture between the labels, which basically means what you end up with is a line between this data and this data, and it means that basically the network becomes more linear and the representations become more flat, because flat is optimal: if you distribute them flat, all the distances to the line are the same, and this objective is optimized. and this is basically my biggest problem with the method: it kind of mixes the input with a linear function, where we know", "start_timestamp": "00:17:12", "end_timestamp": "00:18:02", "start_second": 1032, "end_second": 1082, "url": "https://www.youtube.com/watch?v=1L83tM8nwHU&t=1032s", "title": "Manifold Mixup: Better Representations by Interpolating Hidden States", "thumbnail": "https://i.ytimg.com/vi/1L83tM8nwHU/maxresdefault.jpg"} {"video_id": "1L83tM8nwHU", "text": "that that is kind of not the shape of the true data manifold; the input manifold, as you can see here, isn't linear or flat, it's actually very tangled, and we know that neural networks, as you continue in the layers, will flatten those representations
because ultimately, at the end, it needs to classify the dataset linearly, because the last layer is a softmax layer. but the idea that you could apply this to any layer seems a bit shady to me. of course it works, and they show it works, and it's really nice", "start_timestamp": "00:18:02", "end_timestamp": "00:18:46", "start_second": 1082, "end_second": 1126, "url": "https://www.youtube.com/watch?v=1L83tM8nwHU&t=1082s", "title": "Manifold Mixup: Better Representations by Interpolating Hidden States", "thumbnail": "https://i.ytimg.com/vi/1L83tM8nwHU/maxresdefault.jpg"} {"video_id": "1L83tM8nwHU", "text": "that it works, but applying this to low layers in neural networks seems a bit unprincipled to me. so I think this is not the end of the story of this line of work, and there is more that can be done in a more principled fashion, but in any case they show that this actually works in terms of generalization performance on standard data sets. they have results on CIFAR-10 and CIFAR-100, which are famous image data sets, and they show that their regularizer outperforms others, and they also show that they can withstand", "start_timestamp": "00:18:46", "end_timestamp": "00:19:35", "start_second": 1126, "end_second": 1175, "url": "https://www.youtube.com/watch?v=1L83tM8nwHU&t=1126s", "title": "Manifold Mixup: Better Representations by Interpolating Hidden States", "thumbnail": "https://i.ytimg.com/vi/1L83tM8nwHU/maxresdefault.jpg"} {"video_id": "1L83tM8nwHU", "text": "single-step adversarial attacks better; so they have a better performance against single-step adversarial attacks after regularizing. this again gives kind of an idea that if you have two points, this is X1, this is X2, and they're of different classes, then if you put the decision boundary really close to X2, an adversarial attack can simply move the point across the decision boundary with a very small step, but if you
actually have the decision boundary", "start_timestamp": "00:19:35", "end_timestamp": "00:20:22", "start_second": 1175, "end_second": 1222, "url": "https://www.youtube.com/watch?v=1L83tM8nwHU&t=1175s", "title": "Manifold Mixup: Better Representations by Interpolating Hidden States", "thumbnail": "https://i.ytimg.com/vi/1L83tM8nwHU/maxresdefault.jpg"} {"video_id": "1L83tM8nwHU", "text": "pushed away from both data points, then an adversarial attack must go a very long way to the decision boundary, and thus if you limit the size of adversarial attacks, which is what you usually do, it can maybe not reach this decision boundary, and thus you mitigate some of the problem. so it's pretty cool. there's work to be done, but I think this is pretty cool; it's implemented pretty easily, I've seen there are a lot of libraries already available with it in, and it won't hurt to add this to your code and make your", "start_timestamp": "00:20:22", "end_timestamp": "00:21:03", "start_second": 1222, "end_second": 1263, "url": "https://www.youtube.com/watch?v=1L83tM8nwHU&t=1222s", "title": "Manifold Mixup: Better Representations by Interpolating Hidden States", "thumbnail": "https://i.ytimg.com/vi/1L83tM8nwHU/maxresdefault.jpg"} {"video_id": "a6v92P0EbJc", "text": "hi there, today we're looking at Neural Architecture Search without Training by Joseph Mellor, Jack Turner, Amos Storkey and Elliot J. Crowley. on a high level, this paper performs neural architecture search by looking at the correlation matrices of the Jacobians of the data when you pass it through the network, and it does so at initialization: so you pass the data, look at the Jacobians, and if they're very correlated, then the network is bad, and if they're very uncorrelated, then the network is good, and by simply observing that they can", "start_timestamp": "00:00:00", "end_timestamp": "00:00:40", "start_second": 0, "end_second": 40, "url": "https://www.youtube.com/watch?v=a6v92P0EbJc&t=0s", "title": "Neural Architecture
Search without Training (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a6v92P0EbJc/maxresdefault.jpg"} {"video_id": "a6v92P0EbJc", "text": "already achieve a very good score on a neural architecture search benchmark. all right, that was the high level, and maybe a bit too simplified, but that's sort of what's going on. ok, let's dive in. so what's neural architecture search? neural architecture search is the discipline of: you are given a data set, let's say here we have a data set which could be something like CIFAR-10, which is an image data set, and you are given a sort of training procedure, let's say Adam or SGD for 100,000 steps or something like this, with mini-batches of size 64, ok, and", "start_timestamp": "00:00:40", "end_timestamp": "00:01:23", "start_second": 40, "end_second": 83, "url": "https://www.youtube.com/watch?v=a6v92P0EbJc&t=40s", "title": "Neural Architecture Search without Training (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a6v92P0EbJc/maxresdefault.jpg"} {"video_id": "a6v92P0EbJc", "text": "you're given a loss function, which here could be the cross-entropy between the outputs of the network, which we'll call L, and the label Y, and your task is now to find a neural network architecture that conforms to these specifications but gives the lowest possible loss, or, sorry, the highest possible validation accuracy in this case. so this here would be the train accuracy, and then you'd have the test accuracy or the validation accuracy. ok, so you could decide, well, I'm gonna go with, you know, first like three", "start_timestamp": "00:01:23", "end_timestamp": "00:02:00", "start_second": 83, "end_second": 120, "url": "https://www.youtube.com/watch?v=a6v92P0EbJc&t=83s", "title": "Neural Architecture Search without Training (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a6v92P0EbJc/maxresdefault.jpg"} {"video_id": "a6v92P0EbJc", "text": "convolutional layers, each one having like a ReLU non-linearity, but you could also say, well,
I'm going to build like a skip connection from here to here you could also say that I'm going to down sample by you could have maybe a bigger stride and so on so the kernel size of the convolution you can vary until now people have done this by hand right in effect we all use like the same 10 to 20 different architectures so if it's an image problem we tend to go for like a ResNet or a Wide ResNet or a VGG-style architecture someone has come up", "start_timestamp": "00:02:00", "end_timestamp": "00:02:39", "start_second": 120, "end_second": 159, "url": "https://www.youtube.com/watch?v=a6v92P0EbJc&t=120s", "title": "Neural Architecture Search without Training (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a6v92P0EbJc/maxresdefault.jpg"} {"video_id": "a6v92P0EbJc", "text": "with those at some point with each of those discover that it works well and we don't really do much exploration we simply kind of use the same things over and over and the truth is that there might be much better architectures that we're simply not exploring right there might be much better building plans for networks that we don't know of that might perform a lot better with the same data and the same training so neural architecture search is the process of automatically searching for these better architectures of course that's a", "start_timestamp": "00:02:39", "end_timestamp": "00:03:14", "start_second": 159, "end_second": 194, "url": "https://www.youtube.com/watch?v=a6v92P0EbJc&t=159s", "title": "Neural Architecture Search without Training (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a6v92P0EbJc/maxresdefault.jpg"} {"video_id": "a6v92P0EbJc", "text": "combinatorial problem but the idea is that you know you can actually learn to construct good architectures and by doing so you can sort of speed up this process that is manual otherwise and the idea behind it is that there is some regularity of when an architecture is good there's some like high level pattern that you
as a human maybe cannot really grasp but like a machine can figure out which architectures are good and which ones aren't so there have been a few inventions in this area but they are mostly costly that's what", "start_timestamp": "00:03:14", "end_timestamp": "00:03:53", "start_second": 194, "end_second": 233, "url": "https://www.youtube.com/watch?v=a6v92P0EbJc&t=194s", "title": "Neural Architecture Search without Training (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a6v92P0EbJc/maxresdefault.jpg"} {"video_id": "a6v92P0EbJc", "text": "they say here the time and effort involved in hand designing deep neural networks is immense this has prompted development of neural architecture search techniques to automate this design however neural architecture search algorithms tend to be extremely slow and expensive they need to train vast numbers of candidate networks to inform the search process so what neural architecture search methods do is they'll have something like a controller and the controller itself of course is going to be a neural network", "start_timestamp": "00:03:53", "end_timestamp": "00:04:27", "start_second": 233, "end_second": 267, "url": "https://www.youtube.com/watch?v=a6v92P0EbJc&t=233s", "title": "Neural Architecture Search without Training (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a6v92P0EbJc/maxresdefault.jpg"} {"video_id": "a6v92P0EbJc", "text": "so there'll be this thing that will be the controller and the controller will emit like a building plan so the controller will emit like a building plan for this network right here and then you train the entire thing once through for the entire hundred thousand steps and then you observe the final validation accuracy which might be something like eighty percent and then you know okay this is eighty percent so you feed the eighty percent into your controller and the controller outputs the next building plan that it thinks", "start_timestamp": "00:04:27",
"end_timestamp": "00:05:01", "start_second": 267, "end_second": 301, "url": "https://www.youtube.com/watch?v=a6v92P0EbJc&t=267s", "title": "Neural Architecture Search without Training (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a6v92P0EbJc/maxresdefault.jpg"} {"video_id": "a6v92P0EbJc", "text": "will score higher and then you train the entire thing again and you maybe observe a 70% accuracy you again feed that in right and the controller realizes oh I may have done something wrong let me try something else and does it again if this looks like reinforcement learning to you that's because this is reinforcement learning so you can really see here the controller would be the agent the percentages here the accuracies would be the reward and the emissions would basically be this thing here this thing would be the actions but sometimes it's", "start_timestamp": "00:05:01", "end_timestamp": "00:05:38", "start_second": 301, "end_second": 338, "url": "https://www.youtube.com/watch?v=a6v92P0EbJc&t=301s", "title": "Neural Architecture Search without Training (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a6v92P0EbJc/maxresdefault.jpg"} {"video_id": "a6v92P0EbJc", "text": "the observations and you need to score the different things okay so the problem of course with this is that the reinforcement learning requires a lot of data it requires a lot of steps to converge because the signal from the reward is just so weak you simply get one number for your action and you don't know what you can change to make it better you simply have to try so you need a lot of steps but this thing here is mighty slow because each single step in your reinforcement learning procedure involves training an entire", "start_timestamp": "00:05:38", "end_timestamp": "00:06:14", "start_second": 338, "end_second": 374, "url": "https://www.youtube.com/watch?v=a6v92P0EbJc&t=338s", "title": "Neural Architecture Search without Training (Paper Explained)", "thumbnail":
"https://i.ytimg.com/vi/a6v92P0EbJc/maxresdefault.jpg"} {"video_id": "a6v92P0EbJc", "text": "neural network for like this many steps ok so all of this is ginormously slow and resource intensive and that of course blocks a lot of research because you know we started with the plan to automate this part right here but automating it itself is super expensive so they go for a different solution they say this could be avoided if we could infer a network's trained accuracy from its initial state okay it seems a bit out there but let's give them the benefit of the doubt in", "start_timestamp": "00:06:14", "end_timestamp": "00:06:57", "start_second": 374, "end_second": 417, "url": "https://www.youtube.com/watch?v=a6v92P0EbJc&t=374s", "title": "Neural Architecture Search without Training (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a6v92P0EbJc/maxresdefault.jpg"} {"video_id": "a6v92P0EbJc", "text": "this work we examine how the linear maps induced by data points correlate for untrained network architectures in the NAS-Bench-201 search space and motivate how this can be used to give a measure of modeling flexibility which is highly indicative of a network's trained performance we incorporate this measure into a simple algorithm that allows us to search for powerful networks without any training in a matter of seconds on a single GPU okay and they have the code available right here if you want to go and check that out so let's go in let's", "start_timestamp": "00:06:57", "end_timestamp": "00:07:35", "start_second": 417, "end_second": 455, "url": "https://www.youtube.com/watch?v=a6v92P0EbJc&t=417s", "title": "Neural Architecture Search without Training (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a6v92P0EbJc/maxresdefault.jpg"} {"video_id": "a6v92P0EbJc", "text": "go into that the claims are pretty big and the reasoning behind the claims is the following observation you can already sort of see in this graphic right here
we'll go over what it means in one second but what they do is they take different networks in this search space and the search space in this case is given by this benchmark so this benchmark basically has a long list I think of architectures that you could consider actually so it's a constructive list so they don't actually give you the list but they give you like", "start_timestamp": "00:07:35", "end_timestamp": "00:08:11", "start_second": 455, "end_second": 491, "url": "https://www.youtube.com/watch?v=a6v92P0EbJc&t=455s", "title": "Neural Architecture Search without Training (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a6v92P0EbJc/maxresdefault.jpg"} {"video_id": "a6v92P0EbJc", "text": "a way to construct architectures and they took those architectures and they rank them by how well they score on CIFAR-10 so there are very good architectures which are here there are good ones there are mediocre ones and then the bad ones okay and you can see that the histograms here of whatever they measure they look quite different so the histograms of the good ones are all kind of spiky around zero and the histograms of the bad ones all sort of look spread out so this is the measure that they're going to propose is they have some sort", "start_timestamp": "00:08:11", "end_timestamp": "00:08:48", "start_second": 491, "end_second": 528, "url": "https://www.youtube.com/watch?v=a6v92P0EbJc&t=491s", "title": "Neural Architecture Search without Training (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a6v92P0EbJc/maxresdefault.jpg"} {"video_id": "a6v92P0EbJc", "text": "of number some sort of histogram that they produce and if the histogram is very spiky and close together around zero then they conclude that this network is good and if the histogram is very spread out like this they conclude that the network is bad now these histograms as you might expect they are computed not from the final trained network but they are computed from the initial
network so here they show at least you know in this case it seems to be that there is a general correlation between the trained accuracy and how", "start_timestamp": "00:08:48", "end_timestamp": "00:09:27", "start_second": 528, "end_second": 567, "url": "https://www.youtube.com/watch?v=a6v92P0EbJc&t=528s", "title": "Neural Architecture Search without Training (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a6v92P0EbJc/maxresdefault.jpg"} {"video_id": "a6v92P0EbJc", "text": "this histogram looks and we're going to explore what they do so it's essentially pretty easy they compute the linear map around each data point so what is that if you imagine a neural network as a nonlinear function which I guess you should because it is so let's imagine it as like a nonlinear function from X to Y what they'll do is simply they'll look at a given training data point which could be here right this could be the X and this could be the Y and in fact let's look at it in loss landscape not even in Y but", "start_timestamp": "00:09:27", "end_timestamp": "00:10:12", "start_second": 567, "end_second": 612, "url": "https://www.youtube.com/watch?v=a6v92P0EbJc&t=567s", "title": "Neural Architecture Search without Training (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a6v92P0EbJc/maxresdefault.jpg"} {"video_id": "a6v92P0EbJc", "text": "in L in terms of the loss because we don't need necessarily a single label this could be for unsupervised this could be for anything okay so it maps a data point to a loss now what we'll do is we'll simply linearize the function around that point which means we'll just freeze all the nonlinearities in place and that will give us this linear function right here okay we just observe that this linear function can exist it's the tangent to the loss landscape and it's at a particular data point right it's in data space not in weight", "start_timestamp": "00:10:12", "end_timestamp": "00:10:47", "start_second": 612, "end_second":
647, "url": "https://www.youtube.com/watch?v=a6v92P0EbJc&t=612s", "title": "Neural Architecture Search without Training (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a6v92P0EbJc/maxresdefault.jpg"} {"video_id": "a6v92P0EbJc", "text": "space then we look at a different data point so we look at this data point right here another data point what's the linear function around this one it is sort of like whoops it is like that and then around this one is like this okay so this is one function now let's look at a different function right here so L of X and we'll look at this function the linear function okay so for some reason this is like this and if we consider two data points their linearization is very similar now imagine that these two have been produced by the same sort of neural", "start_timestamp": "00:10:47", "end_timestamp": "00:11:38", "start_second": 647, "end_second": 698, "url": "https://www.youtube.com/watch?v=a6v92P0EbJc&t=647s", "title": "Neural Architecture Search without Training (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a6v92P0EbJc/maxresdefault.jpg"} {"video_id": "a6v92P0EbJc", "text": "networks it's just the architecture is a little different but they have been produced like they have the same number of parameters in the neural network which neural network would you prefer remember you can by training the neural network actually shape this loss function you can kind of shape that around so which one would you prefer I personally would prefer the top one because the top one already tells me that hey you know I might have 10 parameters here and this already sort of looks like each of the 10 parameters is", "start_timestamp": "00:11:38", "end_timestamp": "00:12:10", "start_second": 698, "end_second": 730, "url": "https://www.youtube.com/watch?v=a6v92P0EbJc&t=698s", "title": "Neural Architecture Search without Training (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a6v92P0EbJc/maxresdefault.jpg"} {"video_id":
"a6v92P0EbJc", "text": "doing something so if I then go into my 10 parameters and I you know turn this knob right here then I might you know up this bump or down this bump or do something with it but the sort of frequencies curvature the randomness of the function the way that it fluctuates tells me that all of the different parameters must have some sort of effect right because it's quite an expressive function whereas if I have the same number of parameters for a function like this this sort of tells me well maybe only", "start_timestamp": "00:12:10", "end_timestamp": "00:12:47", "start_second": 730, "end_second": 767, "url": "https://www.youtube.com/watch?v=a6v92P0EbJc&t=730s", "title": "Neural Architecture Search without Training (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a6v92P0EbJc/maxresdefault.jpg"} {"video_id": "a6v92P0EbJc", "text": "one of the weights is actually doing something maybe only one of the dimensions is doing something this seems odd right that even though I've initialized it randomly a super regular function like this comes out so maybe all of these parameters down here they don't do anything or somehow the signal doesn't get through now they don't explicitly say it in these terms but this is how I make sense of this what they're saying is that if you look at the linearizations of the function and you look at the angle right here", "start_timestamp": "00:12:47", "end_timestamp": "00:13:26", "start_second": 767, "end_second": 806, "url": "https://www.youtube.com/watch?v=a6v92P0EbJc&t=767s", "title": "Neural Architecture Search without Training (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a6v92P0EbJc/maxresdefault.jpg"} {"video_id": "a6v92P0EbJc", "text": "so the angle in this case is that and in this case is that and in this case is that so you look at the slope here and the slope is basically the gradient of these linearized functions and what you want to do
is you want to look at the correlation between those of the different data points so here you have three angles one is very short one is a bit longer like this or no even like this and one is even over ninety degrees like that they are not correlated at all right they're all very different however the angles here they're all", "start_timestamp": "00:13:26", "end_timestamp": "00:14:08", "start_second": 806, "end_second": 848, "url": "https://www.youtube.com/watch?v=a6v92P0EbJc&t=806s", "title": "Neural Architecture Search without Training (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a6v92P0EbJc/maxresdefault.jpg"} {"video_id": "a6v92P0EbJc", "text": "quite the same as you can see so what they propose is the following let's send all the data points or in that case all the data points in a particular mini-batch let's send them through the function and let's calculate their linearizations so the linearization is nothing else than you send them through the network to obtain the F value for the x value and then you calculate the gradient with respect to the input now you have to get used to this a bit because usually we calculate the gradient with respect to the weight but", "start_timestamp": "00:14:08", "end_timestamp": "00:14:44", "start_second": 848, "end_second": 884, "url": "https://www.youtube.com/watch?v=a6v92P0EbJc&t=848s", "title": "Neural Architecture Search without Training (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a6v92P0EbJc/maxresdefault.jpg"} {"video_id": "a6v92P0EbJc", "text": "now we calculate the gradient with respect to the input which if this is a linear function so if you have a linear f of X equals WX then this gradient del F del X would just give you the W it will give you the slope of the linear function and the same in the neural network when you linearize it alright so we're going to obtain all these linearizations and that gives us this matrix J right here and what we can do is we can
then observe the covariance matrix of J of all these linearizations the covariance", "start_timestamp": "00:14:44", "end_timestamp": "00:15:26", "start_second": 884, "end_second": 926, "url": "https://www.youtube.com/watch?v=a6v92P0EbJc&t=884s", "title": "Neural Architecture Search without Training (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a6v92P0EbJc/maxresdefault.jpg"} {"video_id": "a6v92P0EbJc", "text": "matrix simply tells you how two data points vary with each other and in fact they don't look at the covariance matrix but they look at the correlation matrix which is simply the scaled covariance matrix so one entry in this covariance matrix so you have n data points and this gives you a matrix that's n by n and that particular entry here like the entry IJ would simply state how does the angle of data point I correlate with the angle of data point J okay that's the covariance matrix and now the hypothesis is if all", "start_timestamp": "00:15:26", "end_timestamp": "00:16:09", "start_second": 926, "end_second": 969, "url": "https://www.youtube.com/watch?v=a6v92P0EbJc&t=926s", "title": "Neural Architecture Search without Training (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a6v92P0EbJc/maxresdefault.jpg"} {"video_id": "a6v92P0EbJc", "text": "of these data points are sort of independent like in our very expressive function here then these correlations should not be high in fact most data points should be rather uncorrelated however in this case right here if the function is sort of degenerate or something not very expressive then all of these angles or these linearizations should be highly correlated and that's what you see in this graph right here this right here now is the histogram of the correlations between", "start_timestamp": "00:16:09", "end_timestamp": "00:16:48", "start_second": 969, "end_second": 1008, "url": "https://www.youtube.com/watch?v=a6v92P0EbJc&t=969s",
"title": "Neural Architecture Search without Training (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a6v92P0EbJc/maxresdefault.jpg"} {"video_id": "a6v92P0EbJc", "text": "local linear maps across all pairs of items in a mini batch of CIFAR-10 training data each plot is a histogram for a single untrained NAS-Bench-201 architecture so remember the expressivity is important because we want to train that function and therefore it's important that every parameter does something and if it's degenerate we can't train it well and that's I find the reasoning they sort of say this but I might make the wrong sense out of it here but it seems to me like that's what's actually going on so you can see", "start_timestamp": "00:16:48", "end_timestamp": "00:17:25", "start_second": 1008, "end_second": 1045, "url": "https://www.youtube.com/watch?v=a6v92P0EbJc&t=1008s", "title": "Neural Architecture Search without Training (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a6v92P0EbJc/maxresdefault.jpg"} {"video_id": "a6v92P0EbJc", "text": "this is simply these matrix values rolled out and then plotted as a histogram so what does it mean when the histogram is like super spread out like this it means that there are a lot and I think down here are the axes yes there are a lot of data points that correlate highly or anti correlate highly with each other okay which means that exactly this degeneracy happens either too high or too negative high correlation means that they're very much kind of the same thing so if you have as many parameters as", "start_timestamp": "00:17:25", "end_timestamp": "00:17:59", "start_second": 1045, "end_second": 1079, "url": "https://www.youtube.com/watch?v=a6v92P0EbJc&t=1045s", "title": "Neural Architecture Search without Training (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a6v92P0EbJc/maxresdefault.jpg"} {"video_id": "a6v92P0EbJc", "text": "data points that means that one parameter can
potentially serve these two data points or these two that are correlated by one or negative one you don't need both parameters and therefore you have a lot of parameters doing nothing whereas over here with the good networks you can see that this spikes around zero meaning that the data points are not correlated or the linearizations around the data points are not correlated and therefore you can sort of shape the function around each data point however you want which we sort of", "start_timestamp": "00:17:59", "end_timestamp": "00:18:35", "start_second": 1079, "end_second": 1115, "url": "https://www.youtube.com/watch?v=a6v92P0EbJc&t=1079s", "title": "Neural Architecture Search without Training (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a6v92P0EbJc/maxresdefault.jpg"} {"video_id": "a6v92P0EbJc", "text": "know that neural networks what they do is they're so over expressive that they're actually able to shape the functions around the data points without necessarily looking at other data points nearby and that expressivity is what you want and that expressivity is what this in part measures okay so they have some experiments here where they validate this so for all these architectures in this benchmark maybe I should show you what the benchmark looks like so the benchmark has this particular form this", "start_timestamp": "00:18:35", "end_timestamp": "00:19:13", "start_second": 1115, "end_second": 1153, "url": "https://www.youtube.com/watch?v=a6v92P0EbJc&t=1115s", "title": "Neural Architecture Search without Training (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a6v92P0EbJc/maxresdefault.jpg"} {"video_id": "a6v92P0EbJc", "text": "particular form there's this skeleton and in this skeleton there is this block and it's always repeated and basically your task is to determine what this block should be so this block has an input node A and an output node D and two intermediate nodes and what you have to do
is basically you have to determine these connections right here so there are six connections and for each one you have the option of putting different things there like you can see you can put a convolution you can put the identity function which is a skip", "start_timestamp": "00:19:13", "end_timestamp": "00:19:43", "start_second": 1153, "end_second": 1183, "url": "https://www.youtube.com/watch?v=a6v92P0EbJc&t=1153s", "title": "Neural Architecture Search without Training (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a6v92P0EbJc/maxresdefault.jpg"} {"video_id": "a6v92P0EbJc", "text": "connection zeroize I don't maybe that's the zero function so it basically means nothing I'm not so sure honestly but you could technically put a convolution here and here right or different convolutions or things like this so there are these 15,625 possible cells okay so the NAS benchmark contains 15,625 possible architectures that you'll have to search and they take these architectures and they plot for each architecture the validation accuracy after training and the training protocol", "start_timestamp": "00:19:43", "end_timestamp": "00:20:28", "start_second": 1183, "end_second": 1228, "url": "https://www.youtube.com/watch?v=a6v92P0EbJc&t=1183s", "title": "Neural Architecture Search without Training (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a6v92P0EbJc/maxresdefault.jpg"} {"video_id": "a6v92P0EbJc", "text": "is standardized you don't have to care about that right and the score that they measure at the beginning of training and what you can see is that there is a linear relationship sort of like from these experiments what you'll get is this sort of feeling what they're gonna propose is that you should take that score as a measure and here again also sort of there is a clear trend as you can see right here though yeah though as you can see this sort of spreads out and
the rightmost one is", "start_timestamp": "00:20:28", "end_timestamp": "00:21:09", "start_second": 1228, "end_second": 1269, "url": "https://www.youtube.com/watch?v=a6v92P0EbJc&t=1228s", "title": "Neural Architecture Search without Training (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a6v92P0EbJc/maxresdefault.jpg"} {"video_id": "a6v92P0EbJc", "text": "ImageNet which is the most difficult one of course so and this is CIFAR-100 which is more difficult than CIFAR-10 so we can see that this sort of relationship at the top doesn't really hold anymore if the task gets difficult and this is so what I think is happening this is kind of an interjection of my own opinion what's happening here is that this score that they discover allows them pretty efficiently to see which networks are just degenerate and cannot be trained like if you try to train them they just perform really", "start_timestamp": "00:21:09", "end_timestamp": "00:21:48", "start_second": 1269, "end_second": 1308, "url": "https://www.youtube.com/watch?v=a6v92P0EbJc&t=1269s", "title": "Neural Architecture Search without Training (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a6v92P0EbJc/maxresdefault.jpg"} {"video_id": "a6v92P0EbJc", "text": "poorly okay so it's probably a very good score for weeding those out and that would mean if you kind of put a bar here somewhere right you could just discard a whole lot of this crap or even here right you could just discard a whole lot of this crap and also now here just you know all of this crap yeah whereas here as you can see some of these scores are sometimes higher than these ones even though they perform better and again you could probably discard a lot of the crap but it's not as distinctive for the well performing", "start_timestamp": "00:21:48", "end_timestamp": "00:22:26", "start_second": 1308, "end_second": 1346, "url": "https://www.youtube.com/watch?v=a6v92P0EbJc&t=1308s", "title": "Neural Architecture Search without Training
(Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a6v92P0EbJc/maxresdefault.jpg"} {"video_id": "a6v92P0EbJc", "text": "networks because these here are all not the degenerate version right they're not degenerate in the sense that they have some fundamental flaw where the function lacks expressivity from the very start so you can't train it and then probably other factors come into play other factors than you can simply determine with this particular score but you know there is this relationship that's you know you can see that and they do some ablations on this here for example whether the score is a proxy for the number of parameters", "start_timestamp": "00:22:26", "end_timestamp": "00:23:02", "start_second": 1346, "end_second": 1382, "url": "https://www.youtube.com/watch?v=a6v92P0EbJc&t=1346s", "title": "Neural Architecture Search without Training (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a6v92P0EbJc/maxresdefault.jpg"} {"video_id": "a6v92P0EbJc", "text": "and they say no the number of parameters works way worse than this particular score which is always a cool thing then how important is a specific mini-batch and initialization and they say look right here we for some architectures we do different mini batch sizes and you can see each of those groups they don't vary too much in how it influences their score this is I believe the same architecture so it's always an architecture that achieves in this case for example wow that's not a straight line 77% or so", "start_timestamp": "00:23:02", "end_timestamp": "00:23:41", "start_second": 1382, "end_second": 1421, "url": "https://www.youtube.com/watch?v=a6v92P0EbJc&t=1382s", "title": "Neural Architecture Search without Training (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a6v92P0EbJc/maxresdefault.jpg"} {"video_id": "a6v92P0EbJc", "text": "and you can see if you go for different mini batches the score varies only minimally initialization is a
bigger variance inducing thing but also here the scores don't vary too much but it is interesting that the different initializations get you different scores because it would directly support kind of my hypothesis now what's going on here is that you sort of measure initial degeneracies and you can sort of make up for these initial degeneracies in the architecture sometimes with sort of a different initialization so the", "start_timestamp": "00:23:41", "end_timestamp": "00:24:18", "start_second": 1421, "end_second": 1458, "url": "https://www.youtube.com/watch?v=a6v92P0EbJc&t=1421s", "title": "Neural Architecture Search without Training (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a6v92P0EbJc/maxresdefault.jpg"} {"video_id": "a6v92P0EbJc", "text": "different initializations give you differently performing networks we already know this from things like you know the lottery ticket hypothesis and so on that the initialization can matter to some degree in these types of things now that being said they always train to the same it seems but their score varies so I might be backwards correct here or not correct but in any case the initialization here matters more but also you can still see this linear relationship and this is particularly interesting this is even the case when", "start_timestamp": "00:24:18", "end_timestamp": "00:24:57", "start_second": 1458, "end_second": 1497, "url": "https://www.youtube.com/watch?v=a6v92P0EbJc&t=1458s", "title": "Neural Architecture Search without Training (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a6v92P0EbJc/maxresdefault.jpg"} {"video_id": "a6v92P0EbJc", "text": "you just input white noise so instead of the data you measure that score by just inputting noise that I guess has some sort of the same magnitude as the data would have but it's just noise and you can still sort of see this linear relationship which is very interesting and that I think also shows that what you find
is a property of the network itself and the fact that it is initialized and built in such a way that it allows you to train it in a sort of a benign manner it has no degeneracies", "start_timestamp": "00:24:57", "end_timestamp": "00:25:37", "start_second": 1497, "end_second": 1537, "url": "https://www.youtube.com/watch?v=a6v92P0EbJc&t=1497s", "title": "Neural Architecture Search without Training (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a6v92P0EbJc/maxresdefault.jpg"} {"video_id": "a6v92P0EbJc", "text": "okay so in the last experiment they go here and they say we evaluated the score on initialized networks in the pytorchcv library so they go to this library that has a lot of these networks but these networks are not the same as this benchmark this benchmark is specifically designed to do architecture search now the networks in this library they are all designed to perform really well some are designed to be quite small some are designed to be quite fast and so on but in general their goal is to perform well and they have been sort", "start_timestamp": "00:25:37", "end_timestamp": "00:26:16", "start_second": 1537, "end_second": 1576, "url": "https://www.youtube.com/watch?v=a6v92P0EbJc&t=1537s", "title": "Neural Architecture Search without Training (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a6v92P0EbJc/maxresdefault.jpg"} {"video_id": "a6v92P0EbJc", "text": "of found by humans to perform well so they take now these networks on CIFAR-10 and they test them so as you can see here here is the test accuracy again and here is their score that they give it and they say rip it up put it up now I can't move this anymore hello well okay they say that this linear relationship still sort of holds it doesn't hold super well but you can still sort of if you squint if you squint hard you can see that it sort of goes upward though you really have to squint hard like what are these things", "start_timestamp":
"00:26:16", "end_timestamp": "00:27:03", "start_second": 1576, "end_second": 1623, "url": "https://www.youtube.com/watch?v=a6v92P0EbJc&t=1576s", "title": "Neural Architecture Search without Training (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a6v92P0EbJc/maxresdefault.jpg"} {"video_id": "a6v92P0EbJc", "text": "right here and what again what's the case is that if the score is low you will sort of be able to cut off the cut of the worst-performing ones but really at the top here it doesn't seem like there is a particular relation between between these networks and this initial score which sort of strengthens my hypothesis that what this does is just kind of weed out the bad ones but it's pretty cool because you can weed out the bad ones without any training right it's simply forward prop backward prop there you have it so cool now they come they", "start_timestamp": "00:27:03", "end_timestamp": "00:27:47", "start_second": 1623, "end_second": 1667, "url": "https://www.youtube.com/watch?v=a6v92P0EbJc&t=1623s", "title": "Neural Architecture Search without Training (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a6v92P0EbJc/maxresdefault.jpg"} {"video_id": "a6v92P0EbJc", "text": "here is the experiment where they now really do this na s benchmark and they compare with other methods so some of these other methods are designed to do they call it weight sharing which basically is a technique where you can sort of speed up the speed up the algorithm as compared to non weight sharing and the non weigh cheering that's one of these we have discussed initially that was my initial example with the controller and so on where it takes super long so here you see the method and how long each method takes", "start_timestamp": "00:27:47", "end_timestamp": "00:28:23", "start_second": 1667, "end_second": 1703, "url": "https://www.youtube.com/watch?v=a6v92P0EbJc&t=1667s", "title": "Neural Architecture Search without Training (Paper Explained)", "thumbnail": 
"https://i.ytimg.com/vi/a6v92P0EbJc/maxresdefault.jpg"} {"video_id": "a6v92P0EbJc", "text": "now the best ones as you can see already the best ones here are these these methods right here are the best ones they score somewhat like a 93.9 or so on C 410 whereas these weight sharing ones they don't perform too well except this one seems to perform quite well and in this hour's case they perform worse than that but they still perform better than a lot of the weight sharing once so what their point is basically is that they get a pretty good score which is a ninety one point five on C for ten which is you know it's at least not", "start_timestamp": "00:28:23", "end_timestamp": "00:29:05", "start_second": 1703, "end_second": 1745, "url": "https://www.youtube.com/watch?v=a6v92P0EbJc&t=1703s", "title": "Neural Architecture Search without Training (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a6v92P0EbJc/maxresdefault.jpg"} {"video_id": "a6v92P0EbJc", "text": "degenerate it's a it's a good accuracy they score that with simply evaluating ten architectures right and as n goes up as they evaluate more and more architectures they do they do get better but not much so they have a discussion here I'm having trouble moving this all right so we'll sort of go through the discussion we report results yada yada yada yada as the set up the non weight sharing methods are given a time budget of twelve thousand seconds for our method and the non weight sharing methods are averaged accuracies or averaged over 500", "start_timestamp": "00:29:05", "end_timestamp": "00:29:47", "start_second": 1745, "end_second": 1787, "url": "https://www.youtube.com/watch?v=a6v92P0EbJc&t=1745s", "title": "Neural Architecture Search without Training (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a6v92P0EbJc/maxresdefault.jpg"} {"video_id": "a6v92P0EbJc", "text": "runs for weight sharing methods accuracies are reported over three runs with the exception of G das our method is able to 
outperform all the way chairing methods while requiring a fraction of the search time and that you maybe see at the table this is the real I mean this is the real deal here they only use here one point seven seconds compared to the twelve thousand seconds of the other methods and you reach almost the same accuracy now to be said two percent in this particular regime on C 410 is still a sizable difference and", "start_timestamp": "00:29:47", "end_timestamp": "00:30:22", "start_second": 1787, "end_second": 1822, "url": "https://www.youtube.com/watch?v=a6v92P0EbJc&t=1787s", "title": "Neural Architecture Search without Training (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a6v92P0EbJc/maxresdefault.jpg"} {"video_id": "a6v92P0EbJc", "text": "that's the same benchmark right with the same sort of the same training schedule and so on so there's not too much room to tune here you simply have to find a better architecture so these things are still sizably ahead of this and what it appears to me that these methods here that don't perform well they're they're simply crap it seems they're simply I don't I don't know but they might be trying out something or you know doing something researchy or whatnot but it seems like if you're well able to weed out the bad architectures you might be", "start_timestamp": "00:30:22", "end_timestamp": "00:31:03", "start_second": 1822, "end_second": 1863, "url": "https://www.youtube.com/watch?v=a6v92P0EbJc&t=1822s", "title": "Neural Architecture Search without Training (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a6v92P0EbJc/maxresdefault.jpg"} {"video_id": "a6v92P0EbJc", "text": "getting to a score like this and then if you are actually performing a search to find the best one then you might be getting to somewhere like this and you can see this here throughout so in C for 100 they achieve a better score than these things but a worse score than the non weight sharing method and an image net it gets even the difference is 
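The training-free weed-out idea discussed here can be sketched in code. This is only a rough illustration in the spirit of the paper, not its exact method, and the network shapes, batch size, and kernel details below are assumptions for the sketch: an untrained ReLU network is scored by how distinctly its binary activation patterns separate a batch of inputs, via the log-determinant of a pattern-similarity kernel, with no training at all.

```python
import numpy as np

def relu_codes(x, weights):
    # Forward a batch through an untrained ReLU MLP, recording the binary
    # ReLU on/off pattern ("activation code") of every unit for each input.
    codes, h = [], x
    for W in weights:
        h = h @ W
        codes.append(h > 0)
        h = np.maximum(h, 0)
    return np.concatenate(codes, axis=1)  # shape: (batch, total_units)

def training_free_score(x, weights):
    # K[i, j] counts the units where inputs i and j share the same on/off
    # state; log|det K| is large when the untrained network already maps
    # the inputs to distinct activation patterns.
    c = relu_codes(x, weights).astype(np.float64)
    n_units = c.shape[1]
    hamming = c @ (1 - c).T + (1 - c) @ c.T   # pairwise mismatch counts
    k = n_units - hamming                     # pairwise match counts
    sign, logdet = np.linalg.slogdet(k)
    return logdet

rng = np.random.default_rng(0)
x = rng.normal(size=(16, 32))                         # a small input batch
weights = [rng.normal(size=(32, 64)) / np.sqrt(32),   # random untrained net
           rng.normal(size=(64, 64)) / np.sqrt(64)]
score = training_free_score(x, weights)
```

As the video notes, a score like this is cheap enough (one forward pass per candidate) to rank many architectures in seconds and discard the obviously bad ones before any training.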
even larger so again what I can see here is that this is a good method to maybe get you like let's say 90% of the way you want to go and what's interesting is that here they say we", "start_timestamp": "00:31:03", "end_timestamp": "00:31:48", "start_second": 1863, "end_second": 1908, "url": "https://www.youtube.com/watch?v=a6v92P0EbJc&t=1863s", "title": "Neural Architecture Search without Training (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a6v92P0EbJc/maxresdefault.jpg"} {"video_id": "a6v92P0EbJc", "text": "also show the effect of sample size we show the accuracy of the networks chosen by our method for each N so that's the sample size we list the optimal accuracy for sample sizes 10 and 100 and random selection over the whole benchmark so in this case they have the optimal one which I guess they just draw 10 samples and then take the best one so they train all of them and then take the best one you can see that already gets you to the 93 whereas in their case sometimes when they add more they get worse so here", "start_timestamp": "00:31:48", "end_timestamp": "00:32:21", "start_second": 1908, "end_second": 1941, "url": "https://www.youtube.com/watch?v=a6v92P0EbJc&t=1908s", "title": "Neural Architecture Search without Training (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a6v92P0EbJc/maxresdefault.jpg"} {"video_id": "a6v92P0EbJc", "text": "they get better but then they get worse again so they comment on this right here we observe that the sample size does not have a large effect on the accuracy of our method but note that as sample size increases our method suffers from a small amount of noise increasing the gap between our score and the optimal result and of course the key practical benefit is execution time so again they are massively faster than the other methods but to me it seems you could just think of combining these methods right you combine this with this in that what you", "start_timestamp": "00:32:21", "end_timestamp": "00:33:01", "start_second": 1941, "end_second": 1981, "url": "https://www.youtube.com/watch?v=a6v92P0EbJc&t=1941s", "title": "Neural Architecture Search without Training (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a6v92P0EbJc/maxresdefault.jpg"} {"video_id": "a6v92P0EbJc", "text": "want to do is actually actively search for the best ones but if you could pretty quickly weed out the bad ones using this method down here you might already have like a big speedup because again in comparison to these random ones what appears to happen is that they get good at finding you know your 90% architecture but then they fail to differentiate the top performers from each other where you'd really have to train the network to find out which one's better", "start_timestamp": "00:33:01", "end_timestamp": "00:33:40", "start_second": 1981, "end_second": 2020, "url": "https://www.youtube.com/watch?v=a6v92P0EbJc&t=1981s", "title": "Neural Architecture Search without Training (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a6v92P0EbJc/maxresdefault.jpg"} {"video_id": "a6v92P0EbJc", "text": "so yeah here they say they visualize the trade-off between search time and accuracy for CIFAR-10 for different NAS algorithms on the NAS benchmark by removing the need for training our method is able to find accurate networks in seconds instead of hours and here you can see the accuracy and here you can see the time and all the good ones are either way over here or here and theirs is almost at zero while being quite close to the accuracy of the other ones all right yeah that was this paper again I think this is pretty", "start_timestamp": "00:33:40", "end_timestamp": "00:34:20", "start_second": 2020, "end_second": 2060, "url": "https://www.youtube.com/watch?v=a6v92P0EbJc&t=2020s", "title": "Neural Architecture Search without Training (Paper Explained)", "thumbnail":
"https://i.ytimg.com/vi/a6v92P0EbJc/maxresdefault.jpg"} {"video_id": "a6v92P0EbJc", "text": "valuable if you are especially if you're in a new domain where you might not know what kind of network to build you might just be able to write a little script that generates networks run it through this algorithm and at least you get an idea of which ones are certainly not worth considering and then you can simply select one of the other ones it doesn't you know often it doesn't need to be the best one and you can then tweak it a little bit manually the ones you found maybe you see some regularity and yeah that was my two cents on this", "start_timestamp": "00:34:20", "end_timestamp": "00:34:52", "start_second": 2060, "end_second": 2092, "url": "https://www.youtube.com/watch?v=a6v92P0EbJc&t=2060s", "title": "Neural Architecture Search without Training (Paper Explained)", "thumbnail": "https://i.ytimg.com/vi/a6v92P0EbJc/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "hi welcome to lecture nine of CS294-158 Deep Unsupervised Learning Spring 2020 I hope everyone had a good spring break despite the rather unusual circumstances today we will be covering two main topics semi-supervised learning and unsupervised distribution alignment before diving into that a couple of logistics the current situation and a quick mid-semester update well first we hope everyone and their families are able to keep healthy during these pretty unusual times please prioritize your health and well-being accordingly", "start_timestamp": "00:00:00", "end_timestamp": "00:00:47", "start_second": 0, "end_second": 47, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=0s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "please don't hesitate to let us know if this class would interfere with that and we'll be happy to figure something out now that we're just past the middle of the semester and there's some replanning required by the current situation here's a quick overview of what's still ahead in this class today we have lecture 9 which will cover semi-supervised learning and unsupervised distribution alignment next week we'll have lecture ten on compression which will be a live zoom lecture then at the end of next week", "start_timestamp": "00:00:47", "end_timestamp": "00:01:27", "start_second": 47, "end_second": 87, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=47s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "your final project three-page milestone reports are due this is not graded but it's a great way to get feedback and make sure you're on track for a good final project and we'll try to give you feedback in your Google Doc that you share with us on pretty short turnaround then we'll have lecture 11 on language models with a guest instructor Alec Radford from OpenAI then we'll have our midterm which we will adjust to the current circumstances we'll see how we do it but the high-level promise will be similar in that we want to cover the main", "start_timestamp": "00:01:27", "end_timestamp": "00:02:08", "start_second": 87, "end_second": 128, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=87s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "derivations that we've seen in this semester and have you be able to re-derive those then we'll have our final regular lecture lecture 12 on representation learning in reinforcement learning that will also be a live zoom lecture and recording then there's RRR week which hopefully gives you time to catch up on a lot of things you know hopefully including making the extra push on the final project for this class and then during finals week on Wednesday the 13th there's final project presentations we'll see how to do that", "start_timestamp": "00:02:08", "end_timestamp": "00:02:41", "start_second": 128, "end_second": 161, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=128s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "with the new situation and final project reports will be due at that time as well with all the logistics covered let's dive into the technical content for today so we'll cover first semi-supervised learning which Aravind will cover and then we'll cover unsupervised distribution alignment which will be covered by Peter Chen and actually we'll use the lecture he gave last year also for this year welcome to lecture 9a of Deep Unsupervised Learning in this part of the lecture we'll be covering semi-supervised learning so first to", "start_timestamp": "00:02:41", "end_timestamp": "00:03:25", "start_second": 161, "end_second": 205, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=161s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "understand what semi-supervised learning is let's look at supervised learning in supervised learning you have a joint distribution of data points and labels and you sample from the joint distribution in expectation over the samples from the joint distribution your goal is to maximize the log probability of the classifier log P of Y given X we all know how to do this as far as procedurally how it's done you basically sample an image and label or like a sequence and
particular label or any pair of x and y from your data set assuming they're all", "start_timestamp": "00:03:25", "end_timestamp": "00:04:08", "start_second": 205, "end_second": 248, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=205s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "independent identically distributed and you don't know what the analytical form of the distribution is you assume that it is some complicated distribution and you just keep sampling multiple points repeatedly and through stochastic gradient descent optimize the objective now what is semi-supervised learning assume that you have an unlabeled data set D_u where X is sampled from P of X which is the marginal corresponding to the joint distribution that the labeled data set is sampled from so you have D_u the unlabeled data set and D_s the labeled data set and your goal", "start_timestamp": "00:04:08", "end_timestamp": "00:04:50", "start_second": 248, "end_second": 290, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=248s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "is to perform the same thing as earlier which is supervised learning on the labeled data set but with access to this extra unlabeled data that's coming from the same marginal so that's the assumption you make in semi-supervised learning which is that the unlabeled data is coming from the marginal corresponding to the same joint distribution that the supervised data is coming from and that's the mathematical assumption in practice you can't really ensure that but your goal is to make sure that you can use this extra unlabeled data to perform your", "start_timestamp": "00:04:50", "end_timestamp": "00:05:25", "start_second": 290, "end_second": 325, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=290s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "supervised learning objective even better on the labeled data so now non-mathematically here is the summary of what we just described take a task like classification a fully supervised scenario is where you have every single data point given to you in the form of an image and a label then try and predict a label for new images that's your task that's fully supervised learning now the scenario is you're going to be given a few labeled samples but you're also gonna", "start_timestamp": "00:05:25", "end_timestamp": "00:06:03", "start_second": 325, "end_second": 363, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=325s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "be given a lot of unlabeled samples now labeling is a time-consuming process and potentially very expensive and actually pretty hard in certain domains like medicine right or detecting rare events in self-driving for that matter so if you have a lot of unlabeled data your training data set can now be parametrized as having some pairs of labeled data points image and label and also a lot of other data points where you just have the image and your goal is to take these extra data points where you don't have labels", "start_timestamp": "00:06:03", "end_timestamp": "00:06:43", "start_second": 363, "end_second": 403, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=363s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "and try to improve our classifier that just works with the labeled data and how you do it is totally up to you and that basically decides what kind of semi-supervised algorithm you're gonna come up with to use and in this lecture we're going to look at how these algorithms can be designed what are the mathematical or intuitive aspects of these algorithms how they compare to each other and how they can scale to larger data sets like ImageNet and beyond so as to why we are even interested in this problem semi-supervised learning is", "start_timestamp": "00:06:43", "end_timestamp": "00:07:19", "start_second": 403, "end_second": 439, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=403s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "really important because even though you can collect labeled data very easily these days with a lot of data annotation startups it's still expensive in terms of hiring people to write annotation manuals for the actual data annotators and preparing graphical user interfaces so that all this is done really fast and making sure it's stored on the cloud efficiently and syncing from the browser to the cloud there are lots of engineering challenges involved in setting up a good data annotation tool now that's not to say we're never gonna", "start_timestamp": "00:07:19", "end_timestamp": "00:07:59", "start_second": 439, "end_second": 479, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=439s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "do it we
are still gonna do it but the goal is to make sure that we don't do it as much as we're doing it right now because we also have access to a lot of unlabeled data that we can potentially exploit and thereby maybe even improve the performance of our labeled-data systems this is similar in spirit to our goals for self-supervised learning and semi-supervised learning is a different take on this self-supervised learning can work with just unlabeled data whereas semi-supervised learning needs some labeled data and a lot of unlabeled data that's the key difference so here is a slide", "start_timestamp": "00:07:59", "end_timestamp": "00:08:35", "start_second": 479, "end_second": 515, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=479s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "from Thang Luong who took this particular picture from Vincent Vanhoucke from a blog post called The Quiet Semi-Supervised Revolution where the belief of many practitioners at least until recently was that semi-supervised learning will be really useful in a low-data regime where it's really going to be better than normal supervised learning when you hardly have any labels however once you collect a sufficient amount of labels supervised learning will catch up and eventually be much better this is why a lot of", "start_timestamp": "00:08:35", "end_timestamp": "00:09:12", "start_second": 515, "end_second": 552, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=515s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "startups don't care about semi-supervised learning because the amount of effort needed to do the research and engineering to get semi-supervised learning working especially given that it's a brand new field is a lot when all you needed was to collect the extra labeled data points and you're guaranteed a better performance anyway so that's the rationale so that's as far as the left plot goes but look at the plot on the right the dream of many semi-supervised learning researchers is that it not only is gonna be super useful in a low-data regime but it's gonna be extremely useful even in", "start_timestamp": "00:09:12", "end_timestamp": "00:09:49", "start_second": 552, "end_second": 589, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=552s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "the high-data regime because it's still gonna give you that final few percentage points of extra performance because of access to a lot of unlabeled data and learning much richer or more fine-grained classifiers because of that and that's basically what's happened recently and we'll look at the history of the field in recent times so the core concepts needed to understand semi-supervised learning are very few and we're just gonna look at them at a very high level and it's really intuitive and not hard to understand so the first principle we", "start_timestamp": "00:09:49", "end_timestamp": "00:10:27", "start_second": 589, "end_second": 627, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=589s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "
of taking on label data and making sure that the classifier training or label here has minimal entropy on unlabeled data so that way you're making sure that the classifier is confident even on", "start_timestamp": "00:10:27", "end_timestamp": "00:11:10", "start_second": 627, "end_second": 670, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=627s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "unlabeled data it's a useful bit a regularizer classifier the second idea is your labeling where you take your classifier you asked to classify to predict what the labels are for unlabeled here and you take the confident directions and convert them to extra they ask 'but was the ground truth and train the model on those data points so this idea is also referred to literature as sub training the area of training the model of its own predictions if the model is confident enough and expanding your data set and regularizing your model further", "start_timestamp": "00:11:10", "end_timestamp": "00:11:47", "start_second": 670, "end_second": 707, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=670s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "this is a little tricky because it has this reinforcement reinforcing effect of the model using its own predictions so it needs to be done very carefully so that's the caveat there and the other way to add noise to a model to regularize the model is virtual adversarial training which we really look at in detail but the idea is similar to how Ibis your training is performed for and you know absolute examples and images where you have a particular label and which you're trying to fool the classifier 
and believing", "start_timestamp": "00:11:47", "end_timestamp": "00:12:23", "start_second": 707, "end_second": 743, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=707s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "that this image corresponds that label and you're trying to add a particular noise in the direction of the gradient of the output with respect to the input Society model starts producing some other label similarly here you want to make sure that the model the same squared learning model is regularized in the directions around which tomorrow on unlabeled data you want to make sure find directions in which the classifier is likely to be confused and you want to make sure that the model is not confused in those directions so that's the area", "start_timestamp": "00:12:23", "end_timestamp": "00:12:54", "start_second": 743, "end_second": 774, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=743s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "in which we'll have a single training and there are also areas like label consistency which is make sure that the augmentations of the sample have the same class so you have an image we know that we use a lot of the documentation and regular supervised learning but in semi-supervised learning you have a lot of unlabeled data and you can't apply the augmentation to them if you're not passing them to the classifier but instead what you can do is you take an unlabeled sample it create two different orientations of it", "start_timestamp": "00:12:54", "end_timestamp": "00:13:27", "start_second": 774, "end_second": 807, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=774s", 
"title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "and you see it you tell the classifier that it's predictions on these two different augmentations of the unlabeled data should be roughly similar because even though you don't have a label you tell the classifier that whatever it's printing it should be similar and this way the classifier gets a lot of structure and loss function and parameters that are being learned from unlabeled data a lot of constrains are being imposed and therefore it's going to be much more regularized than just training on the label data so this is a", "start_timestamp": "00:13:27", "end_timestamp": "00:14:01", "start_second": 807, "end_second": 841, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=807s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "very neat idea and it's also similar in spirit to things you've seen in subsidising which is the idea of taking two different views of an image and trying to make sure that they attract each other relative to another image so basically consistency constraints embedded into your encoder and raise different ideas in the past have attempted to do this and we are going to look at them in the PI Maru temporal ensemble and mean teacher finally we're going to look at regularization which is the idea of taking a model making sure", "start_timestamp": "00:14:01", "end_timestamp": "00:14:43", "start_second": 841, "end_second": 883, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=841s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} 
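The label-consistency idea just described can be sketched as a tiny toy. This is only an illustration, not code from the lecture: the linear softmax "classifier" and the noise-based "augmentation" below are stand-ins (a real system would use crops, flips, etc.), but the loss is the Pi-Model-style consistency term: predictions on two random augmentations of the same unlabeled input are pulled together, with no label required.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

W = rng.normal(size=(8, 3))            # toy linear "classifier"
predict = lambda x: softmax(x @ W)     # class probabilities per input

def augment(x):
    # Stand-in for a real augmentation (crop, flip, color jitter, ...).
    return x + 0.05 * rng.normal(size=x.shape)

def consistency_loss(x_unlabeled):
    # Two independent augmentations of the same unlabeled batch should
    # give roughly the same predictions; penalize their squared difference.
    p1 = predict(augment(x_unlabeled))
    p2 = predict(augment(x_unlabeled))
    return ((p1 - p2) ** 2).mean()

x_u = rng.normal(size=(32, 8))         # unlabeled batch
loss = consistency_loss(x_u)           # added to the supervised loss in training
```

In practice this term is summed with the usual labeled cross-entropy, so the unlabeled data shapes the parameters exactly as the transcript describes.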
{"video_id": "PXOhi6m09bA", "text": "that it's generalizing well to a new unlabeled data set or new validation set so typically people use weight decay dropout data argumentation for making sure that the classifiers generalize well and those are also pretty important in semi-supervised learning and methods that identity use these are unsupervised it augmentation or UDA and mix-match which we look at in detail but there are also other papers that we can't really cover in the scope of the structure but you should check out other people's that are related", "start_timestamp": "00:14:43", "end_timestamp": "00:15:17", "start_second": 883, "end_second": 917, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=883s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "I mentioned this related work on these papers finally we look at the area of cool training or a self-training or pseudo Laban all these are ideas that have already been mentioned in this list of bullet points but there is a particular paper on his student which has taken these ideas to a whole new level in terms of performance and so we look at that in a little more detail so entropy minimization it's a very simple idea you have a lot of unlabeled data and you have your label here your training and classifier on the label", "start_timestamp": "00:15:17", "end_timestamp": "00:15:53", "start_second": 917, "end_second": 953, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=917s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "data but you want to make sure that the unlabeled data is also influencing the classifier in some way so one simple idea is you take your classifier and 
ask it to predict on the unlabeled data and you want to make sure that the classifier is pretty confident on the unlabeled data or rather that the entropy of the class probabilities it outputs on the unlabeled data is small enough and this way you ensure that the classifier is understanding the structure of the unlabeled data where it's", "start_timestamp": "00:15:53", "end_timestamp": "00:16:30", "start_second": 953, "end_second": 990, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=953s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "trying to be confident about it so it's trying to find a solution for the labeled data in such a manner that it will be pretty confident on the unlabeled data as well so this is one way to do semi-supervised learning a very old idea and pseudo-labeling is a very similar idea and we'll see how it's actually similar but the goal here is to take your classifier that's being trained on labeled data and ask it to predict on unlabeled data and you pick the most confident predictions and you turn them into extra labeled data as if that were the ground", "start_timestamp": "00:16:30", "end_timestamp": "00:17:10", "start_second": 990, "end_second": 1030, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=990s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "truth and you train the classifier on its own predictions so the classifier is making a bunch of predictions and those are being converted into ground-truth proxy labeled data for itself and it's going to train again on new data sets created from the unlabeled data based on itself and this principle is also
referred to as self-training and there is a connection to entropy minimization so here is the connection so consider an image x and classes y1 y2 y3 and let's say you're doing a classification problem and let's say", "start_timestamp": "00:17:10", "end_timestamp": "00:17:43", "start_second": 1030, "end_second": 1063, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=1030s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "there is a classifier A with probabilities for the output classes being point one point eight point one and a classifier B with the probabilities being point one point six and point three so classifier A clearly has lower entropy and you can say it's more confident it's more confident that the true ground truth is y2 and its score for y2 is much higher and its scores for the other two classes are more similar and lower compared to classifier B so there is clearly a connection to be made in terms of a classifier being more confident and", "start_timestamp": "00:17:43", "end_timestamp": "00:18:21", "start_second": 1063, "end_second": 1101, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=1063s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "therefore having lower entropy in the output probability distribution over the classes and therefore minimizing the entropy of your classifier on unlabeled data is akin to taking the classifier's outputs on unlabeled data if they are confident enough and training on its own predictions so it has a similar effect and mathematically it's shown in these older papers that have been linked here so you can go and check it out and the next thing we're gonna see is data augmentation for
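The classifier A versus B example and the pseudo-labeling rule above can be made concrete — a hedged NumPy sketch, with the 0.95 confidence threshold chosen arbitrarily for illustration:

```python
import numpy as np

def entropy(p):
    # Shannon entropy (in nats) of a class-probability vector
    p = np.asarray(p, dtype=float)
    return float(-np.sum(p * np.log(p + 1e-12)))

# The lecture's example: both classifiers favor y2, but A is more confident
p_a = [0.1, 0.8, 0.1]
p_b = [0.1, 0.6, 0.3]

def pseudo_labels(probs, threshold=0.95):
    # Keep only confident predictions on unlabeled data as proxy ground truth
    probs = np.asarray(probs, dtype=float)
    keep = probs.max(axis=1) >= threshold
    return np.flatnonzero(keep), probs.argmax(axis=1)[keep]

# Only the first (confident) prediction survives as a pseudo-label
idx, labels = pseudo_labels([[0.98, 0.01, 0.01],
                             [0.40, 0.30, 0.30]])
```

Training on the surviving `(idx, labels)` pairs is the self-training loop; the entropy comparison shows why confident outputs and low entropy go together.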
label consistency so you take an image let's", "start_timestamp": "00:18:21", "end_timestamp": "00:18:59", "start_second": 1101, "end_second": 1139, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=1101s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "say whether it's unlabeled or labeled you're given an image and now you create two different augmentations of it so this is the same picture we used in SimCLR and MoCo so I'm just using it so that you can relate this concept with the earlier lecture where we talked in self-supervised learning about data augmentation consistency using contrastive losses so similar ideas have been used in semi-supervised learning as well so like I said you're already using data augmentation for labeled data so it doesn't matter if you", "start_timestamp": "00:18:59", "end_timestamp": "00:19:32", "start_second": 1139, "end_second": 1172, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=1139s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "enforce consistency there but for unlabeled data if you take two different views and make sure that the logits are close enough for the classifier that's being trained on labeled data that enforces a lot of structure on the unlabeled data so you just make sure that the predictions are roughly similar and if you do this for a lot of unlabeled data with a lot of different data augmentations then your classifier is getting very much regularized and generalizable even though it's training on very little data so that's the idea of label consistency constraints using", "start_timestamp": "00:19:32", "end_timestamp": "00:20:07", "start_second": 1172, "end_second": 1207, "url":
"https://www.youtube.com/watch?v=PXOhi6m09bA&t=1172s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "the augmentation and the more they documentation you use the better it is so that's it for the foundational material next we'd actually look at different semi-supervised learning algorithms like the PI model temporal ensemble in virtual adversarial training and so on but we also look at how the algorithms compare to each other and this particular paper from Google brain realistic evaluation decent spread learning algorithms compares these various different semi-square learning techniques on the C fart and svh and", "start_timestamp": "00:20:07", "end_timestamp": "00:20:49", "start_second": 1207, "end_second": 1249, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=1207s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "data set which are reasonably small and you can run a lot of prototyping experiments with that so the four algorithms we're going to be looking at our PI model temporal ensemble main teacher and virtual adversary training so basically let's look at the PI model basically the idea is pretty much whatever we talked about our legal consistency you take your image you are creative different views using the stochastic a dog hunter but stochastic it could be at a random crop or a sequence of data augmentations whose sequence is randomized or but", "start_timestamp": "00:20:49", "end_timestamp": "00:21:31", "start_second": 1249, "end_second": 1291, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=1249s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", 
"thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "there you apply it running grayscale so on and now you take it out you pass it to the model and the model itself could be stochastic it could have drop out so every forward pass could give you a different output even for the same image and you get two different latent variables for these different we'll turn the input down in the model so every time you make a forward pass you label data you can enforce it regularly the supervised crossing could be lost and if you throw unlabeled data you can enforce the square or square", "start_timestamp": "00:21:31", "end_timestamp": "00:22:08", "start_second": 1291, "end_second": 1328, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=1291s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "difference between the outputs of the model or dutiful views and for label either you can force both of these losses well for unlabeled data you can you just enforce this label consistency loss which is you just take your output before the softmax or even after the softmax it depends on how you want to implement it but you take a particular layer at the end and you make sure that the layers is similar for two different views and you weigh both these losses together so one is gonna be unsupervised on same squares loss and the other is", "start_timestamp": "00:22:08", "end_timestamp": "00:22:42", "start_second": 1328, "end_second": 1362, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=1328s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "supervised loss and you can actually control loose loss 
actually dominates the training at the beginning and at the end so for instance one reasonable motivation is you can make sure that the supervised loss dominates in the beginning so that the model already learns how to classify images and then you can ramp up the weight for the unsupervised loss so that it's learning the structure of the unlabeled data in a similar fashion so this idea is called the Pi model and this is the pseudocode for the Pi model x_i is", "start_timestamp": "00:22:42", "end_timestamp": "00:23:20", "start_second": 1362, "end_second": 1400, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=1362s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "your training data y_i are your labels w(t) is your ramped-up weight for the unsupervised loss f_theta(x) is your neural net that does the classification task and it could have some dropout in order to be stochastic g(x) is your data augmentation which is also stochastic and you basically perform two different augmentations of your mini-batch get two different outputs z_i and z_i-tilde and you make sure that z_i and z_i-tilde are close to each other using a squared error loss or some distance metric and you also make", "start_timestamp": "00:23:20", "end_timestamp": "00:23:55", "start_second": 1400, "end_second": 1435, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=1400s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "sure that the predictions of the classifier are matching the true ground truth whenever you have labels so that's really it that's as simple as I can get the Pi model basically it's using the label consistency principle so temporal
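The Pi-model pseudocode just described can be sketched as follows — a toy NumPy version, with additive noise standing in for the stochastic augmentation g(x) and dropout, and a Gaussian ramp-up w(t) as used in the paper (the ramp length T=80 is illustrative):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ramp_up(t, T=80):
    # w(t): ramps from ~0 at t=0 to 1 at t=T, so the supervised loss
    # dominates early and the consistency loss takes over later
    return float(np.exp(-5.0 * (1.0 - min(t / T, 1.0)) ** 2))

def pi_model_loss(x, y, w, t, rng):
    # Two stochastic forward passes of the same mini-batch
    z1 = (x + 0.1 * rng.normal(size=x.shape)) @ w
    z2 = (x + 0.1 * rng.normal(size=x.shape)) @ w
    p1 = softmax(z1)
    sup = -float(np.mean(np.log(p1[np.arange(len(y)), y] + 1e-12)))  # cross-entropy
    unsup = float(np.mean((z1 - z2) ** 2))                           # consistency
    return sup + ramp_up(t) * unsup

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3)); y = np.array([0, 1, 2, 0])
w = rng.normal(size=(3, 3))
loss = pi_model_loss(x, y, w, t=10, rng=rng)
```

In practice both views would be real augmentations and the unsupervised term is applied to labeled and unlabeled samples alike, while the cross-entropy term uses only the labeled ones.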
ensembling there's something slightly different which is it says hey I don't want to do forward passes of two different views all the time because it's expensive why not just keep a moving average of these sample embeddings for every single sample and make sure that the consistency", "start_timestamp": "00:23:55", "end_timestamp": "00:24:33", "start_second": 1435, "end_second": 1473, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=1435s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "holds across time so I would still do stochastic data augmentation with a stochastic network I would get an embedding every time in the forward pass but I would say that those embeddings should be close to some historical version of the same sample's embedding from the past and so that amounts to enforcing some kind of data augmentation constraint because you would have done a different augmentation in the past but you're going to keep an estimator of it for every single sample separately so this is very similar to those ideas we talked", "start_timestamp": "00:24:33", "end_timestamp": "00:25:10", "start_second": 1473, "end_second": 1510, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=1473s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "about in the GAN lectures in Improved GAN where there was a constraint on the parameters to be close to historical versions so this is that at a sample level and other than that it's pretty much the same as the Pi model there's a cross-entropy loss there's a ramp-up function for the unsupervised objective and both objectives are optimized together so this is the pseudocode for
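The per-sample moving average can be sketched in a few lines — a minimal illustration, where the decay alpha=0.6 and the bias correction follow the temporal ensembling paper but everything else is a toy:

```python
import numpy as np

class TemporalEnsemble:
    """Keep one EMA target per training sample, updated once per epoch."""
    def __init__(self, n_samples, n_classes, alpha=0.6):
        self.alpha = alpha
        self.Z = np.zeros((n_samples, n_classes))  # accumulated ensemble
        self.t = 0                                  # epochs seen so far

    def update(self, z_epoch):
        # z_epoch: this epoch's predictions/embeddings for every sample
        self.t += 1
        self.Z = self.alpha * self.Z + (1 - self.alpha) * z_epoch
        # Bias-corrected target that the consistency loss compares against
        return self.Z / (1 - self.alpha ** self.t)

ens = TemporalEnsemble(n_samples=1, n_classes=2)
target = ens.update(np.array([[1.0, 2.0]]))
```

Only one forward pass per sample per epoch is needed; the historical average plays the role of the second view, which is exactly the cost saving over the Pi model.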
temporal ensembling which is proposed in the same paper where the Pi model was proposed so one negative thing about temporal ensembling is it's", "start_timestamp": "00:25:10", "end_timestamp": "00:25:46", "start_second": 1510, "end_second": 1546, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=1510s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "not going to scale with the size of the dataset you're not gonna be able to maintain a separate moving average embedding for every sample if your dataset is big enough like a million or a billion images so mean teacher basically amortizes that and says hey if you want to keep an exponential moving average for embeddings why not just keep an exponential moving average of the parameters so you still take two different views but make sure that the embeddings match those of the moving average version rather", "start_timestamp": "00:25:46", "end_timestamp": "00:26:17", "start_second": 1546, "end_second": 1577, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=1546s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "than creating separate moving average embeddings for every sample so you take your model theta there's also an EMA version of theta and you make sure that the embeddings that you get for one view match the embeddings you get for the other view but with different encoders basically so that's the idea of this mean teacher approach where the teacher can be considered as the EMA version and you think about it as a teacher because it's giving you these constraints and you also perform the classification task as", "start_timestamp": "00:26:17", "end_timestamp": "00:26:51",
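The mean teacher update can be sketched like this — a hedged toy version with a single weight matrix; alpha=0.99 is a typical EMA decay for this method, not necessarily the lecture's exact value:

```python
import numpy as np

def ema_update(teacher_w, student_w, alpha=0.99):
    # Teacher = exponential moving average of the student's parameters,
    # amortizing the per-sample averages of temporal ensembling
    return alpha * teacher_w + (1 - alpha) * student_w

def mean_teacher_consistency(x, student_w, teacher_w, rng):
    # Two stochastic views, one through each encoder; in practice no
    # gradient flows through the teacher branch
    v1 = x + 0.1 * rng.normal(size=x.shape)
    v2 = x + 0.1 * rng.normal(size=x.shape)
    return float(np.mean((v1 @ student_w - v2 @ teacher_w) ** 2))

rng = np.random.default_rng(0)
student = rng.normal(size=(3, 2))
teacher = np.zeros((3, 2))
teacher = ema_update(teacher, student)   # called after every training step
loss = mean_teacher_consistency(rng.normal(size=(4, 3)), student, teacher, rng)
```

Because only one extra copy of the parameters is stored, memory no longer grows with the number of training samples.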
"start_second": 1577, "end_second": 1611, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=1577s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "fication lost in peril sometimes they finally let's look at virtual art whistle training so in adversarial training you're create using this fast sine gradient method where you basically calculate the gradient of your input image basically the gradient of your output with respect your input image this is a huge dimensional vector or matrix depending on what your input is and you move your input in the direction where you basically get this gradient you get this sign and you move your input in a small epsilon in that", "start_timestamp": "00:26:51", "end_timestamp": "00:27:32", "start_second": 1611, "end_second": 1652, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=1611s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "direction and that lets you fool your classifier and you want to make sure that the in your classifier is not full if you for these perturbations and so you would perform at with stable training to make sure that the classifier is not full at these data in these different rotor patient directions now in Sammy squirrel learning you don't have the labels for unlabeled data so how would you address your training there so the idea is to do virtual address zero training where you look at the distribution of your classes instead", "start_timestamp": "00:27:32", "end_timestamp": "00:28:07", "start_second": 1652, "end_second": 1687, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=1652s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- 
CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "of a particular class because you don't you you don't want to make take a particular class and see the perturbation direction for that class you take a distribution of classes and you take something a distance metric between your unperturbed data point and your perturb data point in trying to figure out the direction that maximizes this scale and it turns out if you linearized scale term you can actually solve for this direction are using power iteration and once you get the direction and you can make sure that you info", "start_timestamp": "00:28:07", "end_timestamp": "00:28:41", "start_second": 1687, "end_second": 1721, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=1687s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "structure your classifier around unlabeled data we're trying to make sure that on these perturbations are unlabeled here the classifier is still not fooled even though you don't have access to a true label so that's why it's referred to as virtual data Co training it's not actually adversarial training but it shares principles with annual training done cleverly with some mathematical tricks and this is basically the pseudocode for the power iteration method which i mentioned because it works because you linearized", "start_timestamp": "00:28:41", "end_timestamp": "00:29:11", "start_second": 1721, "end_second": 1751, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=1721s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "the k ho term so that's that's it for the techniques 
Pi model temporal ensembling mean teacher and virtual adversarial training these are the four techniques considered in this comparison paper and they make sure that they use the same architecture for all these techniques because prior work did not do that so they use a wide ResNet and the idea in wide ResNets is your normal ResNet goes through a bottleneck and the wide ResNet doesn't do that it just uses three by three convs and does no one by one conv for downsampling so it's as wide", "start_timestamp": "00:29:11", "end_timestamp": "00:29:53", "start_second": 1751, "end_second": 1793, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=1751s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "as possible so given all these constraints all these similar architectures similar hyperparameters for the various different semi-supervised learning algorithms it turns out that virtual adversarial training performs the best if you look at CIFAR-10 with four thousand labels so CIFAR-10 originally has 50,000 images so you basically use 46,000 unlabeled data points and four thousand labeled data points which is like four hundred labels per class so that's really tiny compared to what the original dataset is and virtual adversarial training", "start_timestamp": "00:29:53", "end_timestamp": "00:30:37", "start_second": 1793, "end_second": 1837, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=1793s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "gets an error rate of thirteen point eight six percent which is the best among all the other methods and virtual adversarial training plus entropy minimization together gets an even lower error rate of
thirteen point one three percent and the trend is similar for the SVHN dataset where virtual adversarial training plus entropy minimization outperforms all the other methods and the authors also say that they report much better baselines than prior work for instance in prior work the supervised baseline had much higher error", "start_timestamp": "00:30:37", "end_timestamp": "00:31:13", "start_second": 1837, "end_second": 1873, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=1837s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "rates than what the authors report so this paper actually took the effort to make sure that all the ablations are done carefully and one negative thing about semi-supervised learning on CIFAR is that if you use something like pre-training you take a model pre-trained on ImageNet labels and then you fine-tune it on CIFAR you actually get better numbers than using the unlabeled data on CIFAR itself even though the unlabeled data on CIFAR is coming from the same underlying distribution and ImageNet is a", "start_timestamp": "00:31:13", "end_timestamp": "00:31:53", "start_second": 1873, "end_second": 1913, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=1873s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "completely different distribution with completely different image sizes so that's slightly bad and there's a significant difference of at least one plus percentage points and even if you address the class overlap and remove the overlapping classes in CIFAR you still get a lower error rate than just using semi-supervised learning on CIFAR and the authors
also analyze things like hey if your unlabeled data is assumed to come from CIFAR's ten classes and if you assume that the", "start_timestamp": "00:31:53", "end_timestamp": "00:32:37", "start_second": 1913, "end_second": 1957, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=1913s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "unlabeled data doesn't have the same uniform class distribution as the labeled data and you can play around with what the distribution of unlabeled data points is as far as class overlap with the labeled data goes it is clear that as the distribution mismatch increases virtual adversarial training is the most resistant compared to all the other approaches similarly if you vary the number of labeled data points obviously the test error is going to be lower as the number of labeled data points increases because that's slowly getting", "start_timestamp": "00:32:37", "end_timestamp": "00:33:12", "start_second": 1957, "end_second": 1992, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=1957s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "to the supervised learning regime and all the methods perform roughly similar but in the extreme scenario where you have very few labels like 50 labeled data points and so on virtual adversarial training is significantly the best on SVHN and it is also the best on CIFAR though other methods are competitive as well the lessons from this paper are when you compare different algorithms in semi-supervised learning you should make sure that you use a standard architecture and an equal training", "start_timestamp": "00:33:12",
"end_timestamp": "00:33:47", "start_second": 1992, "end_second": 2027, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=1992s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "budget which is you should spend equal amount of time tuning hyper parameters for all of them and if your unlabeled data is coming from a distribution that is a necessary overlap with your label data points then the benefits of something surprise learning will not be there thirdly most methods that likelike don't work well in a very very low data regime so this is not true right now but this was true when that paper was published and we look see how it changed over time and transferring 320 machine had produce better error rates but again", "start_timestamp": "00:33:47", "end_timestamp": "00:34:24", "start_second": 2027, "end_second": 2064, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=2027s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "it's not true right now so this is an old paper but the main reason for going through that is to introduce all these different techniques like the PI Maru and virtually I was still training and temporal on something in the mean teacher so the agenda for the less rest of the lecture is to cover three very recent papers and some surprise learning that actually have taken some space learning to a whole new level unsupervised the augmentation makes match and noisy student so before that let's actually take a break", "start_timestamp": "00:34:24", "end_timestamp": "00:35:05", "start_second": 2064, "end_second": 2105, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=2064s", "title": "L9 Semi-Supervised Learning and 
Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "okay resuming so first let's look into unsupervised data augmentation for consistency training in semi-supervised learning so this is a paper from Google Brain from Quoc Le's group and these slides are from Thang Luong who was one of the authors of this paper so we've already seen how important data augmentation is and it's been significantly useful in supervised learning in the high data regime but if you just do supervised learning and you don't have a lot of labels just data augmentation isn't gonna get you very", "start_timestamp": "00:35:05", "end_timestamp": "00:35:55", "start_second": 2105, "end_second": 2155, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=2105s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "far even if you use extremely aggressive data augmentation like AutoAugment which is shown here where you can basically rotate shear and add colors to a scene and create a lot of different views of the same image similarly in language you can create different versions of the same phrase or sentence using a technique called back translation so what it does is basically take a sentence in a particular language translate it to another language and then translate it back from the other language to the", "start_timestamp": "00:35:55", "end_timestamp": "00:36:35", "start_second": 2155, "end_second": 2195, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=2155s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA",
"text": "existing set first source language so you go from a source language you go to a target language and you come back from the target language to the source language so you hope that this entropy in the decoder and the encoder will result in a in in a new version of the same sentence and examples are here so so the source sentence here is given the low budget and correctional limitations the movie is very good and if you look at it three different Mac translations since it was highly limited in terms of the budget and the production", "start_timestamp": "00:36:35", "end_timestamp": "00:37:13", "start_second": 2195, "end_second": 2233, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=2195s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "restrictions the film was cheerful there are a few budget items with production limitations to make this film a really good one dude is a small dollar amount and recommendations Neos for them is very beautiful so the first and third versions are particularly really good well the second conveys a slightly different meaning but it's more or less there so this is giving a lot of diversity and based on which language you move to you're gonna get very different outputs and also the same language you're gonna get different", "start_timestamp": "00:37:13", "end_timestamp": "00:37:46", "start_second": 2233, "end_second": 2266, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=2233s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "outputs for different decoding so the key idea and UDA or unsprayed augmentation is apply the state-of-the-art data augmentation techniques on unlabeled data for 
consistency training in semi-supervised learning you've already seen that the Pi model was basically doing consistency training but it was a pretty old paper and data augmentation and these neural architectures were not as developed back then so you can think of UDA as doing the Pi model right by using a lot of data augmentations on the right", "start_timestamp": "00:37:46", "end_timestamp": "00:38:23", "start_second": 2266, "end_second": 2303, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=2266s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "architectures so here's a nice way to understand UDA so think about labeled data you have x and your ground truth y-star and you're training a classifier p_theta(y|x) and you have your standard supervised cross-entropy loss that makes sure that the logits of the true class are maximized and you also have unlabeled data so this is the situation and models like virtual adversarial training add noise to regularize the model's predictions and the noise is the virtual adversarial", "start_timestamp": "00:38:23", "end_timestamp": "00:38:58", "start_second": 2303, "end_second": 2338, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=2303s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "training direction where you calculate the gradient in an approximate fashion next you have this thing called the unsupervised consistency loss which is you take the noised model and your original model and you make sure that the logits are similar on unlabeled data so this is something you already know and the final loss is a combination of the supervised
and unsupervised consistency loss and you can see that virtual adversity of training actually works pretty well so this is a green eyes illustration of which I do sell training", "start_timestamp": "00:38:58", "end_timestamp": "00:39:35", "start_second": 2338, "end_second": 2375, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=2338s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "even though it's being presented in the UTA strikes so the uncolored data points are unlabeled data by the green and pink data points with a label here you only have roughly eight data points which are labeled but after performing virtual adversity of training after imposing the consistency between the model and the noise version of the model with a noise comes from that you can see how this labels propagated and covered up this federal so that's really cool so that's the goal of semi-square learning to do this really well in high dimensions when", "start_timestamp": "00:39:35", "end_timestamp": "00:40:10", "start_second": 2375, "end_second": 2410, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=2375s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "you have a lot of data and a lot more parameters so you can think of UDA as creating this noise at the input level using various different data augmentations and depending on the two modality for instance you use Auto Alcuin for images you would use TF idea word replacement or back translation for NLP and based on that you create different organizations at the same image or the same sample and enforces consistency laws and the unsupervised consistency but the last part and you also do supervised 
cross and freelance", "start_timestamp": "00:40:10", "end_timestamp": "00:40:52", "start_second": 2410, "end_second": 2452, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=2410s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "and you together out twice this so that's basically UDA and augmentations provides a diverse and valid perturbations for your input so like I said back translation produce three different versions that look very different from each other in converted roughly the same meaning as the original source sentence so in this case they actually went from English to French and back to English but you can also think of doing it to other languages and you can increase the diversity by playing around the temperature various different", "start_timestamp": "00:40:52", "end_timestamp": "00:41:29", "start_second": 2452, "end_second": 2489, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=2452s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "sampling techniques like beam search on nucleus sampling and you know whenever you do use a software so you can always use a temperature there you're gonna get tablet samples if you use high not high enough temperature and if use a low enough temperature you're going to get the most confidence after product samples with less diversity but high quality so you can control for that similarly in images you can use argument but depending on the type of argumentation you can control the strength of the argumentation and get", "start_timestamp": "00:41:29", "end_timestamp": "00:42:01", "start_second": 2489, "end_second": 2521, "url": 
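The combined objective just described — supervised cross-entropy on labeled examples plus a consistency term pulling together the predictions on an unlabeled example and its augmented version — can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's implementation; the function names, the KL-based consistency term, and the weight `lam` are assumptions for the sketch (UDA additionally stops gradients through the clean prediction and uses further tricks).

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax over the last axis.
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(probs, label):
    # Supervised loss: -log p(y* | x) for the true class.
    return -np.log(probs[label] + 1e-12)

def kl_divergence(p, q):
    # Consistency loss: KL(p || q) between clean and augmented predictions.
    return np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)))

def uda_loss(logits_labeled, label, logits_unlabeled, logits_augmented, lam=1.0):
    # Final loss = supervised cross-entropy + weighted unsupervised consistency.
    sup = cross_entropy(softmax(logits_labeled), label)
    cons = kl_divergence(softmax(logits_unlabeled), softmax(logits_augmented))
    return sup + lam * cons
```

When the augmented prediction matches the clean one, the consistency term vanishes and only the supervised loss remains.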
You get sufficient distortions at whatever level of distortion you want.

So let's look at the experiments carried out in this paper. In language, they experimented with document classification and review sentiment analysis. If you look at the sizes of the datasets, they have something like twenty-five thousand, five hundred sixty thousand, or six hundred fifty thousand samples, and the error rates before BERT and after BERT are listed on different lines — you can see how BERT significantly improved the error rates. What they did in the UDA paper is, for various initializations — random, BERT base, BERT large, and BERT fine-tuned — report numbers with and without UDA, but notice that the number of labeled data points is reduced by three or four orders of magnitude. For IMDB you earlier had twenty-five thousand labeled data points, but the authors run this experiment with only twenty labeled data points, which is three orders of magnitude fewer; similarly for Yelp it's 2.5K instead of 650K, and so on. You can see how the performance, especially with BERT fine-tuned, is on par with BERT large trained on all the labeled data points. So the performance you get from taking BERT large and fine-tuning on the fully supervised baseline is on par with what you get from UDA plus BERT fine-tuned, but with something like a thousand times fewer labels, which is incredible. It means the consistency loss using back translation is actually working really well.

Secondly, they have this idea called training signal annealing, which is about preventing over-training on the labeled data: you have so few labels that you want to make sure your classifier doesn't over-train on them. For that they have a thresholding procedure: if the classifier is sufficiently confident on a labeled example, they don't train on that example, and the threshold is varied over time. You have an indicator variable for whether the classifier's output probability for the true class is less than the threshold, and only if it is will the model train on that data point; otherwise you just don't backprop those gradients. You can play around with different schedules for this threshold: at the end of training you want the threshold high enough that the model is no longer held back, while in the beginning you don't want a high threshold, because the model's confident predictions could still be erroneous. So they try linear, exponential, and log schedules in the paper.

Finally, there's a really cool plot that shows the dream of this line of work: the benefits hold even in the high-data regime. Even with all twenty-five thousand labeled examples, the performance you get from semi-supervised learning is better than supervised BERT fine-tuning, so it's actually able to take advantage of all the unlabeled data it has.

Next, let's look at the computer vision experiments in the UDA paper. They use the standard semi-supervised learning benchmarks on CIFAR-10 and SVHN, which you saw in the prior work on realistic evaluation of semi-supervised learning algorithms. A lot of these baselines are from that paper, where there is a Wide ResNet-28 and the parameter counts are controlled across the different algorithms. The numbers are reported for CIFAR with 4000 labels and SVHN with 1000 labels, and UDA is the best algorithm in this setting, with an error rate as low as 5% on CIFAR and 2.5% on SVHN; with architectural changes like Shake-Shake, ShakeDrop, and PyramidNet they get all the way down to 2.7 percent, which is significantly lower. So with just four thousand labeled samples they're at roughly 97 percent accuracy or better on CIFAR, which is the kind of accuracy you usually get using all the labeled data points. That really means the data augmentation consistency is helping. The method also scales with larger networks: when you move from the 1.5 million parameters typically used in semi-supervised learning on CIFAR to a model that's big enough — past 26 million parameters — the error rates get significantly lower. So the technique scales with the number of parameters used.

Next is how you can match the fully supervised baselines with an order of magnitude fewer labeled data points. The fully supervised baseline uses 50,000 labels and this one uses 4,000 labels — more than 10x fewer — and if you look at the numbers, you get a 5.4 percent error rate with supervised training and 5.3 percent with UDA for the Wide ResNet-28 architecture. For the other models, even though UDA is slightly behind for Shake-Shake and ShakeDrop, it roughly matches the supervised baselines; it's not as good as the AutoAugment version, but it's still very close. Finally, they also ablated how much data augmentation matters, and it seems to be the biggest factor, as we would expect — we've seen that in self-supervised learning as well, and that's what they observe here.

So the summary of UDA: it's a data augmentation technique applied on unlabeled data to improve the performance of a classifier trained on very few labeled data points. Data augmentation is the critical part of this pipeline and an effective perturbation technique for semi-supervised learning — even more effective than perturbing the model. UDA significantly improves results for both language and vision, with 10x, 100x, or 1000x fewer labeled data requirements, it combines very well with transfer learning like BERT, and it scales with model size.

They also experimented with ImageNet, where they take unlabeled ImageNet — about 1.3 million unlabeled data points — and use 10% labeled data, which is 100,000 labels, or roughly 100 labels per class. Pure supervised training gets 55%, whereas UDA gets 68.9, close to 69 percent accuracy — about 14 points better — and this shows the benefit of using more unlabeled data points.

Another thing they tried is a dataset even larger than ImageNet: the JFT dataset from Google, an internal dataset on the scale of Google Photos. They used 1.3 million images from JFT just to see how much domain mismatch matters, and while obtaining extra in-domain unlabeled data helps, the out-of-domain unlabeled data actually hurt performance. So that didn't work as well; using more in-domain data works better for them. They also have ablations on different schedules for the thresholding, and they find the exponential schedule is better for language while the linear schedule works better for images — that's something very empirical. As for diversity constraints — how you control the diversity to make sure the data augmentation is effective — they use a number of tricks, like minimizing the entropy, controlling the softmax temperature in the decoding, and confidence-based masking.
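Both thresholding tricks mentioned here — the annealed threshold on labeled examples and confidence-based masking of unlabeled data — reduce to simple comparisons against predicted probabilities. A rough sketch; the schedule shapes and the constant 5 are illustrative assumptions, not the paper's exact formulas.

```python
import numpy as np

def tsa_threshold(t, T, K, schedule="linear"):
    # Training signal annealing: the threshold grows from 1/K (chance level
    # for K classes) to 1 over T training steps, releasing the supervised
    # signal gradually.
    frac = t / T
    if schedule == "linear":
        a = frac
    elif schedule == "exp":
        a = np.exp((frac - 1) * 5)   # release slowly early, quickly late
    else:  # "log"
        a = 1 - np.exp(-frac * 5)    # release quickly early, slowly late
    return a * (1 - 1 / K) + 1 / K

def tsa_mask(true_class_prob, threshold):
    # Keep a labeled example only while the model is NOT yet confident on it,
    # so the classifier cannot over-train on the few labels.
    return true_class_prob < threshold

def confidence_mask(class_probs, cutoff=0.9):
    # Keep an unlabeled example only if the model's top prediction is
    # confident enough — the filtering idea used for out-of-domain data.
    return class_probs.max(axis=-1) >= cutoff
```

Examples failing either mask simply contribute no gradient for that step.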
For the ImageNet case, where they found that out-of-distribution unlabeled datasets like JFT hurt the performance, they would take the most confident predictions from the classifier trained on labeled data and use those to filter the unlabeled data, and then they would find gains. So domain-relevance-based data filtering is a critical aspect of using unlabeled data to improve performance on labeled data. As we already saw, most of the mathematical or intuitive foundations of semi-supervised learning make the assumption that the unlabeled data comes from the same data distribution — the marginal corresponding to the joint distribution you use for the labeled data points. But that's often not the case in practical scenarios, because the unlabeled data usually comes from some other dataset or data source, and you still want to transfer knowledge from it. So it's a suspect assumption to make, but in practice there are workarounds using the kinds of filtering techniques this paper proposes.

Next, let's look at MixMatch, which is another very interesting paper in a similar spirit. The key idea in MixMatch: you take unlabeled data, perform a lot of different augmentations, run all of these augmentations through the same classifier, get the predictions, and average the predictions across the augmentations, so you end up with a set of class probabilities. You can sharpen those class probabilities with a softmax temperature control, and once you have a sharpened distribution you have an idea of what the classifier would have guessed for the unlabeled data point. Now you take this guess and use it as a proxy label for the unlabeled data point — as part of your labeled data — for semi-supervised learning. It's that simple; the only thing is that your guess for the unlabeled data comes from averaging over multiple augmentations and sharpening the distribution, so it's confident enough.

It's called MixMatch because it uses the mixup trick. If you're not familiar with mixup, the idea is this: you take your input x and your output y — say you're training a model to predict y from x — and you create convex combinations of pairs (x_i, x_j) to make new data points. For images it would be something like: for every pixel, you take the weighted combination of the pixel from the first image and the pixel from the second image to create a new image, and similarly you average the corresponding ground-truth labels to create the target for your cross-entropy loss. This technique is called mixup, and it's a data augmentation technique.

So MixMatch basically does the following: it takes a batch of labeled data and a batch of unlabeled data and produces a new batch of processed labeled examples using this guessing technique and mixing up. Let's walk through the English version of the pseudocode. You have your batch of labeled data points (x_b, p_b) and unlabeled data points u_b. You apply data augmentation to x_b, and you apply K different data augmentations to u_b, the unlabeled data point, and compute the average prediction across all the augmentations of u_b. In practice they just use K equal to 2, but it could be larger if you wanted. You apply temperature sharpening — a softmax temperature — to make sure the averaged predictions are peaky enough, and once they are, you have the proxy label for the unlabeled data point. So you augment the unlabeled examples with these guessed labels, and using these guesses and the original labels you create an augmented minibatch, and you shuffle and combine these minibatch data points using the mixup trick. Once you've applied mixup to the labeled and unlabeled data, you just train a regular semi-supervised learning model with the consistency loss and the supervised cross-entropy loss, treating this new batch as the MixMatch output. So MixMatch produces this processed labeled-plus-unlabeled batch from two different data batches that come in independently.

The loss function for MixMatch is the regular cross-entropy loss for the classifier, plus the kind of consistency loss for unlabeled data points that you normally use in semi-supervised learning, with some weighting constant, and in practice it works really well. Earlier you saw in the realistic evaluation of semi-supervised learning algorithms that these techniques were not really working well; this was concurrent work with UDA, and you can clearly see how it really improves the performance on CIFAR and SVHN. Here are the numbers: it's not only working in the regime where you have 4,000 labeled examples, it works all the way down to 250 labels. MixMatch gets down to roughly a 10% error rate on CIFAR, which is very impressive — 250 labels just means 25 images per class, and you still get a classifier with around 90% accuracy. So that's it for MixMatch.
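The MixMatch steps just walked through — guess a label by averaging predictions over K augmentations, sharpen with a temperature, then mix examples and labels with mixup — look roughly like this in NumPy. The function names and the `augment` placeholder are assumptions for the sketch; the paper applies this per batch with additional bookkeeping.

```python
import numpy as np

rng = np.random.default_rng(0)

def sharpen(p, T=0.5):
    # Temperature sharpening: a lower T makes the distribution peakier.
    p = p ** (1.0 / T)
    return p / p.sum()

def guess_label(model, u, augment, K=2):
    # Average the model's predictions over K augmentations of an unlabeled
    # point, then sharpen the average to get a confident soft pseudo-label.
    avg = np.mean([model(augment(u)) for _ in range(K)], axis=0)
    return sharpen(avg)

def mixup(x1, y1, x2, y2, alpha=0.75):
    # Convex combination of two examples and their (soft) labels.
    lam = rng.beta(alpha, alpha)
    lam = max(lam, 1 - lam)  # MixMatch keeps the result closer to the first input
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2
```

The guessed soft labels and the original labels then feed the cross-entropy and consistency losses as described above.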
MixMatch and UDA were concurrent work with similar ideas and different implementations; UDA is probably broader in its applications, covering NLP as well, while MixMatch was mainly analyzed on CIFAR and SVHN.

So let's look at the final paper on the agenda: self-training with noisy student. This is the largest-scale semi-supervised learning experiment conducted in machine learning so far, and these slides are also from Thang Luong, one of the authors of the paper. As I said in the background on semi-supervised learning, the dream is for unlabeled data to improve the performance of supervised learning even when you already have a lot of labels, and that's what this paper tries to achieve. You'll remember how unfiltered JFT was not able to give sufficient gains on ImageNet in the UDA paper, and they used clever filtering techniques there; noisy student is a larger-scale version of that.

The way it works is as follows. You train a teacher model on the labeled data to get a really good classifier, you use that classifier to predict on unlabeled data and infer pseudo-labels, and then you train a student model on the combined data — the original labeled data you used for the teacher, as well as the guessed labels on the unlabeled data — while adding noise to this process through data augmentation, dropout, and stochastic depth (a stochastic version of skip connections). That gives you a noisy student, which is why it's called noisy student: you add a lot of data augmentation to this process. Once you do that, your student model is pretty good — it's highly regularized and also trained on a lot more data, so it can't easily over-train on anything, and it trains on a lot of proxy labels generated from the teacher, which have already been filtered because you only take the confident predictions. Now you can treat that student as a new teacher and repeat this process multiple times.
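The teacher–student loop just described can be sketched as schematic Python. Everything here is an assumption for illustration: `train` stands in for fitting an (EfficientNet-style) model with the noise — dropout, stochastic depth, RandAugment — applied inside it, and the confidence cutoff is an illustrative choice.

```python
def noisy_student(train, labeled, unlabeled, n_iterations=3, confidence=0.9):
    # Self-training loop: the teacher pseudo-labels the unlabeled pool, a
    # noised student is trained on labeled + confidently pseudo-labeled data,
    # and the student then becomes the next teacher.
    teacher = train(labeled, noise=False)
    for _ in range(n_iterations):
        pseudo = []
        for u in unlabeled:
            probs = teacher(u)              # soft pseudo-label (not one-hot)
            if max(probs) >= confidence:    # keep only confident predictions
                pseudo.append((u, probs))
        student = train(labeled + pseudo, noise=True)
        teacher = student                   # student becomes the new teacher
    return teacher
```

Each `train` call in later iterations can also use a wider or deeper student than the teacher, matching the iterative scaling described next.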
infer the pseudo you infer the zero labels unlabeled data again with this new teacher and you create a new student and so on and repeat this process like multiple times and you get a really good for Adam so here are the experiment settings the architecture that you use is efficient at and the model noises they use is dropout of stochastic tap the input nicely uses Randolph which is a version of auto valve it's more efficient and for zero labels they use", "start_timestamp": "00:59:46", "end_timestamp": "01:00:24", "start_second": 3586, "end_second": 3624, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=3586s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "the soft seal enables continuous values they don't actually use the one harden coatings and the label data said they uses a ridge net which is 1.3 million images and the unlabeled data said they use is jft which is 300 million images and they basically do iterative training where they take the biggest efficient at model possible and actually make it wider for the next next scenarios so the original teacher could be b7 which is the widest and deepest efficient net that exists and the student model the trains next could be another bigger", "start_timestamp": "01:00:24", "end_timestamp": "01:01:02", "start_second": 3624, "end_second": 3662, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=3624s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "version of that model which is the L tomorrow the Dakar so in terms of results they actually had the state-of-the-art numbers for an image not already here two eighty eight point four percent top one accuracy which is significantly 
better than any other model and the previous best was eighty six point four percent which is actually trained on 3.5 billion labeled images from Instagram so with just one point three million labeled images and 300 million unlabeled images you're actually able to surpass those numbers", "start_timestamp": "01:01:02", "end_timestamp": "01:01:34", "start_second": 3662, "end_second": 3694, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=3662s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "by two percentages significant especially in those those regimes and they do that with one order of magnitude fewer labels and they actually do that with twice few as that smaller in terms of number of parameters because they use efficient mass which are much more efficient in terms of parameters and flops and rest net or rest next so the improvements are also exists without iterative training so it's not that they actually need it right of training so even without iterative training they get significant improvement which is one", "start_timestamp": "01:01:34", "end_timestamp": "01:02:10", "start_second": 3694, "end_second": 3730, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=3694s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "iteration they can get minimum of one percent improvement for all the different model regimes b0 b2 b5 b7 improvement of 1% is pretty standard and which is pretty pretty nice because this means that the filtering mechanism actually works for finally they also show really good robustness results on a mission net because of training on a lot of different data augmentations and model noises and water unlabeled 
data in addition to labeled data, you would expect the resulting classifier to actually be good on these robustness", "start_timestamp": "01:02:10", "end_timestamp": "01:02:44", "start_second": 3730, "end_second": 3764, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=3730s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "benchmarks, which happens to be the case. So they actually beat the state of the art on robustness benchmarks: on ImageNet-A, which is a harder version of ImageNet where usual classifiers fail, their model actually gets 83.7 percent top-1, which is unprecedented, and they also do really well on ImageNet-C and ImageNet-P, where they get very, very competitive top-1 accuracy numbers, significantly better than the", "start_timestamp": "01:02:44", "end_timestamp": "01:03:21", "start_second": 3764, "end_second": 3801, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=3764s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "models that did not use this noisy student process. So here are examples where the model trained with noisy student, evaluated on these harder versions of ImageNet, ended up making the right predictions, shown in black, while the baseline models ended up making the wrong predictions. The baseline models are focusing on the wrong aspects of the image; for instance, the model is able to capture the basketball in the photo on the bottom row, where there's a man holding a", "start_timestamp": "01:03:21", "end_timestamp": "01:04:00",
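The iterative teacher-student procedure described in this lecture (an un-noised teacher pseudo-labels the unlabeled data, a noised student retrains on labeled plus pseudo-labeled data, and the student becomes the next teacher) can be sketched in a runnable toy form. The nearest-centroid "model" and Gaussian input noise below are hypothetical stand-ins for EfficientNet and RandAugment/dropout/stochastic depth, not the paper's actual code.

```python
# Toy sketch of the noisy-student loop: the classifier is a 1-D
# nearest-centroid model, and "noise" is Gaussian jitter on the inputs.
import random

def fit_centroids(data):
    """Train a nearest-centroid classifier; data is [(x, y)] with y in {0, 1}."""
    sums, counts = {0: 0.0, 1: 0.0}, {0: 0, 1: 0}
    for x, y in data:
        sums[y] += x
        counts[y] += 1
    return {y: sums[y] / max(counts[y], 1) for y in (0, 1)}

def predict(centroids, x):
    """Classify x by the nearest class centroid."""
    return min((0, 1), key=lambda y: abs(x - centroids[y]))

def noisy_student(labeled, unlabeled, n_iters=3, noise=0.1, seed=0):
    rng = random.Random(seed)
    teacher = fit_centroids(labeled)  # initial teacher: labeled data only
    for _ in range(n_iters):
        # 1. The un-noised teacher infers pseudo labels on unlabeled data.
        pseudo = [(x, predict(teacher, x)) for x in unlabeled]
        # 2. The student trains on labeled + pseudo-labeled data WITH noise
        #    (stand-in for RandAugment / dropout / stochastic depth).
        noised = [(x + rng.gauss(0, noise), y) for x, y in labeled + pseudo]
        teacher = fit_centroids(noised)  # 3. student becomes the next teacher
    return teacher

labeled = [(0.0, 0), (1.0, 1)]
unlabeled = [0.1, 0.2, 0.9, 1.1]
model = noisy_student(labeled, unlabeled)
```

With only two labeled points, the pseudo-labeled points pull the centroids toward the true cluster centers over the iterations, which is the filtering-and-distillation effect the lecture describes.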
"start_second": 3801, "end_second": 3840, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=3801s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "basketball whereas the baseline models are not able to do that so so that shows this models very fine-grain in terms of its recognition abilities and here are a bigger example is this a dragonfly at the right but but the baseline models of food and thinking it's a bullfrog and similarly parking meter was his vacuum swing was his mosquito net so this mod is actually very very good at details and they also have ablations for how much the noise matters in the in this process and it seems to matter significantly enough when he used all", "start_timestamp": "01:04:00", "end_timestamp": "01:04:47", "start_second": 3840, "end_second": 3887, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=3840s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "the different kinds of noises like grappa stochastic depth and data augmentation you get the best possible numbers so in summary we looked at semi-square is learning and it's a practically important problem in the industry for two different scenarios one is when you have a lot of label data and a lot more on label data like for instance image ladder and j ft and you're trying to improve the performance of image net the other is when you have very little label data and you have plenty of unequal data which is usually", "start_timestamp": "01:04:47", "end_timestamp": "01:05:18", "start_second": 3887, "end_second": 3918, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=3887s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution 
Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "the case in medicine or finance or something like that so the promise of semi-supervised learning is always existed for the second second scenario but there have been very good results in the the last few months or like last year or so in both these scenarios which is the noisiest student model really helping in the scenario where you have a lot of legal data but you have a lot more and label data but then unsupervised data augmentation or mix-match is really very good at the low Reiter regime where you have unlabeled data but very little", "start_timestamp": "01:05:18", "end_timestamp": "01:06:02", "start_second": 3918, "end_second": 3962, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=3918s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "label data and you're able to do really well so this means that when you have the scenario are you using unlabeled data to improve the performance of supervised learning systems self supervised learning is not necessarily the only option semi-supervised learning is asked lucrative or probably even better because its ability to improve the performance even in the high data regimes and make it possible for building emotional classifiers that have an unprecedented top on accuracies like noisy student that's it for the lecture", "start_timestamp": "01:06:02", "end_timestamp": "01:06:36", "start_second": 3962, "end_second": 3996, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=3962s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "thank you 
very much. All right, so let's get started. Welcome back to lecture nine. So we will have a two-part lecture today: in the first part we will look at something called unsupervised distribution alignment, which also goes by a lot of other names, and then the second part will be a guest lecture by Professor Alyosha, talking about some of the works from his lab. Any logistics questions before we dive into the lecture? For milestones, can we", "start_timestamp": "01:06:36", "end_timestamp": "01:07:35", "start_second": 3996, "end_second": 4055, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=3996s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "have late days for milestones? All right, there you go (which no one is working on). All right, so let's get started. In this lecture we will look at unsupervised distribution alignment. So what does that even mean? Let's remove the 'unsupervised' part and just first look at a distribution alignment problem. A lot of problems in image-to-image translation take this form: let's say I want to go from semantic masks to RGB images. This is a distribution alignment problem, because we can think of it as having a distribution over masks", "start_timestamp": "01:07:35", "end_timestamp": "01:09:04", "start_second": 4055, "end_second": 4144, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=4055s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "here, and we also have a distribution of just regular images here, and they co-occur with a certain joint probability distribution. Mostly,
for one image there is only one correct semantic mask, but for one mask there could be many corresponding images, and the goal is: how can you align these two in such a way that when I give you an image on the right, you can generate the mask, or the other way around if you want to generate more training data? And there are more image problems that take this form:", "start_timestamp": "01:09:04", "end_timestamp": "01:09:42", "start_second": 4144, "end_second": 4182, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=4144s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "let's say I have an image: what does it look like in the daytime, and what does it look like at nighttime? Then again we can think of it as having a distribution of daytime images and also a distribution of nighttime images, and you want to align them in a certain way. So you may ask why this is helpful. One way that this could be helpful: say we want to train autonomous vehicles to drive safely at night, but it's harder to collect data at night, so is there a way for us to collect corresponding images during", "start_timestamp": "01:09:42", "end_timestamp": "01:10:15", "start_second": 4182, "end_second": 4215, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=4182s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "daytime and then find a way to find their nighttime counterparts? That would be useful if we can do this. A lot of other problems also fall under this formulation, like black-and-white images to color images, and basically everything that we have seen is relatively tractable, because I
can totally just take a color image and convert it into black and white, and that gives me a lot of pairs that I can train on, and similarly for the semantic masks and Street View RGB images, as well as", "start_timestamp": "01:10:15", "end_timestamp": "01:10:52", "start_second": 4215, "end_second": 4252, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=4215s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "daytime and nighttime; for a lot of those you can actually find natural pairs. So these are some distribution alignment problems in image space, and this kind of distribution alignment problem also happens in the text analog of that. The most straightforward example is really just machine translation: how do you translate a sentence or a paragraph from one language to another? That again you can think of as a distribution alignment problem: you can think of there being a distribution of English text and then", "start_timestamp": "01:10:52", "end_timestamp": "01:11:27", "start_second": 4252, "end_second": 4287, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=4252s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "a distribution of Chinese text, and the question here is how do you align these two things together. So when this kind of distribution alignment problem is supervised, it is relatively easy: in the case where an image goes to a semantic mask, it's basically just a semantic segmentation problem; when it's other image-to-image translation, there's the pix2pix work that was done here at Berkeley; and for text-to-text domain alignment, when you have the supervised
pairs, it's just machine", "start_timestamp": "01:11:27", "end_timestamp": "01:12:08", "start_second": 4287, "end_second": 4328, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=4287s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "translation; and when you want to go from image to text, again when you have supervised pairs, these are just captioning tasks, and so on. So in the end it really just boils down to fitting a certain conditional distribution: given image b, what is the correct mask a? And you have this luxury when you have a-and-b pairs that co-occur in the real world, either through your annotation effort or by taking an image at daytime and taking the same image at nighttime. As long as you can gather this kind of pairs, it is", "start_timestamp": "01:12:08", "end_timestamp": "01:12:43", "start_second": 4328, "end_second": 4363, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=4328s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "somewhat trivial, at least from a formulation perspective; it's really just fitting a conditional distribution, and we have talked about all sorts of ways to fit distributions in this class, with an autoregressive model or whatever. But the question becomes interesting for this kind of paired data: what if they are expensive to obtain, or they just don't exist? Then we are basically going out of the range of this supervised distribution alignment problem: you have one distribution, you have another, but you don't have any paired data. Then", "start_timestamp": "01:12:43", "end_timestamp": "01:13:25", "start_second": 4363, "end_second": 4405, "url":
"https://www.youtube.com/watch?v=PXOhi6m09bA&t=4363s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "like can you still do this or even do it at all so I'm taking some examples from a paper called cycle game like what if you want to turn the painting into a photograph or turn a photograph into a painting the second one might be more tractable because like you could possibly say I take a picture and then I hire someone to paint it for me but if I want to do it in a very specific style by a specific artist and you really couldn't do that so in a sense the natural pairs don't even exist in the real world similarly like if you want", "start_timestamp": "01:13:25", "end_timestamp": "01:14:02", "start_second": 4405, "end_second": 4442, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=4405s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "you for whatever reason if we want to turn a zebra into a horse or turn a horse into a zebra then it would be very difficult to force a zebra in a horse to take up exactly the same pose and take a picture of them having the exact correspondence so these are the kind of pair of data that would not exist in the real world and there are a lot of other applications so let's think back to machine translations so if we want to if I want to translate between Chinese and English or English and Germany that's relatively easy because there are", "start_timestamp": "01:14:02", "end_timestamp": "01:14:42", "start_second": 4442, "end_second": 4482, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=4442s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC 
Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "large demand of those language uses and it makes it economical to annotate a lot of data of basically supervised language sentences pairs but then like it's not economical to do it for the other probably hundreds of languages that exist in the world it just doesn't make sense to annotate that much data and and if we can make this kind of distribution alignment to work without any supervision then it could be used as a lot more it can be used as a way to augment label examples in a kind of semi-supervised way we can be also used", "start_timestamp": "01:14:42", "end_timestamp": "01:15:21", "start_second": 4482, "end_second": 4521, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=4482s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "to do style transfer like some of the things that we have seen that basically had no ground truth in the real world yes yeah well it's just okay so if I have a good translation model between two languages then the value of that model is kind of proportional to the usage that you can get from it let's say to train any pair of languages you need the same amount of investment that's called fifty million dollars probably on the lower side then like if I throw in this fifty million dollars for between Chinese and English", "start_timestamp": "01:15:21", "end_timestamp": "01:16:02", "start_second": 4521, "end_second": 4562, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=4521s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "you probably get a ton of usage and you get ads or like what 
other revenue. But then if I have English to whatever language that probably only a hundred thousand people speak, you get drastically less usage of the model; that means for the same investment you get a lot less out of it. So it's not that they are more expensive to label; it's just that the value doesn't make it justified. Okay, so let's look at this problem again. It would of course be a nice thing to be able to achieve: give me two distributions and", "start_timestamp": "01:16:02", "end_timestamp": "01:16:35", "start_second": 4562, "end_second": 4595, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=4562s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "then find a way to align them. But is this even a feasible problem? If we look at the problem statement, we are basically given two things: we have two random variables a and b, two distributions, and then we get access to samples from them; we get a bunch of samples in one domain, and we also get a bunch of samples from the other domain. That's all great, but what we crucially don't have is any samples of pairs, yet we need to estimate how they are related to each", "start_timestamp": "01:16:35", "end_timestamp": "01:17:10", "start_second": 4595, "end_second": 4630, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=4595s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "other. So from a high level this problem seems pretty hopeless, because you are really given too little information to tackle it. So for what we will look at next, basically the crucial problem now
is: where do we even get any training data? If I don't have any supervised pairs, what do I even train the model on? The way that people have been doing this is that they try to rely on certain invariants that are true for essentially any pair of distributions, and then somehow you can get some meaningful learning", "start_timestamp": "01:17:10", "end_timestamp": "01:18:05", "start_second": 4630, "end_second": 4685, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=4630s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "signal out of it. So the first kind of invariance that we can rely on is something called marginal matching. There's some math here, but the brief idea is really: if I want to translate from one distribution to another, after the translation the distributions should still look like each other. More precisely, what it means is that there is some fundamentally unknown coupling, some fundamentally unknown relationship between these two random variables a and b, that we are trying to", "start_timestamp": "01:18:05", "end_timestamp": "01:18:44", "start_second": 4685, "end_second": 4724, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=4685s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "approximate, but we don't have access to it. So let's call our approximation Q: given b, what a is most likely? So basically we are trying to learn two mappings, or two conditional distributions, and when you specify this kind of conditional distribution Q(a|b), you implicitly induce a marginal distribution. So when
I specify Q(b|a), I am implicitly specifying a marginal distribution on b, and the way that you compute it is: I sample a from", "start_timestamp": "01:18:44", "end_timestamp": "01:19:28", "start_second": 4724, "end_second": 4768, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=4724s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "my one true distribution of a, and then you essentially average out this conditional distribution; that would be the induced marginal distribution of b. Ideally I want my Q to be close to the ground-truth conditional distribution P(b|a), and that means if I sample a lot of a and map them through my conditional distribution, the outcome of that should match the original ground-truth distribution. Similarly I can do that for a: I sample from b, and from these b samples I would calculate my", "start_timestamp": "01:19:28", "end_timestamp": "01:20:16", "start_second": 4768, "end_second": 4816, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=4768s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "approximate conditional distribution, and after those transformations they should be the same as the original marginal distribution of a. Oftentimes in the literature this conditional distribution is just a deterministic mapping, so I would say: give me any sample from a, and I map it to its corresponding sample in domain b. So far so good. So basically we have stated the question to be: in the end we are trying to match this approximate marginal of b to the ground-truth marginal of b, which we have access to. But you are saying", "start_timestamp": "01:20:16",
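The marginal-matching invariant described here, where a deterministic mapping pushes the marginal of a forward and the induced marginal should equal the true marginal of b, can be checked by brute force on a toy categorical example. The probability values below are made up for illustration; because all frequencies are distinct, exactly one bijection survives, as the lecture argues.

```python
# Enumerate all deterministic bijections A -> B and keep the ones whose
# pushforward of P(a) equals P(b): the marginal matching constraint.
from itertools import permutations

p_a = {"a1": 0.5, "a2": 0.3, "a3": 0.2}  # toy marginal over A
p_b = {"b3": 0.5, "b1": 0.3, "b2": 0.2}  # toy marginal over B

def induced_marginal(mapping, p_a):
    """Push P(a) through a deterministic mapping a -> b."""
    out = {b: 0.0 for b in p_b}
    for a, b in mapping.items():
        out[b] += p_a[a]
    return out

admissible = []
for perm in permutations(p_b):
    mapping = dict(zip(p_a, perm))
    if induced_marginal(mapping, p_a) == p_b:
        admissible.append(mapping)

print(admissible)  # → [{'a1': 'b3', 'a2': 'b1', 'a3': 'b2'}]
```

With distinct frequencies the constraint pins down the correspondence uniquely; with ties (for example a uniform distribution), several bijections would pass, which is exactly the ambiguity the lecture turns to next.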
"end_timestamp": "01:21:28", "start_second": 4816, "end_second": 4888, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=4816s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "the new net Q needs to look at a essentially so that is a true statement so if my Q if my Q is so powerful that it could just represent the whole marginal distribution of B so let's say let's call if we have Q of P given a that is equal to Q of B for all a and B pair then your statement would be true basically you can just like approximate the marginal distribution without even doing any meaningful work so that's why like in practice people would have a fairly restrictive mapping so like that's why like in most of the works", "start_timestamp": "01:21:28", "end_timestamp": "01:22:07", "start_second": 4888, "end_second": 4927, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=4888s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "that we would look at like it usually takes the form of a deterministic mapping so when when Q of B given a is deterministic then like you don't you don't get to represent the whole marginal distribution unless P of P itself is only a pawn mass but that is a correct observation but like you said like I suggested like I mean this is a very weak learning signal like if you don't correct you if you don't construct your model in the right way like you you could extract nothing from it so let's see some examples of like how like how", "start_timestamp": "01:22:07", "end_timestamp": "01:22:40", "start_second": 4927, "end_second": 4960, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=4927s", "title": "L9 Semi-Supervised Learning 
and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "this works in an ideal case at all. So let's say I have two distributions a and b, and they are very simple categorical distributions that only have three possible values, a1, a2, a3. I'm just going to draw some frequencies here, and then I'm going to do the same thing for b, so this is a probability mass function. So let's say we have a deterministic mapping; that means each of the a's has to map to some b and each of the b's has to map to some a. Then, based on", "start_timestamp": "01:22:40", "end_timestamp": "01:23:43", "start_second": 4960, "end_second": 5023, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=4960s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id":
"PXOhi6m09bA", "text": "could make the marginal meshing look the reason is that because each value has a distinct frequency here so if you match it in the wrong way the marginal distribution of the induced mapping would no longer be measuring the original one but there are still a lot of ambiguity right so if we imagine a distributions that's like the most kind of the most difficult one let's say I have a uniform distributions over two random variables then this is kind of hopeless because all kind of mapping could work so let's first look at the a", "start_timestamp": "01:24:42", "end_timestamp": "01:25:26", "start_second": 5082, "end_second": 5126, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=5082s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "to be mapping so a one can map to B 1 a 2 map to beat you but then it's also possible that a 1 can be mapped to beat you a to map to b1 and will still be fine form a marginal matching perspective but then the problem is so well then that means this thing is ambiguous and this is not just hopes sorry mainly joyed the other way so there are two set of mappings that we are we need to learn here one is G a B which is mapping from A to B and then another set of things is another set of things that we need to learn is GPA and", "start_timestamp": "01:25:26", "end_timestamp": "01:26:25", "start_second": 5126, "end_second": 5185, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=5126s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "basically you would need to learn the product of these two different possibilities B 1 2 A 1 B 2 2 a 2 B so how many like totally possible solutions are 
there to this problem if we just use marginal matching yeah so basically each direction there are two possibilities and then if you multiply them together there are four possible solutions in this problem and they are basically totally ambiguous so this is one of the thing that we are seeing here is you can have your objective function that induce a really large solution set really in", "start_timestamp": "01:26:25", "end_timestamp": "01:27:23", "start_second": 5185, "end_second": 5243, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=5185s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "this case is almost all of the solution set and then people realized this can potentially be a problem so they introduced another technique to try to at least restrict the solution set a little bit so this thing is oftentimes referred to as cycle consistency but it has also taken a lot of other names in literature called dual in learning back translations and really the core idea is that if I if I take my so basically the whole idea is that my apartment mapping should be similar to the ground truth mapping and if those mappings are", "start_timestamp": "01:27:23", "end_timestamp": "01:28:10", "start_second": 5243, "end_second": 5290, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=5243s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "deterministic then what that means is that if I step through my mapping I should get back my original sample so if we think about the case of P of a be given a given of a would be this would map aid to his correspondence in B and then if you apply that again from the other direction B to a mapping you should 
get back a and this should hold you in both directions so this gives you another invariance so if you say that the relationship between these two distributions are indeed deterministic then I would say these", "start_timestamp": "01:28:10", "end_timestamp": "01:28:49", "start_second": 5290, "end_second": 5329, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=5290s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "things should hold you for all possible pairs and if we step back to you the example that we just look at so if we impose psycho consistent see what would be the number of possible solutions now why is that no longer four yeah so like we can see that in this case the original total solution set is four but after you impose this cycle consistency constraint you can reduce the solution set to exist like some of the some of the mapping are no longer valid so let's say if I pick this and then take this this would be no longer", "start_timestamp": "01:28:49", "end_timestamp": "01:29:37", "start_second": 5329, "end_second": 5377, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=5329s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "valid because an a.1 would get translated into B 1 and then P 1 according to this would get translated into be a 2 so this is G a B this is G B a and this is this no longer satisfy the cycle consistency constraint so that means like I can use this constraint to reduce the possible solution set in my in my search but still like we can see that it's still fundamentally under defined like we are still left with two possible mappings and we are not sure which one is correct but it at least 
exponentially shrinks the space that is", "start_timestamp": "01:29:37", "end_timestamp": "01:30:14", "start_second": 5377, "end_second": 5414, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=5377s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "possible. So far we have seen the two core invariances that people have used, and these are invariances that are true for all alignment problems, so we can use them as learning signals. And again, obviously, as we just saw, even in an extremely low-dimensional, basically categorical example it's still not going to work, so there are definitely problems that this cannot solve. But in practice people can find problems that this kind of search is amenable to, and then you can oftentimes ensure that there are", "start_timestamp": "01:30:14", "end_timestamp": "01:31:00", "start_second": 5414, "end_second": 5460, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=5414s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "no biases in your system by selecting the right architectures, loss functions, etc., and you can actually get to a certain level of success with this. Yeah, so this one is just a generalized version of the cycle consistency thing: for an arbitrary data point a, if I draw samples from my approximate conditional distribution and then I translate that b back to a using my approximate one, the distribution that this induces should be similar to what you get if you do it with the real one, and what", "start_timestamp": "01:31:00", "end_timestamp": "01:32:01", "start_second": 5460,
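The two-by-two counting argument from the lecture (four candidate mapping pairs under marginal matching alone, halved by imposing cycle consistency) can be checked by brute force. This is a small illustrative sketch; the element names a1/a2 and b1/b2 are made up for the example:

```python
from itertools import permutations

A = ["a1", "a2"]
B = ["b1", "b2"]

# Under marginal matching alone, any bijection in each direction is allowed:
# 2 choices for G_ab times 2 choices for G_ba = 4 candidate solution pairs.
candidates = [
    (dict(zip(A, p)), dict(zip(B, q)))
    for p in permutations(B)
    for q in permutations(A)
]
assert len(candidates) == 4

# Cycle consistency keeps only pairs where G_ba(G_ab(a)) == a for every a,
# and symmetrically G_ab(G_ba(b)) == b for every b.
consistent = [
    (g_ab, g_ba)
    for g_ab, g_ba in candidates
    if all(g_ba[g_ab[a]] == a for a in A)
    and all(g_ab[g_ba[b]] == b for b in B)
]
assert len(consistent) == 2  # still ambiguous: two mappings survive
```

As the lecture says, the constraint shrinks the solution set but two equally cycle-consistent mappings remain, so the problem stays underdetermined.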
"end_second": 5521, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=5460s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "we're saying here is just that like when in the case of both of when we say we know the P and the Q are both deterministic then it reduces to the deterministic mapping example but it could exist in a more general form so probably the best-known example that uses those learning signals are cycle again so psycho gains laws essentially consists of two parts one part is this marginal matching so meaning after I translate my data from one domain to another the marginal of them still match with each other and you can kind of see", "start_timestamp": "01:32:01", "end_timestamp": "01:32:49", "start_second": 5521, "end_second": 5569, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=5521s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "it here so essentially my generator no longer takes a mapping from Z to X instead this thing is trying to translate from X to Y and I want to do it in such a way that it looks like my target image so this is just a standard gain training loss where you're trying to say my mapping from X to Y it should look like just like Y so that's fairly straightforward and so but it's actually instead of looking at frequency you use again to help you do the marginal matching and the second dimension of this is you can achieve cycle", "start_timestamp": "01:32:49", "end_timestamp": "01:33:34", "start_second": 5569, "end_second": 5614, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=5569s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC 
Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "consistency by an l1 loss essentially what there's if we unpack this up this objective function is your sample for data in one of your domain probably your source domain and then you map it to you map it to a cycle then it should look like itself in an l1 sense so this is I think what they call forward cycle consistency because it's going from X to Y to X and then they also have a backward one where you essentially kind of think of a sample from oil labels and then you map it through this thing again Y dou X dou Y", "start_timestamp": "01:33:34", "end_timestamp": "01:34:18", "start_second": 5614, "end_second": 5658, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=5614s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "should be similar to yourself in an l1 sense so that's the lost function and then you essentially would combine these two things together and then train it I think in practice they use at least I think the in practice they use at least square again instead of the original gain objective but probably it doesn't make too much difference so they reported a couple numerical results and the first results that they look at is so in the case of going from photo to semantic mask you can actually calculate the accuracy so we can get a", "start_timestamp": "01:34:18", "end_timestamp": "01:35:01", "start_second": 5658, "end_second": 5701, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=5658s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "quantifiable notion of how well the method is doing so the method 
that we just introduced is called CycleGAN, and it's basically unsupervised: you give it a bunch of training images and then a bunch of semantic masks, and then you hope that they somehow align with each other, and what this shows is that they actually do pretty well. pix2pix is a fully supervised model, so the last row means you actually get pairs of images and their corresponding labels, so the last row should be read as basically an upper", "start_timestamp": "01:35:01", "end_timestamp": "01:35:40", "start_second": 5701, "end_second": 5740, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=5701s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "bound on the performance. And then CycleGAN, using no labels at all, can actually do pretty well: you can roughly say that 60% of the pixels are labeled correctly with the right class, even with no information on how they should be related to each other. So there are no pairs: you train the whole system with just a bunch of unordered images and a bunch of unordered masks, and then it's learning to align them. Good question, I don't think", "start_timestamp": "01:35:40", "end_timestamp": "01:36:34", "start_second": 5740, "end_second": 5794, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=5740s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "they have that level of analysis, but I think that would be interesting to see: what are the kinds of things that are easier to align and what are the things that are not. Yeah, so the question is, there was a
lot of inductive bias going from one image to another using a conv net, and then also using a certain discriminator that operates on a patch basis, so you do a kind of domain alignment patch-wise; you can think of it as there being a lot of training signal that is not", "start_timestamp": "01:36:34", "end_timestamp": "01:37:15", "start_second": 5794, "end_second": 5835, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=5794s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "captured by the loss function at all. So unfortunately we don't know the answer to that. I guess what I know for sure is that if you just scramble the image, I mean just permute the dimensions in your image tensor, then I'm pretty sure you would fail. But does that mean this is not useful? Probably not. It still means that we don't fully understand what the inductive biases are that are helping us, but that's a good question. Right, so I guess the comment is around how a lot of these", "start_timestamp": "01:37:15", "end_timestamp": "01:38:12", "start_second": 5835, "end_second": 5892, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=5835s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "translation problems operate in a very local manner: you're kind of saying I just need to change my local pixels, like when you go from zebras to horses you're just changing a local texture, as opposed to something that is global, which is presumably much harder. I think that's likely the case, but I have no idea; you can ask Alyosha, who will be here soon. So
I think this is a long way from supervised learning; with supervised learning, this thing should be, I don't know, but", "start_timestamp": "01:38:12", "end_timestamp": "01:38:55", "start_second": 5892, "end_second": 5935, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=5892s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "I imagine this to be at least 95 percent plus. But again, this is not the correct comparison; I think the correct comparison is to compare CycleGAN with pix2pix, because they use similar architectures except one is supervised and the other is unsupervised. All right, so they have some ablations in terms of loss function, though I don't know about architecture. The ablations basically tell you what we would sort of expect: for one, if you use the GAN alone, this means you just do marginal matching", "start_timestamp": "01:38:55", "end_timestamp": "01:39:43", "start_second": 5935, "end_second": 5983, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=5935s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "which is actually not bad already, and then you can see that if you add cycle consistency in there it helps. And there's something that's really puzzling, I'm really confused by what is happening here: it just kills everything, and I have no idea what is happening there. And what's also interesting is that it doesn't always help. What we're looking at here is going from photos to labels, and then they have another experiment that is going from labels to photos, so this is a much higher entropy", "start_timestamp": "01:39:43",
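The CycleGAN objective discussed above (a GAN term for marginal matching plus forward and backward L1 cycle-consistency terms, weighted by a lambda of around 10 in the paper) can be sketched numerically. The toy linear generators and the always-fooled discriminator below are invented stand-ins for the learned networks, not the paper's implementation:

```python
import numpy as np

def l1(x, y):
    # Mean absolute error, the cycle-consistency penalty.
    return np.abs(x - y).mean()

def lsgan_g_loss(d_scores_on_fake):
    # Least-squares GAN generator term: push D's scores on fakes toward 1.
    return ((d_scores_on_fake - 1.0) ** 2).mean()

def cyclegan_losses(a, b, G_ab, G_ba, D_b_score, lam=10.0):
    fake_b = G_ab(a)
    # Marginal matching: translated samples should fool the B-domain critic.
    adv = lsgan_g_loss(D_b_score(fake_b))
    # Forward cycle a -> b -> a and backward cycle b -> a -> b, both in L1.
    cyc = l1(G_ba(fake_b), a) + l1(G_ab(G_ba(b)), b)
    return adv + lam * cyc

# Toy setup: G_ba perfectly inverts G_ab, so the cycle terms vanish,
# and the critic is already fooled, so the adversarial term is zero too.
G_ab = lambda x: 2.0 * x
G_ba = lambda y: 0.5 * y
D_b = lambda y: np.ones_like(y)
a = np.array([1.0, -1.0])
b = np.array([2.0, 4.0])
assert cyclegan_losses(a, b, G_ab, G_ba, D_b) == 0.0
```

Swapping in a generator that does not invert cleanly (say, the identity for G_ab) makes the cycle term, and hence the total, strictly positive, which is the pressure the lecture describes.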
"end_timestamp": "01:40:24", "start_second": 5983, "end_second": 6024, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=5983s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "mapping whereas they still use just deterministic mapping so you can imagine some there might be something that is playing with that in here that I guess we don't we don't fully understand what's interesting here though is the evaluation map metric is pretty interesting so remember here we are evaluating it from label to photos so basically is give you a semantic mask how well you can generate the scene but then how do you even evaluate that so they actually have a pretty clever way of evaluating that so what they would do", "start_timestamp": "01:40:24", "end_timestamp": "01:41:02", "start_second": 6024, "end_second": 6062, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=6024s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "is they would run another pre-trained semantic segmentation now walk fully convolutional network they will run it on the generated image and then they use step to quantify the results so that is a pretty interesting trick to evaluate this mapping kind of like the inception school except like in this case like we you kind of yeah it's kind of like Inception school but I think it's better than inception in school in this restricted domain these are some of the other codes the first cases where you translate from I guess a schematic", "start_timestamp": "01:41:02", "end_timestamp": "01:41:44", "start_second": 6062, "end_second": 6104, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=6062s", "title": "L9 Semi-Supervised Learning 
and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "annotation of a facade, going from edges to shoes, going from shoes to edges; most of them make sense. They applied this to a wide variety of different problems where it's just impossible to get labeled pairs, like summer Yosemite and winter Yosemite, where you can get pairs although not exactly the same, and translating apples to oranges. Just like we said, this is not supervised and it's not fully unique. They have their set of failures, and in this one what the", "start_timestamp": "01:41:44", "end_timestamp": "01:42:32", "start_second": 6104, "end_second": 6152, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=6104s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "authors explain in the paper is that when you train on horses from ImageNet, the model has not seen a human riding on one, and as such it would just classify similar texture as horse and then translate that. So this, I guess, goes back to one of the questions someone mentioned, what are the failure cases; I think this is one good example of what it fails on. I think this is a good example", "start_timestamp": "01:42:32", "end_timestamp": "01:43:07", "start_second": 6152, "end_second": 6187, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=6152s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id":
"PXOhi6m09bA", "text": "of what the model is doing is it's trying to find like yellowish pattern and then change that yellowish pattern to stripes so that's apparently what the model is doing so that's that for cycle gain so essentially it's pretty surprising that it can work on certain domains and when it works I think it's very reasonable so the next thing that we would look at is we look at improving cycle gain in certain dimensions so the crucial dimension that we would look at here is that remember when we talked about the cycle again the cycle", "start_timestamp": "01:43:07", "end_timestamp": "01:43:50", "start_second": 6187, "end_second": 6230, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=6187s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "again has this deterministic mapping so you give you an image X which translated into domain Y and but this translation is deterministic but that is fundamentally not correct at least for a lot of the alignment problems that we care about so let's say if I want you go from mask to image semantic mask to image like there was a lot of different ways to satisfy the same semantic mask there are a lot of different ways to generate that image like semantic mask only tells you there's a car here but what does the car look like what's", "start_timestamp": "01:43:50", "end_timestamp": "01:44:28", "start_second": 6230, "end_second": 6268, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=6230s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "inside the car what color it is like it's simply specifying none of those so basically there's this high entropy mapping going from semantic mask 
to image, and that is apparently not deterministic. So you can say: CycleGAN is essentially this, right, you take in an image a and then you're trying to map it to an image b, so one straightforward way to extend that would be to make this mapping take in an additional noise source, just like in a typical GAN. So you", "start_timestamp": "01:44:28", "end_timestamp": "01:45:06", "start_second": 6268, "end_second": 6306, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=6268s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "could take in an image a, and you could also take in a noise source that describes, say, what the car looks like, what the color is, everything other than the contour, and then from that noise source you can map to some image b, and if you sample different z's hopefully you get different cars. Does that motivation make sense? So that's all good, and in fact this has been done: concurrently with CycleGAN there's another paper, called DualGAN, where this is essentially the architecture: the", "start_timestamp": "01:45:06", "end_timestamp": "01:45:41", "start_second": 6306, "end_second": 6341, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=6306s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "mapping would take in both an image from the source domain as well as a random noise source. However, it is not enough to just change your architecture, because even if you change your architecture the noise is doomed to be ignored, and the reason is essentially our loss function:
the L1 cycle-consistency loss requires the following: if I map my a with a certain z, this produces some kind of b for me, and then if I map my b back with another z prime I should get back to a, and we can see", "start_timestamp": "01:45:41", "end_timestamp": "01:46:25", "start_second": 6341, "end_second": 6385, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=6341s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "that in this whole mapping the choice of z and z prime is essentially ignored: you can choose a different z or z prime and you still need to satisfy this mapping. What that means is that the noise source is necessarily ignored when you impose a cycle consistency loss and optimize it to a fixed point. So that's not good, and then there was this Augmented CycleGAN paper that proposed a way to solve it: you would add the noise to your architecture, but you would also learn, instead of only learning the mapping from A to B", "start_timestamp": "01:46:25", "end_timestamp": "01:47:11", "start_second": 6385, "end_second": 6431, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=6385s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "and B to A, an encoder for each of the noise sources, similar to how an encoder is used in a variational method. And it's actually pretty interesting. The way it goes is: I have some ground-truth image a, and what I'm going to say is that my ground-truth image a comes from a corresponding b and its corresponding noise source z_a, and then I would have this blue arrow, which is a network that infers what the z is; so basically I'm
trying to infer what z produced my a, and I'm going to", "start_timestamp": "01:47:11", "end_timestamp": "01:47:59", "start_second": 6431, "end_second": 6479, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=6431s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "infer what b produced my a. So instead of only inferring my corresponding b, I infer that as well as the noise source that produced me. Now, with both the noise source and the corresponding b, I can map to an a prime using this arrow, which is the mapping coming back from B to A, and in the end I can say that a and a prime should be similar in an L1 loss sense. So now it's okay, because I'm choosing a specific z for each particular data point. If we think about it from an information-theoretic sense, whatever", "start_timestamp": "01:47:59", "end_timestamp": "01:48:46", "start_second": 6479, "end_second": 6526, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=6479s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "information is not captured in b, you push into z; that allows you to perfectly reconstruct the original image as well as maintain the ability to have diversity of mappings, because different outputs come from different z's. Yes, so the question is how do you prevent the model from putting everything into z? You could, but it might fail the marginal matching criterion. So I guess the statement is that the A-and-B relationship could become decoupled: from a I would just match an arbitrary b that actually has", "start_timestamp": "01:48:46", "end_timestamp": "01:49:37", "start_second": 6526, "end_second": 6577,
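One pass of the augmented cycle just described, inferring both the corresponding b and the noise z_a that explains a, then reconstructing a from the pair, can be sketched as follows. The maps here are invented stand-ins for the learned networks: domain A holds (shape, color) pairs, domain B keeps only the shape, so z must carry the color:

```python
import numpy as np

# Hypothetical stand-ins for the learned networks in an augmented cycle.
def G_ab(a):
    # A -> B drops the high-entropy part (the "color"), like photo -> mask.
    return a[:1]

def E_a(a, b):
    # Encoder: infer the z_a that explains a, given its translation b.
    return a[1:]

def G_ba(b, z_a):
    # B -> A needs the noise source to restore what b does not carry.
    return np.concatenate([b, z_a])

a = np.array([0.3, 0.9])          # shape 0.3, color 0.9
b = G_ab(a)
z_a = E_a(a, b)
a_prime = G_ba(b, z_a)
assert np.abs(a_prime - a).mean() == 0.0   # cycle loss is exactly zero

# ...yet sampling a different z from the same b still yields a different,
# equally shape-consistent A-side sample, so diversity is preserved.
other = G_ba(b, np.array([0.1]))
assert other[0] == a[0] and other[1] != a[1]
```

The point mirrors the lecture's information-theoretic reading: whatever b does not capture is pushed into z, so reconstruction can be perfect without forcing the B-to-A mapping to be deterministic.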
"url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=6526s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "no corresponding with a from the beginning but then like remember that is always the problem like even with the original cycle again you could still produce an arbitrary mapping that this consistent but it's not the ground truth mapping so this I guess what I'm saying is this doesn't make it worse yeah so we can say it again so I was saying like basically you can play this multiple steps and then like the evolution of them should still match the original marginal distribution yeah so many ways that you can play with this", "start_timestamp": "01:49:37", "end_timestamp": "01:51:00", "start_second": 6577, "end_second": 6660, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=6577s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "[Music] yeah you have Ken loss on B and then you would I think you also have Ken loss on Z which like Z you restricted to piece the marginal of that you restrict it to be some Gaussian or something so in a sense you cannot put infinite information in there so both of them are in a sense information regular lies it's it's really more like an adversarial autoencoder which we didn't cover in a lecture so it's kind of like a VA yi but instead of like a care loss you use again loss so it is more it is but like a it is very much like a Nathan Coe", "start_timestamp": "01:51:00", "end_timestamp": "01:51:45", "start_second": 6660, "end_second": 6705, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=6660s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC 
Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "model that is training with again it could it wouldn't like that applies to everything that we would go over today there are holes in all of them oh you mean why they don't use a Vee I think it's probably the mapping from A to B that he wouldn't do well like you wouldn't do it in like a visually appealing way otherwise I think for Z they could actually use a VAD type of loss but it would not be a VAD because there's still this thing that is [Laughter] yes no it's not it's not that fair comparison I would say though like if", "start_timestamp": "01:51:45", "end_timestamp": "01:53:22", "start_second": 6705, "end_second": 6802, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=6705s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "your data set is small like Dan training is usually pretty fast as well anyways but that's off topic so just like operationally what does that mean if we go through like one cycle of that well cycle consistency loss so we get some image from some from a source domain and then we would randomly sample AZ for my mapping to be because remember that the mapping from A to B is also stochastic so B could take up a lot of different forms and I'm going to generate a noise source that dictate what it is so this is what I'm going to samples and then", "start_timestamp": "01:53:22", "end_timestamp": "01:54:05", "start_second": 6802, "end_second": 6845, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=6802s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "there will be a set of mappings that go 
through. So the mapping from A to B now takes a and a noise source for b; that gives me b, and then from this b and a I can try to guess what was the z that generated my original a, so that is my encoder guessing the z of a. And then finally I plug in the b that was generated and the z that was inferred, and from these two I get back this a prime, which supposedly should be close to my original sample. So that's all good, and it's fairly easy to implement,", "start_timestamp": "01:54:05", "end_timestamp": "01:54:50", "start_second": 6845, "end_second": 6890, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=6845s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "just a small surgery on CycleGAN. How does it do? The first thing you would want to run is to simply give z as the additional input to the mapping, which they call stochastic CycleGAN, I guess; that is without changing the loss function, without introducing the encoder, and it's this column that we are looking at. And then the test here is: we sample a data point from my ground-truth data here, and then I feed it through my CycleGAN but with different z terms. So this is", "start_timestamp": "01:54:50", "end_timestamp": "01:55:36", "start_second": 6890, "end_second": 6936, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=6890s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "
like it would just ignore it but what we are seeing here is actually different right so you give it an H mask and it actually generate diverse samples for you so that's interesting like if we look at like this shoe like apparently there are all different colors and even though we do the Augmented one like using the new", "start_timestamp": "01:55:36", "end_timestamp": "01:56:17", "start_second": 6936, "end_second": 6977, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=6936s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "loss function and the encoder I would say they look probably about the same the same kind of diversity but it is kind of interesting like why why does why does cycle again walk especially like this is highly contrasting with the analysis that we just went through so if I especially if I take a black shoe and then map it to a semantic mask and then map it back then if I get a Y shoe which is a point to here is something that could happen if I get a y shoes back I'm going to incur huge l1 loss because black and white are", "start_timestamp": "01:56:17", "end_timestamp": "01:56:54", "start_second": 6977, "end_second": 7014, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=6977s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "just to end of the spectrum in your color space so so that is somewhat puzzling like what what is this actually doing like if if it can generate this diverse samples then that means it's not optimizing its cycle gain loss well but it is optimizing its cycle gain loss as well so the very interesting thing here is that cycle again when you go from a high dimension like a high entropy 
domain like RGB images to a low-entropy one like a semantic mask, can actually hide information in some kind of high-frequency pattern, and this is what we are", "start_timestamp": "01:56:54", "end_timestamp": "01:57:33", "start_second": 7014, "end_second": 7053, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=7014s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "seeing here. Going back to the black shoe example: when you map from a black shoe to its edge pattern, it gives you seemingly plausible edge patterns and then adds some high-frequency noise to them that notes that this is coming from a black shoe, and then you can look at the rough shape of that pattern and also read the high-frequency noise that's encoded in there to say, oh, this should be black, and that's how it manages to still do the cycle consistency right. So I", "start_timestamp": "01:57:33", "end_timestamp": "01:58:10", "start_second": 7053, "end_second": 7090, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=7053s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "consistently get the same color back by hiding imperceptible information in my mask, on my edges, and that's pretty interesting. And the way you can show that it's doing that is by constructing an experiment where: this is domain A, this is domain B, you get the b out, then you fix that b and try to sample different z's. If this is coming from a CycleGAN loss, then you will see that the mask itself, even though seemingly it doesn't encode",
"start_timestamp": "01:58:10", "end_timestamp": "01:58:56", "start_second": 7090, "end_second": 7136, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=7090s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "color information it's implicitly encoding color information so if I take this mask that's coming from my model I will have color information hidden in it in such a way that when I sample different Z's you always get the same output and that's how it is able to still satisfy the cycle consistency constraint and then if you do the augmented CycleGAN loss what you will see is that the masks look basically the same but there is seemingly less information in there so when you sample different random noise or Z you actually get", "start_timestamp": "01:58:56", "end_timestamp": "01:59:34", "start_second": 7136, "end_second": 7174, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=7136s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "different colors of shoes back I don't remember that's good to check but I guess as we have discussed if Z is very powerful it could potentially encode too much so I think there would be a balance there and this is kind of doing the cycle walk which is somewhat interesting somewhat similar to what we had discussed so maybe from A to B and then B to A A to B while you cycle through different noise sources and then if you do this kind of random walk in an augmented CycleGAN", "start_timestamp": "01:59:34", "end_timestamp": "02:00:20", "start_second": 7174, "end_second": 7220, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=7174s", "title": "L9
Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "you can see that even though the mask stays relatively the same the overall appearance some of the color and texture does change over time whereas if you train it with the original CycleGAN loss you will just get the same pairs repeated again and again so that's a somewhat interesting and in my opinion relatively simple and elegant extension to CycleGAN that helps you deal with stochastic mappings any questions on that before we move on so the next set of questions are can we do better so far we have covered two", "start_timestamp": "02:00:20", "end_timestamp": "02:01:04", "start_second": 7220, "end_second": 7264, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=7220s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "learning principles one is marginal matching and the other is cycle consistency and I guess it's a good question whether those are all of the invariances that we can rely on or whether there are additional learning signals that we can derive it's a good open problem and if we step back and think about this whole problem it's really aligning two distributions without knowing what's inside which is really difficult if we think about a categorical distribution that has uniform probabilities it's just impossible", "start_timestamp": "02:01:04", "end_timestamp": "02:01:46", "start_second": 7264, "end_second": 7306, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=7264s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"}
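The black-shoe argument in the lecture above, that black and white sit at opposite ends of the color space and so a wrong-color round trip is maximally penalized, can be made concrete with the L1 cycle-consistency loss. This is a minimal numpy sketch on toy arrays, not the lecture's actual code; the 4x4 "shoes" are illustrative:

```python
import numpy as np

def cycle_consistency_l1(x, x_roundtrip):
    # L1 cycle-consistency loss: mean absolute error between an input
    # and its round-trip reconstruction F(G(x)).
    return float(np.mean(np.abs(x - x_roundtrip)))

# Toy 4x4 grayscale "shoes": 0.0 = black, 1.0 = white.
black_shoe = np.zeros((4, 4))
white_shoe = np.ones((4, 4))

# A faithful round trip costs nothing, while coming back with a white
# shoe from a black one pays the maximum possible L1 penalty -- the
# pressure that pushes the generator to hide color information inside
# the intermediate mask instead.
print(cycle_consistency_l1(black_shoe, black_shoe))  # 0.0
print(cycle_consistency_l1(black_shoe, white_shoe))  # 1.0
```

The second call is exactly the black-in, white-out scenario the lecture describes: every pixel contributes the maximum per-pixel error, so the generator is better off smuggling the color bit through the mask.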
{"video_id": "PXOhi6m09bA", "text": "to align them because we treat them as pure black boxes like all values are the same as each other so one idea that we can move forward from this point is we can look inside a random variable we can say this image is not just a huge high-dimensional random variable to me I can actually look inside and see what's in there and then maybe use that to help us and for these kinds of high-dimensional A and B they typically have certain structures in them that could be leveraged and as people have pointed out", "start_timestamp": "02:01:46", "end_timestamp": "02:02:25", "start_second": 7306, "end_second": 7345, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=7306s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "like when you use a convnet and patch-based discriminator in a CycleGAN they're kind of implicitly employing some of this inductive bias already but I think there are cases where we can push this even further so the best example that I could find is in NLP so let's say domain A is all English sentences and domain B all French sentences then we can imagine that I can get a random sentence from all English sentences and a random sentence from all French sentences they might have the same empirical frequency but", "start_timestamp": "02:02:25", "end_timestamp": "02:03:05", "start_second": 7345, "end_second": 7385, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=7345s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "they might be totally semantically unrelated which is likely to happen and cycle consistency would not rule that out
either this is just basically going back to the problem that when you have distributions with uniform densities nothing can help but what we do know is that each sentence is made up of words and it's very unlikely that those two totally semantically unrelated sentences would have words that have the same kind of statistics I'm using the term statistics loosely here I'm going to say more about what we", "start_timestamp": "02:03:05", "end_timestamp": "02:03:49", "start_second": 7385, "end_second": 7429, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=7385s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "can do with this so basically the exercise that we have gone through is instead of thinking of it as distribution alignment between sentences in different languages if we are allowed to look inside each random variable look at its sub-components and do some inference on the sub-components that can help us circumvent the problem of not enough learning signal so in the case of NLP the sub-components are the words and the larger higher-dimensional random variables are sentences or", "start_timestamp": "02:03:49", "end_timestamp": "02:04:23", "start_second": 7429, "end_second": 7463, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=7429s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "paragraphs and so that's interesting so now what we can do is first of all align the words like we can think of ways to do distribution alignment on words and even more we can make use of how different words occur together so
let's say the word I is most likely to be followed by am because these two things co-occur most frequently within this large random variable a sentence so what is the thing that lets us make use of this kind of co-occurrence statistics of sub", "start_timestamp": "02:04:23", "end_timestamp": "02:05:03", "start_second": 7463, "end_second": 7503, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=7463s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "components well we have learned one of them word2vec probably two lectures ago so just a recap on skip-gram word2vec it's really simple basically all it's trying to do is say given one word in a sentence the other words in this sentence are more likely to occur than other things in my corpus and in practice you wouldn't sum over all of your dictionary you would do some negative sampling to optimize this but essentially in the end you get word vectors such that", "start_timestamp": "02:05:03", "end_timestamp": "02:05:42", "start_second": 7503, "end_second": 7542, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=7503s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "if two vectors are close together in that vector space then they're more likely to co-occur in a sentence and if you train a very large model on a lot of text data then they capture how different words are likely to occur together so that's skip-gram and what's really interesting is that this kind of word2vec method exhibits really interesting vector calculus so this is again a recap slide what we can look at is that if we look at the
direction from a country to its capital the vector is actually relatively similar across a lot", "start_timestamp": "02:05:42", "end_timestamp": "02:06:26", "start_second": 7542, "end_second": 7586, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=7542s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "of these different pairs and we might be able to ask based on this kind of vector if the vector calculus makes sense does it mean that we can say the vector representations of those words are distributed in a certain manner and more importantly if similar vector calculus holds true for all languages meaning I train a word embedding for English let's say on the left and I also train an embedding for Italian if they exhibit the same vector calculus meaning all the different", "start_timestamp": "02:06:26", "end_timestamp": "02:07:07", "start_second": 7586, "end_second": 7627, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=7586s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "embeddings in them are placed together in such a way that you could do the kind of country-to-capital translation then that's a really strong inductive bias for us and if that holds true then we can possibly align words by just uncovering some kind of affine or linear transformation that aligns these two things together so very surprisingly it's actually true for the word2vec that we use let's say in fastText if you train it on multiple languages then these embedding spaces are only a", "start_timestamp": "02:07:07", "end_timestamp": "02:07:53", "start_second":
7627, "end_second": 7673, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=7627s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "rotation away so you're going to learn a rotation matrix that rotates another language into your space and then the result would be this is a graphic grabbed from a Facebook blog post that illustrated it really well you basically get these embedding spaces that exhibit similar relative structure but the absolute location is undefined so what you can do is you can just learn a rotation to align them together and then after the rotation one point in the embedding space would be very likely to", "start_timestamp": "02:07:53", "end_timestamp": "02:08:31", "start_second": 7673, "end_second": 7711, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=7673s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "represent the same word but in different languages so this totally blew my mind that this could work so initially with the citations here you can basically use a small dictionary like a language-to-language dictionary to learn the alignment so it was still supervised but the search space is much smaller like instead of going from each word mapped through a neural net and out comes another word every word is already represented by some embedding now", "start_timestamp": "02:08:31", "end_timestamp": "02:09:08", "start_second": 7711, "end_second": 7748, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=7711s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment --
CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "I'm only learning the rotation that is used across all embeddings but then like so basically a couple data points is enough to to specify that yes I don't know probably two mm I would I would guess like I've been totally uneducated guess I like I'm not gonna NLP cousin it could be pretty big yeah so that's pretty interesting so that's what happened up until like fari before 2017 ish is people can like you can align these two embedding spy is basically examples like I can just go to you like French English dictionary and", "start_timestamp": "02:09:08", "end_timestamp": "02:09:52", "start_second": 7748, "end_second": 7792, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=7748s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "then look up a couple Wars and then you use that to to align these two embeddings that's really cool then you can do that yes what apparently they are scaled the same or less similar enough [Laughter] this recent paper that proposed a way that you can oh actually another thing that I forgot to mention is so no actually this is it so basically that's a supervised way to align this wording batting like so that's that's really interesting that you can just capture that by a simple rotation well probably not simple but a rotation and what this", "start_timestamp": "02:09:52", "end_timestamp": "02:10:52", "start_second": 7792, "end_second": 7852, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=7792s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "work has done is to show 
that you can actually do that with really good performance in an unsupervised way so now I have two embedding spaces and I am aligning them without any training signal the way that is done is basically just using the principle of marginal matching each possible rotation basically specifies a mapping and then you're going to say that after the mapping my marginal distributions should match and then they just train that with", "start_timestamp": "02:10:52", "end_timestamp": "02:11:33", "start_second": 7852, "end_second": 7893, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=7852s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "adversarial training so again a loss to make sure that the marginals are matched and after they do that one of the issues possibly due to GAN training is that the result is usually not very robust and high-precision so after they do that they have found a rotation that roughly aligns the two distributions and then they would select some top pairs of high-frequency words in that rough alignment and then they would assume that those are actually ground-truth alignments and then you would", "start_timestamp": "02:11:33", "end_timestamp": "02:12:12", "start_second": 7893, "end_second": 7932, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=7893s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "use those as actual pairs to solve for the exact rotation and apparently this works really well so that's some of the data that you get from it there are some additional tricks in
terms of embedding nearest neighbors that I didn't go into but these are the results that they have and they're comparing with cross-lingual supervision and without any supervision which is their own method and really surprisingly they could get competitive performance with methods using ground-truth data of actual pairs so this again this is not as complex", "start_timestamp": "02:12:12", "end_timestamp": "02:12:58", "start_second": 7932, "end_second": 7978, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=7932s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "as translating whole sentences this is only translating words but I still see this as very impressive that this can work at all and become very competitive with supervised methods so the next part of this is again another paper from Facebook now you can actually leverage all three of the core principles that we have covered so far they use word-level alignment meaning they started from what we just looked at the unsupervised word-level alignment so this is you're not", "start_timestamp": "02:12:58", "end_timestamp": "02:13:37", "start_second": 7978, "end_second": 8017, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=7978s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "just looking at the sentence level you're looking inside the sentence at sub-component-level statistics and then they also use monolingual language models to make sure what you translate actually looks like a real sentence so basically you can see that as marginal matching and then they also have this thing called back-translation which is
another variant of cycle consistency so you translate from English to French and then French back to English and you should get back the same sentence and this is a paper that essentially", "start_timestamp": "02:13:37", "end_timestamp": "02:14:11", "start_second": 8017, "end_second": 8051, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=8017s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "utilized all three of these methods and from there I think they get state-of-the-art unsupervised machine translation results I don't remember the precise numbers but they were widely beyond the previous state of the art and these are some of the ablations showing how many training sentences a supervised system would need in order to surpass this kind of system so you can see that you would need probably somewhere around half a million data points in order for it to surpass the flat line which is the unsupervised machine", "start_timestamp": "02:14:11", "end_timestamp": "02:15:01", "start_second": 8051, "end_second": 8101, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=8051s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "PXOhi6m09bA", "text": "translation results so that's that for that method and again none of the kinds of principles that we have talked about so far are bulletproof but I think what's really interesting is you can seemingly extract training signal out of nowhere by just carefully considering what are the invariances that you can exploit and especially in NLP really thinking about it not at a random variable level but really looking inside each random variable and
what are some additional co-occurrence statistics that", "start_timestamp": "02:15:01", "end_timestamp": "02:15:43", "start_second": 8101, "end_second": 8143, "url": "https://www.youtube.com/watch?v=PXOhi6m09bA&t=8101s", "title": "L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley", "thumbnail": "https://i.ytimg.com/vi/PXOhi6m09bA/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": ">> I'm going to talk about AI for Large Imperfect-Information Games, in particular, on how we made an AI that beat top humans in no-limit poker. Okay. So, for starters, this talk is going to be about imperfect-information games in general. I'm not going to talk about perfect-information games like chess or Go. It will be applicable to poker, but also more generally, any strategic interaction that involves hidden information, for example, security interactions or negotiations. I think this is really important for bringing AI into the real world,", "start_timestamp": "00:00:00", "end_timestamp": "00:00:36", "start_second": 0, "end_second": 36, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=0s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "because the truth is most real-world strategic interactions involve some amount of hidden information. So, when it comes to these games, poker has served as the primary benchmark challenge going back decades. In fact, if you look at the original papers on game theory, pretty much the only application they talk about is poker, because it so accurately captures the challenge of hidden information.
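Returning to the rotation-based word-embedding alignment from the lecture segment above: once a few anchor word pairs are fixed, the best orthogonal map has a closed form, the orthogonal Procrustes solution. Below is a minimal numpy sketch on synthetic data (the dimensions, the hidden rotation, and the function name are illustrative assumptions; the fully unsupervised pipeline discussed in the lecture bootstraps the anchor pairs adversarially rather than being given them):

```python
import numpy as np

def procrustes_rotation(X, Y):
    # Solve min_W ||W X - Y||_F over orthogonal W (orthogonal Procrustes).
    # Columns of X are source-language embeddings, columns of Y their
    # counterparts in the target space; closed form W = U V^T, where
    # U S V^T is the SVD of Y X^T.
    U, _, Vt = np.linalg.svd(Y @ X.T)
    return U @ Vt

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 50))             # 50 anchor "words", 3-d embeddings
theta = 0.7                                   # hidden ground-truth rotation angle
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
Y = R @ X                                     # the same words in the target space

W = procrustes_rotation(X, Y)
print(np.allclose(W @ X, Y))  # True: the hidden rotation is recovered
```

With noiseless, full-rank anchors the recovery is exact; with real embeddings the same formula gives the least-squares-optimal rotation over the chosen anchor pairs.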
Particularly, there's a variant of poker called heads-up no-limit Texas hold'em that has emerged as the primary benchmark for these games.", "start_timestamp": "00:00:36", "end_timestamp": "00:01:05", "start_second": 36, "end_second": 65, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=36s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "Heads-up no-limit Texas hold'em is a massive game. It has about 10 to the 161 different decision points. It is also the most popular variant of poker in the world. For example, no-limit Texas hold'em is the game that is played at The World Series of Poker main event. Every year the winner is determined by heads-up no-limit Texas hold'em. It's also featured in popular movies about poker, for example, Casino Royale and Rounders. In some ways, you could argue it's the purest form of poker. It's subjective, but it is a very strategic game,", "start_timestamp": "00:01:05", "end_timestamp": "00:01:39", "start_second": 65, "end_second": 99, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=65s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "whether you win or lose, it's entirely up to your skill. It's not up to the other players at the table, except for your own opponent, I guess. So, there's no kingmaker effect, for example, and no prior AI had been able to beat top humans in this game. That is, until 2017. So, in 2017, we organized something called the Brains vs AI Challenge. We created an AI called Libratus, which we played against four of the world's best heads-up no-limit Texas hold'em specialists in the world. These are all people that make about seven figures per
These are all people that make about seven figures per", "start_timestamp": "00:01:39", "end_timestamp": "00:02:07", "start_second": 99, "end_second": 127, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=99s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "year playing this game online. As we played 120,000 hands of poker over the course of 20 days, and there was a $200,000 prize pool divided among the pros to incentivize them to play their best. So, they weren't risking money, but how much money they want, depended on how well they did relative to the other players. So, obviously, if you're familiar with poker, you might not have heard of these pros. So, I wanted to say a word about how strong these pros are, because it really is important to play against the top players.", "start_timestamp": "00:02:07", "end_timestamp": "00:02:39", "start_second": 127, "end_second": 159, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=127s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "Unfortunately, there are no objective rankings of professional poker players. But like I said, these are all players that make millions of dollars a year. In fact, here's a question from the poker subreddit, where somebody was asking, ''How good are these players that we were playing against?'' Somebody responded, ''These players will absolutely trounce all the 2,000 heroes that you might have heard of. The heroes from 2000s would be division three college players. 
Well, whereas these guys are all-star caliber pros.''", "start_timestamp": "00:02:39", "end_timestamp": "00:03:05", "start_second": 159, "end_second": 185, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=159s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "So, this is a pretty accurate description I would say. There is a big skill difference between the pros that you see on ESPN and these guys who actually play this game for a living. The guys you see on ESPN are basically celebrities. These guys are the guys that actually make a living playing this game. The final result is that Libratus beat the humans in this game by a lot. The victory margin was 147 mbb/game, which is a measurement of win rate in poker, which, unless you are an actual poker player, doesn't mean much,", "start_timestamp": "00:03:05", "end_timestamp": "00:03:36", "start_second": 185, "end_second": 216, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=185s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "but to give you some perspective, this is about three times the win rate of a top pro versus an average pro. It was statistically significant at about four standard deviations, and each human lost individually to the AI. This was a big surprise to everybody. In fact, when we announced the competition, there was a betting market on the outcome, because it's the poker world, and they obviously like to gamble on these things.
When we first announced that we're going to do this competition, the betting odds were four to one against us.", "start_timestamp": "00:03:36", "end_timestamp": "00:04:04", "start_second": 216, "end_second": 244, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=216s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "In fact, even after we won on the first day, the betting odds were still two to one against us. I think it was until the third day that the betting odds were even, and by the eighth day, you couldn't even bet on the outcome of the competition anymore. You could just bet on how much each human would lose on each individual day, because it was clear at that point that this AI was going to win. In fact, even if you asked us, we were not very confident that we would win. I put our odds at about like 60 percent,", "start_timestamp": "00:04:04", "end_timestamp": "00:04:29", "start_second": 244, "end_second": 269, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=244s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "maybe 65, but I didn't think we would have a victory like this. Actually, after this competition, we did another competition against these Chinese pros. So, basically, somebody called Kai-Fu Lee in China called us and he said, ''We would like you to do another competition in China against Chinese players. We will broadcast it, it would be a lot of fun.'' We were like, ''Well, why should we do this? Because we just played against the top humans.
These Chinese players are not as good.'' He said that he would pay us.", "start_timestamp": "00:04:29", "end_timestamp": "00:04:56", "start_second": 269, "end_second": 296, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=269s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "So, we said, ''Okay, great.'' So, we played 36,000 hands against six Chinese players. We beat them by even more than we beat the top humans in America. That was actually a huge hit in China. It was watched live by millions of people during that competition. They had really nice production where you could see a poster like this. It was way better than what we did in America. All right. So, why are imperfect-information games so hard? After all, we have AIs that can beat humans in games like chess, we have AIs that beat humans in Go.", "start_timestamp": "00:04:56", "end_timestamp": "00:05:28", "start_second": 296, "end_second": 328, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=296s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "In fact, you might have heard recently about AlphaZero, which can beat humans. Well, it's essentially superhuman in chess, Go, and shogi, all using the same algorithm. So, what is it about imperfect-information games that is so difficult? One of the major challenges, not the only one, but one of the major ones, is that in an imperfect-information game, the optimal strategy for a subgame, for part of the game, cannot be determined in isolation. It cannot be determined using information in just that subgame alone.", "start_timestamp": "00:05:28", "end_timestamp": "00:05:54", "start_second": 328, "end_second": 354, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=328s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "
It cannot be determined using information in just that subgame alone.", "start_timestamp": "00:05:28", "end_timestamp": "00:05:54", "start_second": 328, "end_second": 354, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=328s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "So, let me show you what I mean. Before I get to that, deep learning has taken a lot of credit recently for a lot of the breakthroughs in AI. Actually, our AI did not use any deep learning, no deep learning at all. But I would also argue that a big reason for why all these AIs are superhuman in various games like chess, Go, backgammon even, is because they use real-time planning. The planning component is huge. AlphaGo, for example, used Monte-Carlo Tree Search; Deep Blue used alpha-beta pruning. So, in fact, if you look at", "start_timestamp": "00:05:54", "end_timestamp": "00:06:27", "start_second": 354, "end_second": 387, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=354s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "AlphaZero without real-time planning, I guess this is washed out, but it ends up being right around there without Monte-Carlo Tree Search at test time. Top human performance is right around here. So, in fact, without Monte-Carlo Tree Search, AlphaZero is not superhuman. The tree search gets you about a 2,000 Elo addition. So, real-time planning is really important, not just in Go, but also in poker, it turns out. 
The key breakthrough that actually allowed us to beat top humans was figuring out how to do real-time planning.", "start_timestamp": "00:06:27", "end_timestamp": "00:07:00", "start_second": 387, "end_second": 420, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=387s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "But it turns out that in poker, real-time planning ends up being way harder. So in perfect-information games, you take some action, your opponent takes some action, and you find yourself in a particular subgame. Now, you can forget about everything that came before, all the other situations you did not encounter. The only thing that matters is the situation that you're in, and the situations that can be reached from this point on. So in perfect-information games, for example, if I were to show you this chess board,", "start_timestamp": "00:07:00", "end_timestamp": "00:07:32", "start_second": 420, "end_second": 452, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=420s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "you don't have to know how we ended up in this situation, you don't have to know about the Sicilian Defense or the Queen's Gambit. You can just look at this board, and if you're white, you can say, ''Okay, well, if I do a search, I can see that if I move my white queen there, then it's checkmate, and the game is over. So, I should just do that.'' You don't have to know anything about the strategy of chess. 
But in imperfect-information games, if you take some action, and your opponent takes some action, and you find yourself in a particular subgame,", "start_timestamp": "00:07:32", "end_timestamp": "00:07:55", "start_second": 452, "end_second": 475, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=452s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "now some other subgame that you are not in, and in fact might not even be able to reach from this point on, can affect what the optimal strategy is for the subgame that you are in. This is counter-intuitive, but I'm going to give you a concrete example in a little bit that illustrates this. Now, before I get to that, I want to talk a little bit about what our goal is in these games. Our goal is to find a Nash equilibrium, which in two-player zero-sum games is the same thing as a min-max equilibrium. I won't get too technical", "start_timestamp": "00:07:55", "end_timestamp": "00:08:24", "start_second": 475, "end_second": 504, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=475s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "about the definition, but basically, in a two-player zero-sum game, if you're playing the Nash equilibrium, you are guaranteed to not lose in expectation. Now, it's not always easy to find a Nash equilibrium, but it's always guaranteed to exist in a finite two-player zero-sum game. 
So, for example, in rock, paper, scissors, the Nash equilibrium is to mix randomly between rock, paper, and scissors with equal probability, because if you do that, then no matter what your opponent does, you will not lose in expectation.", "start_timestamp": "00:08:24", "end_timestamp": "00:08:47", "start_second": 504, "end_second": 527, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=504s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "Now, in rock, paper, scissors, that also means you're not going to win in expectation, but in a complicated game like poker, where there's a lot of sub-optimal actions that aren't actually played in the Nash equilibrium, it's likely that your opponent will make mistakes and you will end up in practice winning as well. Yes. >> How important is it that the game is heads-up? If I compare this to, say, a game with about seven players?", "start_timestamp": "00:08:47", "end_timestamp": "00:09:15", "start_second": 527, "end_second": 555, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=527s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": ">> That is a great question. So, I'll get to this; let's talk about it now. In poker, it doesn't really matter. So, in poker, if you were to use these same techniques for six-player poker, you would almost certainly win. That said, in general, poker is a special game. I don't know if you play poker, but there are two special things about poker. One is, it's really hard to collaborate with other players. 
So, you can't say, \"Hey, let's team up against this other person at the table.\" In fact, if you try to do that,", "start_timestamp": "00:09:15", "end_timestamp": "00:09:42", "start_second": 555, "end_second": 582, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=555s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "that'll be against the rules of poker. The other thing that's unique about poker is that people fold in the game. So, even if you have six players at the start of the game, it very quickly comes down to two players because people fold. So, you can use these techniques that are only guaranteed for two-player zero-sum games and it will just work in six-player poker. But a big challenge is extending these techniques to other games that do allow for collaboration. There, we don't really have a good approach for those games yet.", "start_timestamp": "00:09:42", "end_timestamp": "00:10:11", "start_second": 582, "end_second": 611, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=582s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "So, for now, I'm just going to assume that we're working in the two-player zero-sum setting, and it does extend in some cases to other situations as well. So, our goal is to find an approximate Nash equilibrium. We're going to measure performance in terms of exploitability. You can think of it as distance from a Nash equilibrium: how well we would do against a worst-case adversary relative to if we had played a Nash equilibrium instead. So, how exploitable are we? 
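The two ideas just introduced, the uniform rock-paper-scissors equilibrium and exploitability as a best-response value, can be checked with a small computation. This is an illustrative sketch, not code from the talk:

```python
# Exploitability of a rock-paper-scissors strategy: how much a
# best-responding adversary wins against it. For this symmetric
# zero-sum game the game value is 0, so any positive best-response
# value is exploitability.

# PAYOFF[i][j]: row player's payoff when row plays i and column plays j,
# with 0 = rock, 1 = paper, 2 = scissors.
PAYOFF = [
    [0, -1, 1],   # rock  vs rock / paper / scissors
    [1, 0, -1],   # paper
    [-1, 1, 0],   # scissors
]

def exploitability(strategy):
    """Best-response value an adversary achieves against `strategy`."""
    # The adversary's payoff is the negative of the row player's payoff
    # in a zero-sum game; take the best pure response.
    response_values = [
        sum(strategy[i] * -PAYOFF[i][j] for i in range(3))
        for j in range(3)
    ]
    return max(response_values)

uniform = [1/3, 1/3, 1/3]
rock_heavy = [0.5, 0.25, 0.25]

print(exploitability(uniform))     # 0.0: the uniform mix cannot be beaten
print(exploitability(rock_heavy))  # 0.25: always playing paper exploits it
```

Against the uniform mix, every pure response earns zero in expectation, which is exactly the "you will not lose in expectation" guarantee above; any deviation, like the rock-heavy mix, hands a best responder a positive edge.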
I would argue that exploitability is actually extremely important and has", "start_timestamp": "00:10:11", "end_timestamp": "00:10:45", "start_second": 611, "end_second": 645, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=611s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "been overlooked in the AI community as a whole. I think two recent man-machine matches actually really highlight this. One is the OpenAI one-versus-one Dota 2 matches that you might have heard about, and the other is Fan Hui versus AlphaGo. In the OpenAI matches, they made this AI that was able to beat top humans in one-versus-one Dota 2 over three games. But after they won against the top human, they actually opened it up to the public and they invited random mediocre players to play against it to see if they could find any weaknesses.", "start_timestamp": "00:10:45", "end_timestamp": "00:11:16", "start_second": 645, "end_second": 676, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=645s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "In fact, pretty quickly, within a few thousand games, weak players were able to find certain tricks where they could basically fool the AI; they figured out how to exploit it and beat it. Also, in Fan Hui versus AlphaGo: they famously beat Fan Hui 5-0, but then after they published the Nature paper, they invited him to play several more matches against it to see if he could find any weaknesses in the AI. 
In fact, he was able to find weaknesses where he could consistently beat the AI, and they had to patch these weaknesses.", "start_timestamp": "00:11:16", "end_timestamp": "00:11:47", "start_second": 676, "end_second": 707, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=676s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "So, I think what this really demonstrates is that it's not enough to beat top humans in three or five or even 10 games. You really have to be able to consistently beat top humans, especially if you want to deploy an AI into the real world. If you're Microsoft and you're trying to deploy products with real users, and there's millions or billions of them, if there's a weakness, they're going to find it. But with Libratus, we played the top humans not just in three or five hands of poker; we played them in 120,000 hands of", "start_timestamp": "00:11:47", "end_timestamp": "00:12:15", "start_second": 707, "end_second": 735, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=707s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "poker over the course of 20 days. That whole time, all four players were working as a team to try to exploit the AI in any way they could find. In fact, I had lunch with one of the players just a couple months ago. He said that the thing they found most shocking about the competition is that, at the end of each day, we gave them a log of all the hands that were played and we told them what the bot had on each hand that was played. 
This is big because in poker, a big part of the game is actually keeping your strategy hidden.", "start_timestamp": "00:12:15", "end_timestamp": "00:12:46", "start_second": 735, "end_second": 766, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=735s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "If you fold, your opponent doesn't see what cards you have. In fact, even if you don't fold but you lose the hand, you still don't have to show what your cards are. So, you only see your opponent's hand about 20 or 25 percent of the time. Poker players will sometimes even call just to see what their opponent had. But here, we were just giving them that information. We were telling them what the bot had on every single hand that it played. So, they didn't have to worry about that part at all, and they found it absolutely", "start_timestamp": "00:12:46", "end_timestamp": "00:13:14", "start_second": 766, "end_second": 794, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=766s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "amazing that they could not figure out how to exploit the AI, even though we were showing them the hands that the bot was playing every single time and the bot's strategy wasn't really changing that much between days. All right, so I think exploitability is extremely important. I think it has been overlooked by the AI community, and it is something that the imperfect-information game-solving community has focused on throughout its existence. 
All right, so now, I want to get to the example of why imperfect-information games are hard.", "start_timestamp": "00:13:14", "end_timestamp": "00:13:41", "start_second": 794, "end_second": 821, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=794s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "I'm going to talk about a simple example that I call Coin Toss. It starts with a coin flip: a coin is flipped and lands heads or tails with 50-50 probability. Player one is going to observe the outcome of the coin toss. Player two is not. So, after this coin lands, Player one has a choice. They can either sell the coin or they can choose play. We'll say, if they choose sell, this leads to some separate subgame, the details of which are not important. The only thing that's important is the expected value.", "start_timestamp": "00:13:41", "end_timestamp": "00:14:09", "start_second": 821, "end_second": 849, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=821s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "So, we'll say, if the coin landed heads, then the coin was lucky and they can sell it for 50 cents. On the other hand, if the coin landed tails, we'll say it's unlucky and Player one loses 50 cents by selling it. Alternatively, they could choose play, and if they choose play, then it leads to Player two, and Player two has to guess how the coin landed without having observed how it actually landed. 
So, if they guess correctly, that is, Player two guesses heads and the coin actually landed heads,", "start_timestamp": "00:14:09", "end_timestamp": "00:14:37", "start_second": 849, "end_second": 877, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=849s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "then Player one is going to lose one dollar and Player two is going to gain one dollar. Here, the payoffs are shown for Player one, because this is a two-player zero-sum game. So, Player two just receives the opposite payoff. Now, on the other hand, if Player two guesses incorrectly, that is, they guess tails and the coin actually landed heads, then Player one gains one dollar and Player two loses one dollar. You can see there's a dotted line between Player two's two nodes; this signifies that Player two is in what's called an information set.", "start_timestamp": "00:14:37", "end_timestamp": "00:15:04", "start_second": 877, "end_second": 904, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=877s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "This means that Player two, because they did not observe how the coin landed, does not know which of those two states they are actually in. So, now imagine that you are Player two in this game. You've just observed Player one choose the play action, and so you know that you are in this imperfect-information subgame. So, what should you do? Should you guess heads or should you guess tails? One option is to just always guess heads. 
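Player two's candidate strategies can be evaluated numerically. A minimal sketch using the payoffs described above (sell pays Player one 50 cents on heads and minus 50 cents on tails; a correct guess by Player two costs Player one a dollar, an incorrect one gains a dollar):

```python
# Player one's expected value in the coin-toss game as a function of
# Player two's strategy (p = probability Player two guesses heads).
# Player one sees the coin, so they best-respond separately in each state.

SELL_HEADS, SELL_TAILS = 0.5, -0.5  # Player one's sell payoffs per state

def p1_value(p_guess_heads):
    p = p_guess_heads
    # If the coin is heads: play pays Player one -1 when Player two
    # guesses heads, +1 otherwise (and symmetrically for tails).
    play_if_heads = p * -1 + (1 - p) * 1    # = 1 - 2p
    play_if_tails = p * 1 + (1 - p) * -1    # = 2p - 1
    # Player one takes the better of sell and play in each state,
    # and the two states occur with 50-50 probability.
    return (0.5 * max(SELL_HEADS, play_if_heads)
            + 0.5 * max(SELL_TAILS, play_if_tails))

print(p1_value(1.0))    # always guess heads -> Player one averages 0.75
print(p1_value(0.0))    # always guess tails -> Player one averages 0.25
print(p1_value(0.25))   # guess heads 25% of the time -> Player one gets 0.0
```

Always guessing heads or always guessing tails lets Player one profit by choosing sell or play state by state, while the 25/75 mix holds Player one to zero no matter what they do.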
But if you do that, that's", "start_timestamp": "00:15:04", "end_timestamp": "00:15:34", "start_second": 904, "end_second": 934, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=904s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "obviously a really bad strategy, because now Player one can just sell the coin when it lands heads and get 50 cents, and choose play when the coin lands tails and gain a dollar. So, on average they're getting 75 cents. On the other hand, you could always guess tails, but that's also a really bad idea, because now Player one can choose play when the coin lands heads and gain a dollar, and choose sell when the coin lands tails and lose 50 cents, which is better than losing a dollar. So, on average, they're still getting", "start_timestamp": "00:15:34", "end_timestamp": "00:16:03", "start_second": 934, "end_second": 963, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=934s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "25 cents in this game. So, it turns out that the optimal strategy is to mix: guess heads with 25 percent probability and tails with 75 percent probability. If you do that, then no matter what Player one does, the best they can do is just break even and get on average zero dollars in this game. So, this is the Nash equilibrium strategy for Player two in this game, at least for this subgame. But now, let's say we change the game a little bit. Let's say we change the payoff for the sell action. 
So, now, an expectation Player one", "start_timestamp": "00:16:03", "end_timestamp": "00:16:37", "start_second": 963, "end_second": 997, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=963s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "loses 0.50 cents for choosing sell when the coin lands heads, and gains 0.50 cents for choosing sell when the coin lands tails. Well, it's pretty easy to see that as Player two, your strategy in this subgame should now change as well. Now, you should be guessing heads with 75 percent probability and tails with 25 percent probability. But you can see what's happened here is that, by changing the expected value of the sell action, we have affected what the optimal strategy is in the play subgame. Even though the sell action is not", "start_timestamp": "00:16:37", "end_timestamp": "00:17:06", "start_second": 997, "end_second": 1026, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=997s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "part of the play subgame and in fact, it's not even on the path leading to the play subgame. So, this is something that happens in imperfect information games. It does not happen in perfect information games. In perfect information games, if you wanted to determine the optimal strategy in subgame, you only need to look at that subgame by itself. But in imperfect information games, you have to look at the game as a whole. 
So, you can think of perfect-information games as a special case where you don't have to worry about all this stuff.", "start_timestamp": "00:17:06", "end_timestamp": "00:17:32", "start_second": 1026, "end_second": 1052, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=1026s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "Imperfect-information games are the more general case where this is a problem. So, what do we do? Well, it turns out that we don't actually have to know the strategy for the entire game as a whole. I mentioned that this sell action leads to a subgame, where both players might take actions. But you don't have to worry about that; the only thing that really matters for determining the optimal strategy in this play subgame is the expected value of Player one choosing sell. So, what we can do is try to estimate what that value is to Player one,", "start_timestamp": "00:17:32", "end_timestamp": "00:18:03", "start_second": 1052, "end_second": 1083, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=1052s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "and if we have that, then we can determine the optimal strategy in the play subgame. So, that's what we actually did in Libratus. We also have a theorem that says, \"If this estimate is within delta of the true Nash equilibrium value, then we can solve for the play subgame and get within delta of the Nash equilibrium.\" So, in Libratus, we actually do this. We have this massive game which is simply way too large to solve upfront. 
So, we come up with a really good strategy just for the early part of the game,", "start_timestamp": "00:18:03", "end_timestamp": "00:18:35", "start_second": 1083, "end_second": 1115, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=1083s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "and we estimate what the optimal strategy is and what the expected values are in the later parts of the game. Now, when we're actually playing and we find ourselves in a particular subgame, we come up with a much better strategy for that particular subgame using information about the expected values from the other subgames. Then, we repeat this process: we come up with a really good strategy for the early parts that are coming up and just estimate how to play in the later parts. When we find ourselves in a new subgame,", "start_timestamp": "00:18:35", "end_timestamp": "00:19:00", "start_second": 1115, "end_second": 1140, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=1115s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "we again compute a much better strategy in that particular subgame using information about the expected values of the other subgames. That's called nested subgame solving. This was the key breakthrough that allowed us to beat top humans. So, when I... yes? >> Just [inaudible]. >> Yes, that's a great question. So, actually, when we do this, this is sort of how we would do it in general. But in poker, there's four betting rounds. 
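The coin-toss example makes this idea concrete: the play subgame can be solved given only the estimated value of the sell action in each state, which is the essence of the safe approach described here. A toy sketch (the grid search and function name are illustrative, not the Libratus implementation):

```python
# Safe subgame solving in miniature: the optimal mix in the play
# subgame is computed *given* the estimated values of the sell action
# in each state, rather than from the play subgame alone.

def solve_play_subgame(sell_ev_heads, sell_ev_tails, steps=10_000):
    """Grid-search Player two's guess-heads probability that minimizes
    Player one's best-response value in the coin-toss game."""
    best_p, best_value = None, float("inf")
    for i in range(steps + 1):
        p = i / steps
        # Player one best-responds per state: play pays 1 - 2p on heads
        # and 2p - 1 on tails; sell pays the estimated boundary value.
        value = (0.5 * max(sell_ev_heads, 1 - 2 * p)
                 + 0.5 * max(sell_ev_tails, 2 * p - 1))
        if value < best_value:
            best_p, best_value = p, value
    return best_p

print(solve_play_subgame(0.5, -0.5))   # 0.25: original sell payoffs
print(solve_play_subgame(-0.5, 0.5))   # 0.75: flipped sell payoffs
```

Changing only the sell estimates flips the equilibrium mix from 25/75 to 75/25, exactly the dependence on outside values that the example illustrated, and within-delta estimates yield a within-delta solution, as the theorem mentioned above states.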
So, we solved the first two with a pre-computed strategy.", "start_timestamp": "00:19:00", "end_timestamp": "00:19:34", "start_second": 1140, "end_second": 1174, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=1140s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "Because each round grows exponentially in size, the first two rounds are actually pretty small. When we got to the end of the second betting round, that's when we applied subgame solving. So, we came up with a much better strategy for the remainder of the game. We abstracted the bets: instead of considering all 20,000 different bet sizes, we would just consider a small fraction of them. Then each time the opponent acted, each time they made a bet, we would solve a new subgame for that bet size.", "start_timestamp": "00:19:34", "end_timestamp": "00:20:01", "start_second": 1174, "end_second": 1201, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=1174s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "So, we would apply this recursive solving every time the opponent made an action beyond the second betting round. So, when I mention this idea of what's called Safe Subgame Solving, where we use the expected values from the other subgames, people always ask about this thing called Unsafe Subgame Solving, which is the more intuitive approach to doing this. The idea here is: well, why don't we just estimate what the opponent's strategy is? 
Let's say we've played a bunch of hands against them, or we can estimate what", "start_timestamp": "00:20:01", "end_timestamp": "00:20:29", "start_second": 1201, "end_second": 1229, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=1201s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "the Nash equilibrium is for them, and we figured out, well, they should be choosing play 80 percent of the time when the coin lands heads and 30 percent of the time when the coin lands tails, just for example. Now, if we assume that the opponent is playing this strategy, can we then reason about the distribution of states that we might be in, and then solve optimally using that distribution? It turns out that doesn't work. So, let me give you an example of what this would look like. When the coin lands either heads or tails,", "start_timestamp": "00:20:29", "end_timestamp": "00:20:58", "start_second": 1229, "end_second": 1258, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=1229s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "we reason that we're in one of these states with 50-50 probability. Now, if we observe Player one choose play, we would say, okay, well, in a Nash equilibrium, we would expect Player one to choose play 80 percent of the time if we were in the left state, and 30 percent of the time if we were in the tails state. So, we update our belief about what state we're in using Bayes' rule, and now we can reason that we're in that left state with 73 percent probability, and in that right state with 27 percent probability. 
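The 73/27 belief comes straight from Bayes' rule; a quick check, using the assumed 80 and 30 percent play probabilities from the example:

```python
# Bayes-rule belief update behind unsafe subgame solving: given an
# assumed strategy for Player one, infer the distribution over states
# after observing the play action.

def posterior_heads(prior_heads=0.5, play_given_heads=0.8, play_given_tails=0.3):
    """P(heads | Player one chose play), by Bayes' rule."""
    joint_heads = prior_heads * play_given_heads
    joint_tails = (1 - prior_heads) * play_given_tails
    return joint_heads / (joint_heads + joint_tails)

belief = posterior_heads()
print(round(belief, 2))  # 0.73: the 73/27 split from the example
```

The flaw is not in the arithmetic but in the assumption: the posterior is only as good as the assumed play probabilities, which the opponent is free to change adversarially.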
Now, we would just say, well, if", "start_timestamp": "00:20:58", "end_timestamp": "00:21:22", "start_second": 1258, "end_second": 1282, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=1258s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "we assume this distribution is correct, then the optimal strategy is to always guess heads. But we've already established that's a really bad idea, because now the opponent can simply shift to selling the coin when it lands heads and choosing play when the coin lands tails. So, the problem with this approach is that we're making an assumption about how the opponent is playing. If this distribution were true, that they were choosing play with 80 percent probability on heads and 30 percent probability on tails,", "start_timestamp": "00:21:22", "end_timestamp": "00:21:50", "start_second": 1282, "end_second": 1310, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=1282s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "then yes, we can apply this reasoning. But the opponent's strategy can always change. It can always change adversarially to us. Yes? >> There's one thing that I've always been interested in: when you play the Nash versus when you play against the opponent. It seems like they're not going to shift right away. Even if you're playing the wrong strategy, they wouldn't exploit it immediately. They have to learn to exploit it. 
I guess it's definitely safe to model the Nash, but I am curious about this intermediate space where", "start_timestamp": "00:21:50", "end_timestamp": "00:22:14", "start_second": 1310, "end_second": 1334, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=1310s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "you'd play against how they've been playing in the past, and recognize that you need to shift in some way because they may shift as well. >> So, yeah, that's a great question. We actually did not do that. One of the interesting things about humans playing poker is that they're actually really good at exploiting. They are phenomenal at it, way better than computers are currently. So, we actually did a competition against them in 2015, where we lost, and we would sometimes change the bot that they were playing against between days.", "start_timestamp": "00:22:14", "end_timestamp": "00:22:41", "start_second": 1334, "end_second": 1361, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=1334s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "Within 50 hands of playing, they could figure out how the bot had changed. So yes, if you can make an AI that could figure out how to do this better than humans, then that might be valuable. But we were playing against really talented humans, and we didn't think that we could beat them at that game. But then also, why bother playing that game? Why bother trying to play that mind game if we can just approximate a Nash equilibrium and guarantee that we're going to win? 
So, I would argue that in the two-player zero-sum game,", "start_timestamp": "00:22:41", "end_timestamp": "00:23:09", "start_second": 1361, "end_second": 1389, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=1361s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "if you want to beat top humans in a two-player zero-sum game, the best approach is to just approximate the Nash equilibrium because now, no matter what they're going to do, you're going to beat them. Now, I would argue that if your objectives are different, so for example, if you really want to beat up on a weak player and exploit them, then yeah, you don't necessarily want to play Nash equilibrium. You want to adapt to their weaknesses. This is challenging to do correctly, because if you try to adapt to a weak player's weaknesses,", "start_timestamp": "00:23:09", "end_timestamp": "00:23:35", "start_second": 1389, "end_second": 1415, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=1389s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "you never know if they're just fooling you. Like if you're playing rock, paper, scissors against somebody and they throw rock three times in a row, and you say, well, he's clearly an idiot who's throwing rock every single time, I'm going to throw paper next time, they could just throw scissors. So, except in special cases, there's no safe way to do that kind of opponent exploitation and still guarantee that you're going to beat top humans in expectation. 
So, I think that is an excellent avenue for future research,", "start_timestamp": "00:23:35", "end_timestamp": "00:24:01", "start_second": 1415, "end_second": 1441, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=1415s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "but I think that in the two-player zero-sum setting, where we're just trying to beat top humans, I think this is the better way to go about it. So, Unsafe Subgame Solving is very risky for this reason: if you make an assumption about how the opponent is playing, they can always shift to a different strategy and take advantage of that. So, yes, we must account for the opponent's ability to adapt. Now, that said, in practice, Unsafe Subgame Solving works unusually well in poker.", "start_timestamp": "00:24:01", "end_timestamp": "00:24:34", "start_second": 1441, "end_second": 1474, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=1441s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "It turns out that if you just approximate what the Nash Equilibrium strategy is and then assume that the opponent is playing that, and apply Subgame solving in this way, that actually works really well in this domain. But we have found situations where this does not work well, and I think in more general settings, it would not do well. So, we actually used this in a few situations in Libratus. But in general, I would not recommend doing this, unless the domain is specially structured so that it would work. 
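The unsafe approach described above can be sketched in a few lines: fix an assumed distribution over the opponent's hidden state, best-respond to it, and then notice how an adversarial shift by the opponent invalidates that response. This is a minimal sketch, not the talk's code; the ±1 guessing payoffs and the function names are my assumptions, while the 80/30 play frequencies come from the example.

```python
# Unsafe subgame solving in the coin-guessing example: a minimal sketch.
# Assumed payoffs (illustrative): if player 2 guesses the coin correctly,
# player 2 gets +1, otherwise -1.

def best_response_guess(p_heads):
    """Player 2's best response to an ASSUMED belief about the coin."""
    ev_heads = p_heads * 1 + (1 - p_heads) * -1   # EV of guessing heads
    ev_tails = p_heads * -1 + (1 - p_heads) * 1   # EV of guessing tails
    return ("heads", ev_heads) if ev_heads >= ev_tails else ("tails", ev_tails)

# Assume the opponent chooses Play with 80% probability on heads, 30% on tails.
p_heads_given_play = (0.8 * 0.5) / (0.8 * 0.5 + 0.3 * 0.5)   # Bayes: about 0.727
guess, ev = best_response_guess(p_heads_given_play)
print(guess)   # under the assumption, the best response is to always guess heads

# The danger: if the opponent shifts to only playing when the coin lands tails,
# P(heads | play) drops to 0, and our frozen "always heads" rule loses every time.
ev_if_frozen = 0.0 * 1 + 1.0 * -1
print(ev_if_frozen)   # -1.0
```

The frozen best response is only as good as the assumed distribution, which is exactly the speaker's point about adversarial shifts.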
So, Safe Subgame Solving,", "start_timestamp": "00:24:34", "end_timestamp": "00:25:07", "start_second": 1474, "end_second": 1507, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=1474s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "the idea is instead that we're just going to estimate what the expected value is for the opponent's actions in different Subgames, and use that information to determine the optimal strategy for the Subgame that we're in. Now, this works if your expected values are perfect, but if they're not perfect, you're obviously not going to compute an exact Nash equilibrium. So, it turns out that there's room for improvement here. By the way, this idea has been around for a while. It was first introduced in 2014.", "start_timestamp": "00:25:07", "end_timestamp": "00:25:30", "start_second": 1507, "end_second": 1530, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=1507s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "It was never really used in practice because it didn't actually give you good results in practice, because you don't have perfect estimates. But what we came up with is a way to dramatically improve the performance without giving up any theoretical guarantees, with this thing called Reach Subgame Solving. So, here's an example of how this works. This is going to get a little tricky, so if you have any questions in the next few slides, please let me know. 
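Safe subgame solving, as described here, can be caricatured as a tiny minimax: instead of trusting a belief about the opponent, pick the strategy that minimizes the best value the opponent could get from entering the subgame in any possible state. A sketch under the same assumed ±1 guessing payoffs as before (names and the grid search are mine, not the talk's method):

```python
# Safe subgame solving as a minimax over the opponent's possible entry states.
# q is the probability we guess heads; player 1's payoffs are assumed +/-1.

def play_ev(q, coin):
    """Player 1's expected value for choosing Play, given our guess frequency q."""
    return 1 - 2 * q if coin == "heads" else 2 * q - 1

# Choose q to minimize the MAXIMUM value player 1 can get in either state.
grid = [i / 100 for i in range(101)]
safe_q = min(grid, key=lambda q: max(play_ev(q, "heads"), play_ev(q, "tails")))
print(safe_q)   # 0.5: with no extra information, 50-50 guessing is the safe choice
```

Reach subgame solving improves on this by also using the expected values of the actions the opponent declined earlier in the game.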
So, let's say that we have this slightly larger game now.", "start_timestamp": "00:25:30", "end_timestamp": "00:25:58", "start_second": 1530, "end_second": 1558, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=1530s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "It's still basically the same structure: there's a coin flip that only player one observes. Player one takes a sequence of actions, and they eventually end up in this choice between selling the coin or playing, choosing play. Now, if they choose play, player two has to guess how the coin landed. Well, let's say your estimates are off in this game. Let's say we estimate that for choosing Sell, they will get an expected value of minus one regardless of which state they're in. Well, the best that we can do is just guess", "start_timestamp": "00:25:58", "end_timestamp": "00:26:24", "start_second": 1558, "end_second": 1584, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=1558s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "50-50 between heads and tails and guarantee that they get an expected value of zero for choosing play. But maybe we can use information about the earlier actions to improve upon this. So, maybe there is this earlier action that player one could have chosen if the coin landed heads, where they could have gotten an expected value of 0.5. 
Well, that means that in the Nash equilibrium, they would choose that action and get an expected value of 0.5, and in the other case, they would come down here and choose play and get an expected value of zero.", "start_timestamp": "00:26:24", "end_timestamp": "00:26:52", "start_second": 1584, "end_second": 1612, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=1584s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "So, they're getting an average of 0.25 in this game. But we can now shift our strategy as player two to guess tails more often, which guarantees that player one now gets negative 0.5 in this case. In the heads case, that means they will get 0.5 for choosing play, but that doesn't really matter because they're already getting 0.5 for this earlier deviate action. So, we're not really giving up anything in this situation, we're just making ourselves better off. Because they would never get to the situation", "start_timestamp": "00:26:52", "end_timestamp": "00:27:22", "start_second": 1612, "end_second": 1642, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=1612s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "where they would choose the 0.5 anyway. This seems really intuitive, but there's a problem with this, which is really subtle. I'm going to have to go to a bigger game, which is going to get even more complicated, to really illustrate it. So, here is this bigger game. It's still pretty similar: there's a coin that lands heads or tails with 50-50 probability. Player one in both cases now, let's say, has this deviate action. In the heads case, they can get 0.5, and in the tails case, they get minus 0.5. 
Or they can choose to continue,", "start_timestamp": "00:27:22", "end_timestamp": "00:27:49", "start_second": 1642, "end_second": 1669, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=1642s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "in which case we run into this chance node. This chance node is public. It just leads to two different Subgames, so both players observe the outcome of this chance node. It just leads to different situations that are strategically identical. It's an irrelevant chance node, but it is a chance node. Then after this chance node, player one, let's say, chooses play, and we estimate the expected value of them choosing play is now zero. So, let's say we were player two in this game, and we observe player one choose play.", "start_timestamp": "00:27:49", "end_timestamp": "00:28:22", "start_second": 1669, "end_second": 1702, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=1669s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "Which means that we are in one of these two different situations. Either the coin landed heads, they chose to continue, and let's say we observed that the chance node ended up going left, and then they chose play. Or the coin landed tails, player one chose to continue. We observe the chance node going left and they choose play. So, we're in either this situation or this situation. Well, we observed that they have this deviate action where they could've gotten an expected value of 0.5, if the coin landed heads. 
So, maybe we would say,", "start_timestamp": "00:28:22", "end_timestamp": "00:28:54", "start_second": 1702, "end_second": 1734, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=1702s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "we say, okay, well, we can increase the expected value for this action to one and lower it to minus one in this case, for example, by always guessing tails. That is okay because, since this situation is only encountered 50 percent of the time, the expected value for this action is now just 0.5, and so that matches the deviate action's value, so we're not giving up anything. Does anybody see the problem with this? All right. The problem is, if that chance node had gone the other way, if it had gone right, we would apply this same exact reasoning.", "start_timestamp": "00:28:54", "end_timestamp": "00:29:29", "start_second": 1734, "end_second": 1769, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=1734s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "We would say, okay, well, we can increase this expected value to one, because we're only encountering the situation half the time, so this expected value goes up to 0.5, and now the opponent is getting expected value zero, we're not giving up anything. But if we apply this reasoning regardless of which way this chance node goes, then what that means is our strategy is to always guess tails in both situations. 
So, in reality, it means that the expected value in this case is one and in this case is one, which means that the expected value is", "start_timestamp": "00:29:29", "end_timestamp": "00:29:57", "start_second": 1769, "end_second": 1797, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=1769s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "actually one, it's not 0.5. So, now player one could be better off by choosing to continue instead of choosing this deviate action. So, what this illustrates is that when you are doing this Subgame solving, this real-time reasoning, you can't just look at the expected values of what we call the Blueprint Strategy, the pre-computed strategy. You have to think about what the expected values would have been if we had entered that Subgame and applied Subgame solving there too. So, that makes things way more complicated.", "start_timestamp": "00:29:57", "end_timestamp": "00:30:31", "start_second": 1797, "end_second": 1831, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=1797s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "But fortunately, Reach subgame solving handles this. By the way, two prior papers had actually discussed this idea of, okay, we're encountering this situation, let's just increase the expected value here, because they could have gotten a higher expected value earlier on, and they missed this problem that you have to consider all the subgames that the opponent could end up in. 
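The double-counting just described is worth spelling out as arithmetic. Treating the deviate action's 0.5 as spendable in each public-chance branch independently looks safe branch-by-branch, but summed over both branches it hands player one a full point. This is a sketch of the example's numbers only, not of the actual algorithm:

```python
# Naive "reach" reasoning, applied independently in each public-chance branch.
deviate_ev = 0.5       # player 1's EV for the earlier deviate action (heads)
p_branch = 0.5         # each public chance outcome occurs half the time
raised_play_ev = 1.0   # what each branch raises the Play EV to, on the logic
                       # "this branch only happens half the time"

# Within one branch, the reasoning looks safe:
claimed_total = p_branch * raised_play_ev
assert claimed_total == deviate_ev   # 0.5 == 0.5, so "we gave up nothing"

# But the same reasoning fires in BOTH branches, so player 1 actually gets:
actual_total = p_branch * raised_play_ev + p_branch * raised_play_ev
print(actual_total)   # 1.0, so continuing now beats the 0.5 deviate action
```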
So, two prior papers were published about this and they both got it wrong, and our NIPS 2017 paper recognized this problem and actually came up with a fix that", "start_timestamp": "00:30:31", "end_timestamp": "00:31:00", "start_second": 1831, "end_second": 1860, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=1831s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "allows you to do this Reach subgame solving, while still guaranteeing that your exploitability is not going to go up. The basic idea is to just only increase the expected value for both of these situations by 0.5. The actual details get a little bit more complicated, but they aren't too important for this talk. But the idea is you just increase the expected values by less, depending on how many subgames they could end up in. You have a question? >> Well, I was just wondering, I know this is a really simple thing, whether it's just wrong versus just weird if you only increase the values for the left of the public chance node. 
If you were actually only increasing the expected value to one in this situation, and keeping the expected value at zero", "start_timestamp": "00:31:30", "end_timestamp": "00:31:59", "start_second": 1890, "end_second": 1919, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=1890s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "in this situation, that's totally fine. >> Okay. >> But you have to think about what would have happened. Imagine this from player one's perspective. If we would increase the expected value to one in this situation because this chance node went left, and we would have increased this expected value to one if this chance node had gone right, then what player one is thinking, if they're in this situation, is that, if I choose this action, regardless of which way this chance node goes, I'm getting an expected value of one. 
We have a solution for it, which is basically to split what we call slack among the different subgames.", "start_timestamp": "00:32:27", "end_timestamp": "00:32:59", "start_second": 1947, "end_second": 1979, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=1947s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "We have a theorem that says, \"Reach subgame solving will not increase the exploitability of a safe subgame solving technique.\" If there are these earlier actions where the opponent could have chosen a higher expected value, then it will actually improve performance relative to traditional subgame solving. In terms of actual, yes. >> [inaudible]. >> Yeah. >> Then you'll have to know the expected values for the subgames that are on the path from there. >> That's correct. So, you look at all the paths, yeah, yes.", "start_timestamp": "00:32:59", "end_timestamp": "00:33:29", "start_second": 1979, "end_second": 2009, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=1979s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "You look at all the situations where the opponent could have gone off path. >> Yeah. >> You have to know the expected values for those. Yeah. >> I understand that these are values you put in. So, is it correct when you're learning? So, it seems that you're allowed to do the updating of some of these expected values, but then you are making this per all [inaudible]. >> Yes, so maybe I should have clarified this earlier. 
So, I'm assuming that we've basically run an algorithm that approximates a Nash equilibrium for the entire game,", "start_timestamp": "00:33:29", "end_timestamp": "00:33:56", "start_second": 2009, "end_second": 2036, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=2009s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "and that's where these expected values are coming from. So, we have this rough approximation of what the Nash equilibrium is for the entire game, and that's giving us expected values for all these different actions, but they're not perfect. It's like with AlphaZero, for example. AlphaZero gives you policy and values for all the different states that you might be in, but that's obviously not perfect and you can improve upon it by doing Monte-Carlo tree search in real-time. >> What do you mean formally by the safe technique's exploitability?", "start_timestamp": "00:33:56", "end_timestamp": "00:34:21", "start_second": 2036, "end_second": 2061, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=2036s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "So, you assume that [inaudible] is off by a certain fixed delta? >> That's a great question. So, by safe subgame solving, I mean that there is some exploitability, our strategy, this pre-computed strategy that we have is exploitable by some amount. >> Assuming that all of these are correct or they have-? >> Well, I'm just saying that we've run this. 
Let's just say we've run a traditional reinforcement learning algorithm on the entire game, no real-time playing, just a pre-computed strategy; that strategy that we have now", "start_timestamp": "00:34:21", "end_timestamp": "00:34:51", "start_second": 2061, "end_second": 2091, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=2061s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "is exploitable by some amount. I am saying that we would like to improve upon this in practice by doing real-time planning. But we want to guarantee that by doing real-time planning, we're at least not going to increase the exploitability of our strategy relative to what we had before. Now, in practice, it ends up being way lower exploitability, but we want to guarantee. We can't really guarantee that it's going to decrease in most situations, but we want to at least guarantee that it's not going to increase. So, that is what I mean by safe subgame solving.", "start_timestamp": "00:34:51", "end_timestamp": "00:35:22", "start_second": 2091, "end_second": 2122, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=2091s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": ">> The purple values are like estimates from your pre-computed suboptimal value function? >> Yeah, so those are the values that we've estimated based on our pre-computed strategy for the game. >> We can use those to compute the red, basically? >> Yeah. >> Okay. >> So, the red is the real-time planning expected values. >> So, what's the right procedure? >> For time reasons, I decided to not really talk about how we're actually doing all this computation of the strategy. 
We use something called counterfactual regret minimization,", "start_timestamp": "00:35:22", "end_timestamp": "00:35:58", "start_second": 2122, "end_second": 2158, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=2122s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "which converges to an approximate solution in one over square root T time. So, if you do T iterations, you will get within one over square root T of the Nash equilibrium. Okay. >> Like in terms of the number of things then? >> It is also, yes. So, it's linear in the number of information sets. >> So, like terabytes? >> Well, okay. So, with Libratus, we actually used several terabytes of data and we used millions of core hours of computation time. >> In real-time? >> In real time, no. In real time, it was lower.", "start_timestamp": "00:35:58", "end_timestamp": "00:36:36", "start_second": 2158, "end_second": 2196, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=2158s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": ">> [inaudible]. >> So, that was for the pre-computed strategy. For real time, we ended up using about 1,000 cores, so about 50 nodes, and the memory is actually really small. It was probably less than 100 megabytes. We actually figured out how to improve upon this massively, which I'll get to in a little bit. >> Is it 100 like per core, or per? >> No, it's the whole thing, 100 megabytes for the whole thing. 
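In a one-shot game, counterfactual regret minimization reduces to plain regret matching in self-play: each player accumulates how much better each action would have done than their current mix, plays in proportion to positive regret, and the average strategy approaches equilibrium at the one-over-square-root-T rate mentioned above. A self-contained sketch on the Rock-Paper-Scissors+ game discussed later in the talk; the payoff matrix layout and names are mine, not the speaker's code:

```python
# Regret matching self-play on Rock-Paper-Scissors+ (scissors games worth 2).
# Payoff matrix for the row player; the game is zero-sum.
A = [[ 0, -1,  2],   # Rock     vs R, P, S
     [ 1,  0, -2],   # Paper
     [-2,  2,  0]]   # Scissors

def regret_matching(regrets):
    """Play each action in proportion to its positive accumulated regret."""
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1/3, 1/3, 1/3]

def solve(iters):
    regrets = [[0.0] * 3, [0.0] * 3]
    strategy_sum = [0.0] * 3          # running sum of player 1's strategies
    for _ in range(iters):
        s0 = regret_matching(regrets[0])
        s1 = regret_matching(regrets[1])
        for a in range(3):
            strategy_sum[a] += s0[a]
        # expected value of each pure action against the opponent's current mix
        u0 = [sum(A[a][b] * s1[b] for b in range(3)) for a in range(3)]
        u1 = [sum(-A[a][b] * s0[a] for a in range(3)) for b in range(3)]
        ev0 = sum(s0[a] * u0[a] for a in range(3))
        ev1 = sum(s1[b] * u1[b] for b in range(3))
        for a in range(3):
            regrets[0][a] += u0[a] - ev0
            regrets[1][a] += u1[a] - ev1
    return [s / iters for s in strategy_sum]  # the AVERAGE strategy converges

avg = solve(200_000)
print([round(p, 2) for p in avg])   # near the equilibrium mix [0.4, 0.4, 0.2]
```

The current strategies oscillate; it is the average strategy whose exploitability shrinks like one over the square root of the iteration count.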
The actual game, when you're solving from the turn onwards, like the third betting round to the end of the game,", "start_timestamp": "00:36:36", "end_timestamp": "00:37:10", "start_second": 2196, "end_second": 2230, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=2196s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "the actual size of the game is pretty small, but because you have to compute an equilibrium solution for it, it takes a lot of computational power. >> So, was there a limit on how much time you have to make a decision? >> Yeah, we ended up doing it in less than 20 seconds or so. There wasn't like an official time limit, but we gave them some guidelines on how long it would take on average. We also didn't limit the humans on time. So, if they wanted to take 10 minutes for a decision, then that was fine with us.", "start_timestamp": "00:37:10", "end_timestamp": "00:37:37", "start_second": 2230, "end_second": 2257, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=2230s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "I will also say, we were thinking about the timing. So, one of the interesting challenges of poker is that you don't want to give away timing tells, right? So, if it takes you two seconds to make a decision, then the opponent might think you have a really easy decision, whereas if you take two minutes, it might be a difficult decision, and they can figure out what hand you have. 
So, if you're playing, if you look at the World Series of Poker, it gets really annoying because, at the final table,", "start_timestamp": "00:37:37", "end_timestamp": "00:38:01", "start_second": 2257, "end_second": 2281, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=2257s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "they all take the same amount of time for every single decision. So, they take like two minutes for every single decision, even if it's a really trivial one. We didn't want the humans to have to do this because it would've taken forever, would have pissed them off, and would have pissed us off, so we told them flat out that the bot is not going to look at timing tells. So, if they took two seconds to make a decision, that's totally fine, we won't change anything. But we can't make them also do that for the bot; like if the bot took", "start_timestamp": "00:38:01", "end_timestamp": "00:38:24", "start_second": 2281, "end_second": 2304, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=2281s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "two seconds versus two minutes, they would pick up on that, and they can't not pick up on that, right? So, we had to make the bot take the same amount of time for every single decision it made to prevent that. So, that's also why it ends up taking longer to do this thing. There's a lot of decisions that are easy, but we can't make that obvious. All right. So, experiments on medium-size games. 
So, it turns out that our reach subgame solving technique that I just described is about three times less exploitable", "start_timestamp": "00:38:24", "end_timestamp": "00:38:54", "start_second": 2304, "end_second": 2334, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=2304s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "than prior safe subgame solving techniques. And nested subgame solving, this idea of applying subgame solving repeatedly as you go down the game tree, is 12 times less exploitable than the prior state of the art, which is to just say, well, if the guy bet $60, and we have in our precomputed solution a strategy for if he had bet $50, then we'll round it and treat it as if he had bet $50 instead. That was the previous state of the art. So, this is 12 times less exploitable than that in Heads-up No-Limit Texas Hold'em.", "start_timestamp": "00:38:54", "end_timestamp": "00:39:24", "start_second": 2334, "end_second": 2364, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=2334s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "Sorry, in smaller versions of Heads-up No-Limit Texas Hold'em. Okay. So, that is one reason why imperfect-information games are hard. There is a second reason that I wanted to get to, and I think I still have time to do it. This is more recent research. The second reason is that states don't have well-defined values in imperfect-information games. So, let me show you what I mean here. 
In a perfect-information game, and in single-agent settings, if you take an action, your opponent takes an action, and you find yourself in", "start_timestamp": "00:39:24", "end_timestamp": "00:39:53", "start_second": 2364, "end_second": 2393, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=2364s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "a particular decision point, a particular subgame. You don't solve to the end of that subgame, right? You do what's called depth-limited reasoning. So, the remainder of the game is too large, so you come up with a strategy for the next several actions. Then, once you get to a certain depth limit, you say, okay, I've looked far enough, I'm just going to substitute a value for this leaf node, and say the value of this arrangement of pieces on the board looks like player one, white, wins with 60 percent probability.", "start_timestamp": "00:39:53", "end_timestamp": "00:40:22", "start_second": 2393, "end_second": 2422, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=2393s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "So, you assign a value to that state, and then you solve the depth-limited subgame using those values at the leaf nodes. It turns out that this does not work in imperfect-information games. I can give you a really simple example of why. This is a game that I call Rock-Paper-Scissors+. It's exactly like rock-paper-scissors, except if either player throws, what is it? Scissors? Yes. If either player throws scissors, then the winner gets two points, and the loser loses two points. 
That's just to break the symmetry of the game.", "start_timestamp": "00:40:22", "end_timestamp": "00:40:53", "start_second": 2422, "end_second": 2453, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=2422s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "So, the Nash equilibrium in this game is to throw rock and paper with 40 percent probability each, and scissors with 20 percent probability. Now, imagine that we're trying to do a depth-limited solving of this game as player one. So, we look one move ahead, and then we're going to substitute the Nash equilibrium value at each of those states instead of going to the end of the game. This is the depth-limited subgame. It's really easy to see that if we were to try to solve this depth-limited subgame,", "start_timestamp": "00:40:53", "end_timestamp": "00:41:25", "start_second": 2453, "end_second": 2485, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=2453s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "there is no way that we're going to find the optimal strategy of 40 percent, 40 percent, 20 percent. Right? There's just not enough information in this depth-limited subgame to find the Nash equilibrium. Why is that? Well, it turns out the reason is because we are essentially assuming that beyond this decision point, player two is going to play the Nash equilibrium strategy. Right? 
That's where we got the 0, 0, 0 from, this is assuming that if we had chosen scissors, and player two plays the Nash equilibrium strategy beyond this point,", "start_timestamp": "00:41:25", "end_timestamp": "00:41:55", "start_second": 2485, "end_second": 2515, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=2485s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "this expected value is zero. But in reality, player two's strategy beyond the depth-limit depends on what our strategy is above the depth-limit. If we choose rock 80 percent of the time, player two's strategy isn't going to be to play the Nash equilibrium, it's going to be choosing paper 100 percent of the time. So, this is what the state values will look like; or if we were to choose paper 80 percent of the time, then they would switch to always choosing scissors, and this is what the state values would look like.", "start_timestamp": "00:41:55", "end_timestamp": "00:42:23", "start_second": 2515, "end_second": 2543, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=2515s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "So, in an imperfect-information game, the values of the states at the depth-limit depend on what our policy is in the earlier parts of the game. So, how do we deal with this? Well, one option is to just actually make the state values dependent on our policy, and say, \"Okay. Well, the value of a state is a function of the description of that state, and our policy for the entire game.\" Well, that is theoretically correct, but the problem is that's extremely expensive, I mean absurdly expensive. 
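The Rock-Paper-Scissors+ example can be made concrete in a few lines. This is my own illustrative sketch (the payoff-matrix encoding and the helper name `action_values` are assumptions, not from the talk): it verifies that against the 40/40/20 equilibrium every opponent action is worth exactly zero to us, which is where the 0, 0, 0 leaf values come from, and then shows how those values shift once our policy above the depth limit changes.

```python
# Rock-Paper-Scissors+ sketch (illustrative; names are my own, not the talk's).
# Player 1 payoffs; rows = our action, cols = opponent action (R, P, S).
# Scissors doubles the stakes, which breaks the symmetry of plain RPS.
PAYOFF = [
    [0, -1, 2],   # we play Rock
    [1, 0, -2],   # we play Paper
    [-2, 2, 0],   # we play Scissors
]

def action_values(our_policy):
    """Our expected value against each pure opponent action, given our mix."""
    return [sum(our_policy[i] * PAYOFF[i][j] for i in range(3)) for j in range(3)]

# At the 40/40/20 equilibrium every opponent action yields us exactly 0,
# so a single "state value" of 0 looks plausible at the depth limit.
print(action_values([0.4, 0.4, 0.2]))  # -> [0.0, 0.0, 0.0]

# But if we over-play Rock, the opponent best-responds with Paper, and the
# values at the depth limit change: they depend on our earlier policy.
print(action_values([0.8, 0.1, 0.1]))
```

Against the skewed policy, the opponent's best response is Paper (the second entry becomes our worst outcome), so the fixed 0, 0, 0 leaf values no longer describe what actually happens beyond the depth limit.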
The other option is something that, well,", "start_timestamp": "00:42:23", "end_timestamp": "00:42:56", "start_second": 2543, "end_second": 2576, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=2543s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "this AI called DeepStack actually did. They condition the value of a state on the belief distribution of both players at that state. So, they said, okay. Well, at this decision point, I'm not going to condition on the strategy for the early part of the game, I'm going to look at all the different states that I might be in, and the probability that I believe I'm in each of those states, which in this case is 80 percent, 10 percent, 10 percent. In this game, it ends up being the same exact thing as just", "start_timestamp": "00:42:56", "end_timestamp": "00:43:24", "start_second": 2576, "end_second": 2604, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=2576s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "conditioning on the policy, but in general that's not the case. The problem is that this is still extremely expensive. So DeepStack for example, in Heads-up No-Limit Texas Hold'em, used 1.5 million core hours of computation, and could not beat prior top AIs. The other problem is that the technique currently does not scale to larger games, basically games where you have more states in an information set. 
You can get by with this in Heads-up No-Limit Texas Hold'em, because in any single decision point,", "start_timestamp": "00:43:24", "end_timestamp": "00:43:56", "start_second": 2604, "end_second": 2636, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=2604s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "in every single information set, there are only about 1,000 different states in that information set. But in a game like Five Card Draw for example, there could be 5 billion, or in a game like Stratego, it could be 10 to the 20th or something. So, this would not work in those larger games. So, what we do instead is in this paper that we just had accepted at NIPS 2018, called depth-limited solving; we use this in an AI we created called Modicum, and let me walk you through the algorithm here. The idea is, instead of assuming", "start_timestamp": "00:43:56", "end_timestamp": "00:44:29", "start_second": 2636, "end_second": 2669, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=2636s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "that there is a single value at this depth-limit, we're actually going to let the opponent choose between multiple different values for these states. We create these different values in an iterative process. So, we start off by assuming that player two, beyond the depth-limit, is going to play the Nash equilibrium strategy that we precomputed. 
Then, we solve this depth-limited subgame, which in this case means, let's say, we solve it, and we say our strategy is going to be one-third, one-third, one-third probability.", "start_timestamp": "00:44:29", "end_timestamp": "00:45:00", "start_second": 2669, "end_second": 2700, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=2669s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "Now, we're going to calculate a player two best response. So, we can say, okay. Well, if our strategy here is to play one-third, one-third, one-third, player two could exploit us by always choosing rock. Now, we add that best response to the set of strategies that player two can choose at the depth-limit. So, now, we're going to solve this depth-limited subgame again, and we're going to say, at this depth-limit, now player two can choose. They can either choose the Nash equilibrium strategy that we had before or they can", "start_timestamp": "00:45:00", "end_timestamp": "00:45:32", "start_second": 2700, "end_second": 2732, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=2700s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "choose this best response that we just calculated, which was always play rock. They make this decision jointly because all of these states share an information set; they can't say, \"Okay, well, in this state, I'm going to choose this policy, and in this state I'm going to choose this policy.\" They have to make the same decision at each of these different states that share an information set. So, we solve this depth-limited subgame again. 
Then, we again calculate a player two best response to that strategy, and then we add that strategy to the set of", "start_timestamp": "00:45:32", "end_timestamp": "00:45:59", "start_second": 2732, "end_second": 2759, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=2732s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "best responses that player two can choose at the depth-limit. We can repeat this process as many times as we want. Now, some details about this technique. It might seem like this is really expensive, and may not get good performance because we can't add like a million different strategies for player two to choose at the depth limit, but it turns out that because they are making this choice separately at each information set, they're essentially able to, even if we only give them a choice between 10 different strategies,", "start_timestamp": "00:45:59", "end_timestamp": "00:46:28", "start_second": 2759, "end_second": 2788, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=2759s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "beyond the depth limit, if they are making that choice individually at 100 different information sets, they're actually choosing between 10 to the 100 different strategies for the entire remainder of the game. So, it actually grows very quickly. The other thing is player one. I talked about what player two does, so player two is choosing between these different strategies that they could play for the remainder of the game; player one, we're going to assume, is playing the approximate Nash equilibrium. 
Player one is us, we're going", "start_timestamp": "00:46:28", "end_timestamp": "00:46:57", "start_second": 2788, "end_second": 2817, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=2788s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "to assume that we're playing according to the approximate Nash equilibrium strategy for the remainder of the game. The set of player two's strategies is precomputed; it's not determined in real time. We could do it in real time, but it would be too expensive in practice. Okay. So, this is what performance looks like if we do this depth-limited solving thing. So, on the x-axis here, we have the number of values per leaf node that the opponent can choose between at the depth limit, and on the y-axis, we have exploitability measured in milli-big-blinds per game.", "start_timestamp": "00:46:57", "end_timestamp": "00:47:29", "start_second": 2817, "end_second": 2849, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=2817s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "This is a simplified version of No-Limit Texas Hold'em that only has two betting rounds instead of four. You can see, if we assume that each state has a unique value, which is essentially the Nash equilibrium value like we would in a perfect-information game, exploitability is extremely high. But as we add more strategies for the opponent to choose between at the depth limit, exploitability drops off very quickly. 
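The iterative procedure described here can be sketched end-to-end on the Rock-Paper-Scissors+ toy game. This is my own minimal sketch, not the Modicum implementation: regret matching stands in for the real subgame solver, and names like `solve_restricted` are hypothetical. Starting with only the precomputed blueprint continuation, our strategy is badly exploitable; as best responses are added to the opponent's choice set at the depth limit, exploitability drops off.

```python
# Toy sketch of depth-limited solving on Rock-Paper-Scissors+ (illustrative only).
PAYOFF = [[0, -1, 2], [1, 0, -2], [-2, 2, 0]]  # our payoff: rows = us, cols = opponent

def leaf_value(i, q):
    # Our value for action i if the opponent continues with mixed strategy q.
    return sum(q[j] * PAYOFF[i][j] for j in range(3))

def match(regrets):
    # Regret matching: play in proportion to positive regret, else uniformly.
    pos = [max(r, 0.0) for r in regrets]
    s = sum(pos)
    return [x / s for x in pos] if s > 0 else [1.0 / len(pos)] * len(pos)

def solve_restricted(Q, iters=20000):
    # Depth-limited game: we mix over R/P/S, the opponent picks one strategy in Q.
    r_us, r_opp, avg = [0.0] * 3, [0.0] * len(Q), [0.0] * 3
    for _ in range(iters):
        p, w = match(r_us), match(r_opp)
        u_us = [sum(w[k] * leaf_value(i, Q[k]) for k in range(len(Q))) for i in range(3)]
        u_opp = [-sum(p[i] * leaf_value(i, Q[k]) for i in range(3)) for k in range(len(Q))]
        ev_us = sum(p[i] * u_us[i] for i in range(3))
        ev_opp = sum(w[k] * u_opp[k] for k in range(len(Q)))
        for i in range(3):
            r_us[i] += u_us[i] - ev_us
            avg[i] += p[i]
        for k in range(len(Q)):
            r_opp[k] += u_opp[k] - ev_opp
    return [x / iters for x in avg]  # average strategy

def exploitability(p):
    # How much a full-game best response earns against our strategy p.
    return -min(sum(p[i] * PAYOFF[i][j] for i in range(3)) for j in range(3))

Q = [[0.4, 0.4, 0.2]]  # start: opponent plays the precomputed blueprint Nash
expls = []
for _ in range(5):
    p = solve_restricted(Q)
    expls.append(exploitability(p))
    br = min(range(3), key=lambda j: sum(p[i] * PAYOFF[i][j] for i in range(3)))
    pure = [1.0 if j == br else 0.0 for j in range(3)]
    if pure not in Q:
        Q.append(pure)  # the opponent may also choose this best response next time
print([round(e, 3) for e in expls])
```

With only the blueprint continuation the solver is indifferent and ends up fully exploitable (one-third of a point per hand); once the opponent's choice set contains its best responses, exploitability collapses toward zero, mirroring the double-oracle flavor the audience comment mentions.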
In fact, with 16 different choices, we're essentially at Nash equilibrium.", "start_timestamp": "00:47:29", "end_timestamp": "00:48:02", "start_second": 2849, "end_second": 2882, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=2849s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "So, the beautiful thing about this. Yes. >> So why is that? [inaudible] blue will be the same. >> No. Because actually blue, sorry, so for blue here, it's not doing any real-time reasoning, it's doing this like, if they had bet $60, I'm going to round that to $50. So, red is actually doing real-time reasoning, but it's assuming that each state has a well-defined unique value. >> So, [inaudible] generation or frequency or something, so what [inaudible]. >> There are a lot of similarities between this", "start_timestamp": "00:48:02", "end_timestamp": "00:48:34", "start_second": 2882, "end_second": 2914, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=2882s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "and things like double oracle methods and things like that. Yes. All right. So, in terms of head-to-head performance, the really cool thing about this technique is that it allows us to make a really strong poker AI using very few resources. To give you some perspective, we had this bot called Tartanian8 that we made in 2016, which won the Annual Computer Poker Competition, which is a competition among poker AIs. 
It used two million core hours of computation, 18 terabytes of memory, and there's no real-time reasoning.", "start_timestamp": "00:48:34", "end_timestamp": "00:49:03", "start_second": 2914, "end_second": 2943, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=2914s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "Then we have Slumbot, that wasn't us, this is a different bot, which won the 2018 competition; it used 250,000 core hours, two terabytes of memory, no real-time reasoning. Modicum, which is the bot that uses this depth-limited solving, uses just 700 core hours, 16 gigabytes of memory, plus real time with a 4-core CPU in under 20 seconds per hand, and it beats both of those other bots. So, to put this in perspective even further, Libratus, which is the AI that we played against the top humans, used millions of core hours.", "start_timestamp": "00:49:03", "end_timestamp": "00:49:34", "start_second": 2943, "end_second": 2974, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=2943s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "I think it was something like two million or five million core hours, probably 20 terabytes of memory, and played in real time using 1,000 cores. So, we're able to get what is essentially probably superhuman, we haven't actually tested against humans but I'm pretty sure this is a superhuman poker AI. We're able to get basically superhuman performance using the resources in a laptop. 
In fact, since I published this paper, I've just put another paper on arXiv where we figured out how to make this three times faster.", "start_timestamp": "00:49:34", "end_timestamp": "00:50:04", "start_second": 2974, "end_second": 3004, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=2974s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "So, it could probably run on a smartphone now. Yeah. >> Where is the [inaudible] you can just run. >> It turns out that there is a huge amount of variance in poker. >> Yes. >> Because we're doing real-time reasoning and we're taking 20 seconds per hand, the variance is massive. In fact, we actually, we trained this using 700 core hours, but it took us like a million core hours to actually compute all these results. So, this has been a problem in the entire field, that the variance is just absurd. So, this is a graph of head-to-head performance.", "start_timestamp": "00:50:04", "end_timestamp": "00:50:37", "start_second": 3004, "end_second": 3037, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=3004s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "Here we have Libratus, which beat top humans. Here is Modicum, which is the AI we just created that uses way fewer resources. Here are some other benchmark bots. A bot from 2016, a bot from 2014. Here is DeepStack, which is a bot from the University of Alberta, which actually has very low exploitability. But in terms of head-to-head performance, it didn't end up being that strong. It also uses this real-time reasoning as well, though a different form of it. All right, so the key takeaways, yes. 
>> You said that one doesn't have,", "start_timestamp": "00:50:37", "end_timestamp": "00:51:11", "start_second": 3037, "end_second": 3071, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=3037s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "it has low exploitability but it's not that strong? >> In terms of head-to-head performance it's not as strong. So, in terms of head-to-head performance it actually doesn't beat prior benchmark bots. >> Yes, I guess that's curious to me, because if you're not exploitable. Okay so-. >> When I say low exploitability, I mean just relative to the past bots. So, the exploitability is still, it could be extremely high, we actually don't know. We can't calculate exploitability exactly in Heads-up No-Limit Texas Hold'em, so we don't know.", "start_timestamp": "00:51:11", "end_timestamp": "00:51:42", "start_second": 3071, "end_second": 3102, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=3071s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "But it appears that it has lower exploitability compared to the absurdly high exploitability of the previous bots. Yes. So, key takeaways. In real-time planning, you always have to consider how the opponent can adapt to changes in your policy. That is something that is really important in imperfect-information games. In perfect-information games you can mostly ignore that, but not completely. Imperfect-information subgames cannot be solved in isolation. 
States in imperfect-information games do not have well-defined values.", "start_timestamp": "00:51:42", "end_timestamp": "00:52:20", "start_second": 3102, "end_second": 3140, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=3102s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "All right. So, I have done some other work that I did not discuss. One thing I did not discuss, well, I guess I talked about it briefly, is how we actually solve these games. We use an algorithm called counterfactual regret minimization, which has been around for about 10 years now. It works extremely well, even though the theoretical guarantee on convergence is only 1 over square root of t. I just had a paper that I released, where I developed a new form of CFR which beats the prior state of the art by a factor of three.", "start_timestamp": "00:52:20", "end_timestamp": "00:52:49", "start_second": 3140, "end_second": 3169, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=3140s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "So, I'm really excited about that, and that's going to be used in all the future research. I have some work on pruning in CFR. So, it turns out that in CFR you end up exploring the entire game tree, which is a big waste, because a lot of actions are suboptimal and you don't want to waste time coming up with what you should do if you play a really crappy poker hand, because in reality you just fold it right away. 
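The regret-matching update at the heart of counterfactual regret minimization fits in a few lines. This is my own illustrative sketch, not the speaker's implementation: running it in self-play on the earlier Rock-Paper-Scissors+ game drives the average strategy toward the 40/40/20 equilibrium.

```python
# Regret matching, the per-decision-point update inside CFR (illustrative sketch).
PAYOFF = [[0, -1, 2], [1, 0, -2], [-2, 2, 0]]  # Rock-Paper-Scissors+ payoffs for player 1

def match(regrets):
    # Play each action in proportion to its positive cumulative regret.
    pos = [max(r, 0.0) for r in regrets]
    s = sum(pos)
    return [x / s for x in pos] if s > 0 else [1.0 / 3] * 3

r1, r2 = [0.0] * 3, [0.0] * 3  # cumulative regrets for both players
avg = [0.0] * 3                # running sum of player 1's strategies
T = 100000
for t in range(T):
    p, q = match(r1), match(r2)
    u1 = [sum(q[j] * PAYOFF[i][j] for j in range(3)) for i in range(3)]
    u2 = [-sum(p[i] * PAYOFF[i][j] for i in range(3)) for j in range(3)]
    ev1 = sum(p[i] * u1[i] for i in range(3))
    ev2 = sum(q[j] * u2[j] for j in range(3))
    for a in range(3):
        r1[a] += u1[a] - ev1   # regret for not having played action a
        r2[a] += u2[a] - ev2
        avg[a] += p[a]
avg = [x / T for x in avg]
print([round(x, 3) for x in avg])  # approaches the 40/40/20 equilibrium
```

Note it is the average strategy, not the last iterate, that converges; the current strategies keep cycling, which is the usual behavior of regret-matching dynamics in zero-sum games.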
So, I have this pruning technique that provably reduces the computing and memory costs of running CFR asymptotically.", "start_timestamp": "00:52:49", "end_timestamp": "00:53:19", "start_second": 3169, "end_second": 3199, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=3169s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "In practice, it speeds things up by an order of magnitude or two. I also have a paper on determining the optimal actions in a continuous action space. So, one of the interesting things about No-Limit Texas Hold'em is that you have this continuous action space where you can choose to bet any amount between $100 and $20,000. The truth is that what we do right now is just domain-specific abstraction techniques, where we say okay, well, you probably just want to bet either half the pot or one times the pot or two times the pot,", "start_timestamp": "00:53:19", "end_timestamp": "00:53:49", "start_second": 3199, "end_second": 3229, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=3199s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "and it doesn't really matter if you are betting 0.6 times the pot or 0.5 times the pot. But that relies on domain knowledge that we know the optimal bet fractions are roughly in that range. So, it'll be nice to have an algorithm that doesn't rely on domain knowledge, that can actually determine the bets to use without any human knowledge. So, that's what this 2014 paper does, and hopefully we'll have some follow-up work on that in the future. For future directions. 
So, I mentioned before, a lot of this work has been looking at poker as a domain,", "start_timestamp": "00:53:49", "end_timestamp": "00:54:24", "start_second": 3229, "end_second": 3264, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=3229s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "and that's because it takes a lot of infrastructure to actually build up to do experiments on large games. So, we have a lot of expertise that has been developed over the years on how to run these techniques efficiently on a game like poker. And if we wanted to test on another large game, it would take years to build up that expertise on how to do experiments on those other games. There's also no good way to compare performance to other bots in other games, because there are no other games where", "start_timestamp": "00:54:24", "end_timestamp": "00:54:50", "start_second": 3264, "end_second": 3290, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=3264s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "there are bots that are competitive. But I would love to move beyond poker, in a few different directions. One is, we have these techniques for perfect-information games like AlphaZero, and we have these techniques for imperfect-information games like poker, and it'll be nice to bridge the gap and find a single algorithm that works really well in all these different games. Another thing that I'm really interested in is going beyond two-player zero-sum games. 
So, I mentioned that if you try to move on", "start_timestamp": "00:54:50", "end_timestamp": "00:55:19", "start_second": 3290, "end_second": 3319, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=3290s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "to general-sum games there are a bunch of theoretical challenges that pop up. So, right now, we don't know how to cope with those challenges. But dealing with general-sum games is really important because most real-world situations are general-sum and not zero-sum, except for like maybe military interactions or security. So, in particular, working on something like negotiation, I think, is a really interesting line of research. In general, moving things more towards real-world domains, I think we're at the point right now where we", "start_timestamp": "00:55:19", "end_timestamp": "00:55:49", "start_second": 3319, "end_second": 3349, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=3319s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "can actually start bringing these techniques into the real world, and I think that's going to be a really interesting line of research as well. All right. So, I'll stop there and I'll take some last-minute questions. Thank you. >> Yes so, I guess, in the real world often you don't know the rules of the game, you don't know them in advance, [inaudible] situation. >> Right. >> But you do observe the outcomes of players. 
Have you thought about what happens when you go into a situation where you don't know", "start_timestamp": "00:55:49", "end_timestamp": "00:56:20", "start_second": 3349, "end_second": 3380, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=3349s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "the rules of the game but you can observe the outcomes of players? >> That's a good question, so yes. So, all of our work assumes that we have an accurate model of the world or the game that's being played. I think a lot of these techniques will carry over if you want to try to figure out the structure of the game as you go. There was actually a paper from another student at CMU recently on this problem. So, people are starting to look in this direction; I have not. But it's something that I think is very interesting.", "start_timestamp": "00:56:20", "end_timestamp": "00:56:46", "start_second": 3380, "end_second": 3406, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=3380s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "It is also a way harder problem, because to figure out what information your opponent knows, or figuring out what they could know, is a really difficult challenge. I'm not sure how to go about that. Yes. >> I think in many applications in something like reinforcement learning. So, right now I can imagine splitting the environment into the portion which I can model, that will be like chess moves, and then setting aside the parts that I don't necessarily want to model, because I'm very risk averse. 
So, that could become like adversarial moves,", "start_timestamp": "00:56:46", "end_timestamp": "00:57:20", "start_second": 3406, "end_second": 3440, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=3406s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "and I want to be robust. Have you thought about how well other techniques would apply, or is that completely out of the ballpark? >> To be honest, I never really considered that direction for the research. I think there's a lot of potential here. This is an area of research that I think has been overlooked by a lot of people. So, it's actually been a very small community that has been working on the space, and I think people are starting to appreciate that it can be applied to a lot of different things. We're starting to see papers on how", "start_timestamp": "00:57:20", "end_timestamp": "00:57:50", "start_second": 3440, "end_second": 3470, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=3440s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "to apply something like counterfactual regret minimization to more mainstream reinforcement learning topics. So, I think there was a paper from Berkeley recently on regret minimization in single-agent settings. So, I think there is definitely potential to extend the research to the more traditional reinforcement learning settings, but I have not looked into that yet. >> [inaudible] a little bit, is there anything on learning? 
So, in the real world one of the things that's really difficult is, I don't know the rules of the game", "start_timestamp": "00:57:50", "end_timestamp": "00:58:24", "start_second": 3470, "end_second": 3504, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=3470s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "McV4a6umbAY", "text": "and I really have no idea what my opponent's payoffs are, often. Is that something that people have looked at? Trying to basically think about simultaneously trying to improve my own payoffs and trying to get a model of what's going on with my opponent. >> Yes. I guess that would be kind of a subset of the previous case we discussed, where it will be like, maybe you know the structure of the game but you don't know what the opponent's payoffs are. I think this has been looked at in the game theory community, but more in simple cases not", "start_timestamp": "00:58:24", "end_timestamp": "00:58:58", "start_second": 3504, "end_second": 3538, "url": "https://www.youtube.com/watch?v=McV4a6umbAY&t=3504s", "title": "AI for Imperfect-Information Games: Beating Top Humans in No-Limit Poker", "thumbnail": "https://i.ytimg.com/vi/McV4a6umbAY/maxresdefault.jpg"} {"video_id": "6tbHYvH347A", "text": "[Music] from the earliest writings of men we know that the human race has been comprised of the haves and the have-nots when I was a kid back during the Great Depression I was obsessed with a desire to know what invisible something separated the haves from the have-nots being a have-not I wanted to know why so few manage to be well-off financially in a country where success is available to everyone for example in checking the Statistical Abstract of the United States published by the Bureau of the Census I discovered just lately", "start_timestamp": "00:00:00", "end_timestamp": "00:00:47", "start_second": 0, "end_second": 47, 
"url": "https://www.youtube.com/watch?v=6tbHYvH347A&t=0s", "title": "Change Your Life in 19 Minutes with Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/6tbHYvH347A/maxresdefault.jpg"} {"video_id": "6tbHYvH347A", "text": "that only 10% of the men in this country 65 years of age and older have incomes of $6,000 or more a year more than 80 percent of all men 65 or older have incomes under four thousand a year only seven point six percent have incomes between seven and ten thousand a year and only three point seven percent have incomes of ten thousand a year or more a man starts his working career in his 20s often earlier he's fortunate in that he lives in the free world he has better than 40 years to make the grade financially in the richest country on", "start_timestamp": "00:00:47", "end_timestamp": "00:01:20", "start_second": 47, "end_second": 80, "url": "https://www.youtube.com/watch?v=6tbHYvH347A&t=47s", "title": "Change Your Life in 19 Minutes with Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/6tbHYvH347A/maxresdefault.jpg"} {"video_id": "6tbHYvH347A", "text": "earth yet according to these statistics only about ten out of a hundred will be financially secure by the time 65 rolls around and only about four men out of a hundred will be financially comfortable now why let me tell you how to find out for yourself conduct your own survey start down the street in your neighborhood on any Saturday or Sunday and ask the man of every house two questions the first question is what are you doing at the present time to increase your income now that is how much do you want to earn", "start_timestamp": "00:01:20", "end_timestamp": "00:01:53", "start_second": 80, "end_second": 113, "url": "https://www.youtube.com/watch?v=6tbHYvH347A&t=80s", "title": "Change Your Life in 19 Minutes with Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/6tbHYvH347A/maxresdefault.jpg"} {"video_id": "6tbHYvH347A", "text": "and when you've evaluated the blank stare you get
in response to that question ask question number 2 which goes how much money are you planning to be worth at age 65 and when the silence becomes too unnerving thank him and move on to the next house ask 50 men a hundred a thousand until you're completely convinced that the reason men don't make more money during their working lives and the reason they're not financially independent by the time they're sixty-five is simply that they seldom if ever do any constructive", "start_timestamp": "00:01:53", "end_timestamp": "00:02:22", "start_second": 113, "end_second": 142, "url": "https://www.youtube.com/watch?v=6tbHYvH347A&t=113s", "title": "Change Your Life in 19 Minutes with Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/6tbHYvH347A/maxresdefault.jpg"} {"video_id": "6tbHYvH347A", "text": "thinking on either subject it's that simple unfortunately the reason it is so easy to earn far more money than the average man earns in this country is that so few so very few are going about it the right way this is a race without enough contestants to bother about the few who are really in the race can all be winners some will finish ahead of the others but even the man who finishes last in this race will be financially secure most people more than 90% aren't even in the race to prove it ask yourself the two survey questions", "start_timestamp": "00:02:22", "end_timestamp": "00:02:56", "start_second": 142, "end_second": 176, "url": "https://www.youtube.com/watch?v=6tbHYvH347A&t=142s", "title": "Change Your Life in 19 Minutes with Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/6tbHYvH347A/maxresdefault.jpg"} {"video_id": "6tbHYvH347A", "text": "up until the time you started listening to this message what were your plans for increasing your income how much do you want to earn and how much money had you decided to be worth by the time you're 65 you see people who earn large incomes aren't lucky and they're not crooks as those without money are so fond of pretending nor
are they endowed with more brains or talent necessarily than their friends and neighbors nor are they privy to occult secrets and only a very few were lucky enough to have had rich fathers or grandfathers most of the", "start_timestamp": "00:02:56", "end_timestamp": "00:03:29", "start_second": 176, "end_second": 209, "url": "https://www.youtube.com/watch?v=6tbHYvH347A&t=176s", "title": "Change Your Life in 19 Minutes with Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/6tbHYvH347A/maxresdefault.jpg"} {"video_id": "6tbHYvH347A", "text": "people earning the big incomes today started the same way you and I did and most other people the only difference between the men who earned big incomes and those who earn small incomes is that those earning big incomes decided to earn more they're the people who made it their business to earn more you see a woman who does not think about baking an apple pie for dinner tonight will never think of looking up the recipe for apple pie without the decision for pie there's no motivation for checking out the recipe a man who", "start_timestamp": "00:03:29", "end_timestamp": "00:04:02", "start_second": 209, "end_second": 242, "url": "https://www.youtube.com/watch?v=6tbHYvH347A&t=209s", "title": "Change Your Life in 19 Minutes with Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/6tbHYvH347A/maxresdefault.jpg"} {"video_id": "6tbHYvH347A", "text": "does not think about driving his car to St.
Louis Missouri or Nacogdoches Texas will never get road maps which show how to get to St. Louis or to Nacogdoches and a man who never decides to earn more money will never think of learning how of looking up the rules for earning more money you see people do what they make up their minds to do so get rid of the ancient superstition once and for all that people who earn big money are special people are lucky or get the breaks or had money to begin with or knew someone or are smarter or", "start_timestamp": "00:04:02", "end_timestamp": "00:04:33", "start_second": 242, "end_second": 273, "url": "https://www.youtube.com/watch?v=6tbHYvH347A&t=242s", "title": "Change Your Life in 19 Minutes with Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/6tbHYvH347A/maxresdefault.jpg"} {"video_id": "6tbHYvH347A", "text": "anything else these are alibis they can all be disproved a thousand times the reason there are so many of these alibis around is that men who failed to make the grade financially are seldom honest enough to just admit that they really didn't try and keep trying so in order to justify their failure in order to remain seated they dream up and pass along these old alibis we're all self-made but only the successful will admit it I had occasion to visit Charleston South Carolina I'd never been there before so I hired a taxi to drive me", "start_timestamp": "00:04:33", "end_timestamp": "00:05:06", "start_second": 273, "end_second": 306, "url": "https://www.youtube.com/watch?v=6tbHYvH347A&t=273s", "title": "Change Your Life in 19 Minutes with Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/6tbHYvH347A/maxresdefault.jpg"} {"video_id": "6tbHYvH347A", "text": "around the historic old town I particularly wanted to see the battery where that famous shot was fired on Fort Sumter along this beautiful drive some of Charleston's oldest and finest homes look out over the bay I commented to my cab driver on what lovely homes they were and he said yes some of those
homes have 40 rooms and then he thought a moment he said and every one of them is owned by a crook this is how the have-nots justify themselves and their lot in life I didn't say anything because I didn't feel I was entitled to", "start_timestamp": "00:05:06", "end_timestamp": "00:05:33", "start_second": 306, "end_second": 333, "url": "https://www.youtube.com/watch?v=6tbHYvH347A&t=306s", "title": "Change Your Life in 19 Minutes with Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/6tbHYvH347A/maxresdefault.jpg"} {"video_id": "6tbHYvH347A", "text": "advise him or try to straighten out his thinking this is a free country where as long as he doesn't hurt others everyone has the inalienable right to be just as wrong as he wants to be as Thomas R Lounsbury the American scholar and educator put it we must view with profound respect the infinite capacity of the human mind to resist the inroads of useful knowledge my taxi driver and men and women like him all over the world have been kidding themselves and holding themselves down and refusing the bounty and abundance of the world for", "start_timestamp": "00:05:33", "end_timestamp": "00:06:03", "start_second": 333, "end_second": 363, "url": "https://www.youtube.com/watch?v=6tbHYvH347A&t=333s", "title": "Change Your Life in 19 Minutes with Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/6tbHYvH347A/maxresdefault.jpg"} {"video_id": "6tbHYvH347A", "text": "centuries knowledge is available to everyone we can either listen to those qualified to teach us or we can go along with those ancient stumbling blocks we get from people who don't know any more than we do the truth incidentally about those homes along that beautiful Drive is that they were built by the men and women who made the largest contribution to the city of Charleston in just a moment I'm going to give you the formula for getting rich but before I do I want to remind you of something before a jet pilot begins his takeoff from an airport", 
"start_timestamp": "00:06:03", "end_timestamp": "00:06:31", "start_second": 363, "end_second": 391, "url": "https://www.youtube.com/watch?v=6tbHYvH347A&t=363s", "title": "Change Your Life in 19 Minutes with Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/6tbHYvH347A/maxresdefault.jpg"} {"video_id": "6tbHYvH347A", "text": "he carefully goes over a checklist item by item he does this not only because it's required by law but because he cannot afford to trust so important a job to his memory alone he has another checklist that he goes over just as carefully before he begins his letdown at his destination he does this without fail every time he takes off and every time he lands well I think living successfully is as important as flying an airplane and because of this I think each of us needs a checklist too and that's why there's one included with", "start_timestamp": "00:06:31", "end_timestamp": "00:07:01", "start_second": 391, "end_second": 421, "url": "https://www.youtube.com/watch?v=6tbHYvH347A&t=391s", "title": "Change Your Life in 19 Minutes with Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/6tbHYvH347A/maxresdefault.jpg"} {"video_id": "6tbHYvH347A", "text": "this cassette we need a checklist to go over item by item before we take off in the morning and before we drop off to sleep every night so I want to recommend that you affix the checklist to your bathroom mirror stare at it as you brush your teeth in the morning and stare at it again as you prepare for bed at night go over each item and as you do think of what each item represents and here's number one it's the formula for getting rich it also explains why you're in your present position whether you're earning", "start_timestamp": "00:07:01", "end_timestamp": "00:07:29", "start_second": 421, "end_second": 449, "url": "https://www.youtube.com/watch?v=6tbHYvH347A&t=421s", "title": "Change Your Life in 19 Minutes with Earl Nightingale", "thumbnail": 
"https://i.ytimg.com/vi/6tbHYvH347A/maxresdefault.jpg"} {"video_id": "6tbHYvH347A", "text": "six thousand a year or sixteen thousand or sixty thousand or six hundred thousand it applies to every adult whether he's employed or unemployed it applies to the richest man and to the poorest and every person in between and here it is our rewards in life will always be in exact proportion to our contribution our service now that's what the formula means as the first item on your checklist memorize it our rewards in life will always be in exact proportion to our contribution our service listen to it think about it", "start_timestamp": "00:07:29", "end_timestamp": "00:08:04", "start_second": 449, "end_second": 484, "url": "https://www.youtube.com/watch?v=6tbHYvH347A&t=449s", "title": "Change Your Life in 19 Minutes with Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/6tbHYvH347A/maxresdefault.jpg"} {"video_id": "6tbHYvH347A", "text": "until you know it emotionally as well as intellectually it might give you some slight feeling of superiority to realize that there's probably not another man within a mile of where you live who knows it you can add it as a question on your survey if you want proof of that if you want it in another form here it is as it applies to a man's job it's the same thing really the same thing applies but you can express it differently the money you're paid by the company you work for will always be in direct ratio to the need for", "start_timestamp": "00:08:04", "end_timestamp": "00:08:32", "start_second": 484, "end_second": 512, "url": "https://www.youtube.com/watch?v=6tbHYvH347A&t=484s", "title": "Change Your Life in 19 Minutes with Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/6tbHYvH347A/maxresdefault.jpg"} {"video_id": "6tbHYvH347A", "text": "what you do your ability to do it and the degree of difficulty involved in replacing you maybe you want to write the formula down in both of its forms and think about it until it's as much a part of
you as your name the reason it isn't spelled out on your checklist is because you might not want everyone to know what you're up to the checklist is valuable only to a person who knows what the words really indicate all right you've got the formula as you think about it its meaning will become clearer to you with", "start_timestamp": "00:08:32", "end_timestamp": "00:08:58", "start_second": 512, "end_second": 538, "url": "https://www.youtube.com/watch?v=6tbHYvH347A&t=512s", "title": "Change Your Life in 19 Minutes with Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/6tbHYvH347A/maxresdefault.jpg"} {"video_id": "6tbHYvH347A", "text": "the formula there are two rules which must be applied to properly use it this formula together with the two rules is your recipe or your roadmap to earning all the money you really want now let's take a look at item number two on your checklist and I'm serious about your putting this checklist on your mirror you'll notice it's pressure sensitive item number two the gold mine the Pulitzer Prize-winning playwright Archibald MacLeish in his play The Secret of Freedom wrote the only thing about a man that is a man is his mind", "start_timestamp": "00:08:58", "end_timestamp": "00:09:28", "start_second": 538, "end_second": 568, "url": "https://www.youtube.com/watch?v=6tbHYvH347A&t=538s", "title": "Change Your Life in 19 Minutes with Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/6tbHYvH347A/maxresdefault.jpg"} {"video_id": "6tbHYvH347A", "text": "everything else you can find in a pig or a horse strong words aren't they but as long as you live you will never hear a truer statement the key to every human being's success lies in his mind the goldmine between his ears one idea can make you rich a lot of good ideas can move you steadily upward in the work you do and ideas are free and just think there's nothing now being done commercially that will not be done better much better in the years ahead next year's homes and most of
what's in them will be better than this year's", "start_timestamp": "00:09:28", "end_timestamp": "00:10:04", "start_second": 568, "end_second": 604, "url": "https://www.youtube.com/watch?v=6tbHYvH347A&t=568s", "title": "Change Your Life in 19 Minutes with Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/6tbHYvH347A/maxresdefault.jpg"} {"video_id": "6tbHYvH347A", "text": "next year's cars will be better next year's manufacturing distributing marketing and selling and advertising should be better nothing is now being done as well as it must be done in the future and every innovation every new improvement will be somebody's brainchild now what about you specifically how many good ideas have you come up with during the past year if you continue on as you have in the past where would you be and what will you be earning say a year from now five years from now every day of our lives we walk", "start_timestamp": "00:10:04", "end_timestamp": "00:10:35", "start_second": 604, "end_second": 635, "url": "https://www.youtube.com/watch?v=6tbHYvH347A&t=604s", "title": "Change Your Life in 19 Minutes with Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/6tbHYvH347A/maxresdefault.jpg"} {"video_id": "6tbHYvH347A", "text": "or drive by more opportunity than we could develop in a lifetime in 50 lifetimes back in the twenties Sinclair Lewis wrote that you can kidnap a man blindfolded and take him to any city in the country with a couple of notable exceptions put him in a chair in the downtown area take off his blindfold and he could sit there a week and not be able to tell you what town he's in the streets are all alike the buildings are all alike the businesses all look alike this is still largely true today the reason for this being that most business", "start_timestamp": "00:10:35", "end_timestamp": "00:11:05", "start_second": 635, "end_second": 665, "url": "https://www.youtube.com/watch?v=6tbHYvH347A&t=635s", "title": "Change Your Life in 19 Minutes with Earl Nightingale", 
"thumbnail": "https://i.ytimg.com/vi/6tbHYvH347A/maxresdefault.jpg"} {"video_id": "6tbHYvH347A", "text": "men in this country are playing a game called follow the follower if a man goes into business no matter what line it happens to be the first thing he does is make certain that his place of business outside and inside looks exactly like every other place of business of that type in the country do you know why it's because he's been playing copycat since he was a year old and does it without thinking about it for the same reason kids dress alike in school he wants to be one of the gang he doesn't necessarily examine all the", "start_timestamp": "00:11:05", "end_timestamp": "00:11:33", "start_second": 665, "end_second": 693, "url": "https://www.youtube.com/watch?v=6tbHYvH347A&t=665s", "title": "Change Your Life in 19 Minutes with Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/6tbHYvH347A/maxresdefault.jpg"} {"video_id": "6tbHYvH347A", "text": "business establishments in his field and pattern his after the one outstanding example the one that inspires him the one that he can really believe in he just does what everybody else in his business is doing and by this simple process he guarantees his own mediocrity whose drum are you marching to if indeed you're marching to anyone's and why remember whatever you now do for a living will be done differently quite differently a few years from now never in the history of mankind have the opportunities for all of us been so", "start_timestamp": "00:11:33", "end_timestamp": "00:12:02", "start_second": 693, "end_second": 722, "url": "https://www.youtube.com/watch?v=6tbHYvH347A&t=693s", "title": "Change Your Life in 19 Minutes with Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/6tbHYvH347A/maxresdefault.jpg"} {"video_id": "6tbHYvH347A", "text": "great but the great majority of people will be the beneficiaries of progress not those who bring it about which group would you rather belong to if you want to be a
contributor and not just a beneficiary here's the first rule it appears on your checklist as the gold mine so think think deliberately and with a purpose use the goldmine between your ears begin by thinking at a special time every day back during the Depression a New York lumber dealer was growing rich while other lumber dealers were going broke when asked how he did", "start_timestamp": "00:12:02", "end_timestamp": "00:12:34", "start_second": 722, "end_second": 754, "url": "https://www.youtube.com/watch?v=6tbHYvH347A&t=722s", "title": "Change Your Life in 19 Minutes with Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/6tbHYvH347A/maxresdefault.jpg"} {"video_id": "6tbHYvH347A", "text": "it he said every evening when I get home I close myself up in a quiet room sit in a comfortable chair and ask myself how will my business be conducted ten years from now then I try to do it now instead of competing with every other lumber dealer which is what they were doing he was creating he was doing the very thing man was designed to do the very thing man does best a company growing at the rate of 10% a year will double its size in less than eight years but a man can improve his effectiveness 50% or a hundred percent a year or more", "start_timestamp": "00:12:34", "end_timestamp": "00:13:05", "start_second": 754, "end_second": 785, "url": "https://www.youtube.com/watch?v=6tbHYvH347A&t=754s", "title": "Change Your Life in 19 Minutes with Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/6tbHYvH347A/maxresdefault.jpg"} {"video_id": "6tbHYvH347A", "text": "the experts tell us that every one of us has within him deep reservoirs of ability even genius that he habitually fails to use well let's begin now to reach into these deep rich areas of pure net profit and use more and more of our real abilities let's think here's the best way I've found to make yourself think start getting up a little earlier than you're accustomed to right off the bat this gives you extra
time that 95% of the men in this country are not using at all one hour earlier a day gives you six and a half extra 40-hour", "start_timestamp": "00:13:05", "end_timestamp": "00:13:36", "start_second": 785, "end_second": 816, "url": "https://www.youtube.com/watch?v=6tbHYvH347A&t=785s", "title": "Change Your Life in 19 Minutes with Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/6tbHYvH347A/maxresdefault.jpg"} {"video_id": "6tbHYvH347A", "text": "weeks a year but at this time in the morning take a refreshing shower dress get yourself a fresh hot cup of coffee if you're a coffee man and then sit down to a clean sheet of paper at the top of the paper write your financial goal now this is the amount of money per year you intend to earn soon incidentally you might like to keep this to yourself too - it's nobody's business but yours then start to think think about your goal and what it'll mean to you and your family then see how many ideas you can come up with to help you reach that goal ideas", "start_timestamp": "00:13:36", "end_timestamp": "00:14:07", "start_second": 816, "end_second": 847, "url": "https://www.youtube.com/watch?v=6tbHYvH347A&t=816s", "title": "Change Your Life in 19 Minutes with Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/6tbHYvH347A/maxresdefault.jpg"} {"video_id": "6tbHYvH347A", "text": "to improve what you now do for a living ways of increasing your contribution to match your income goal you know jobs don't have futures people do no matter what line of work you may be in there is within it more than enough opportunity to last a lifetime you don't have to think of brand new ideas or revolutionary new ways of doing things although you well might come up with them think of ways of improving what is now being done if you are to increase your income by the amount you've specified you must find ways of", "start_timestamp": "00:14:07", "end_timestamp": "00:14:37", "start_second": 847, "end_second": 877, "url": 
"https://www.youtube.com/watch?v=6tbHYvH347A&t=847s", "title": "Change Your Life in 19 Minutes with Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/6tbHYvH347A/maxresdefault.jpg"} {"video_id": "6tbHYvH347A", "text": "increasing your contribution your service and the heart of this is to be found in your mind in that goldmine between your ears try for five ideas every morning and write them down and save those sheets of paper in a special idea file many perhaps most of your ideas will be worthless but some of them will be very good a few will be excellent and every once in a while you will come up with something really outstanding you see five ideas a day is 25 a week if you don't think on weekends that's more than a thousand ideas a year", "start_timestamp": "00:14:37", "end_timestamp": "00:15:08", "start_second": 877, "end_second": 908, "url": "https://www.youtube.com/watch?v=6tbHYvH347A&t=877s", "title": "Change Your Life in 19 Minutes with Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/6tbHYvH347A/maxresdefault.jpg"} {"video_id": "6tbHYvH347A", "text": "one idea can get you to the income you're shooting for the law of averages swings so far in your favor you just can't miss try to develop a sense of expectancy that is try to hold the feeling that the goal you're shooting for is a sure thing and that it's only a matter of time before it's realized you know Henry Ford didn't start making cars until he was 45 a friend of mine started a new company at 65 he's still going strong and his new company has sales of better than 300 million dollars a year it's almost never", "start_timestamp": "00:15:08", "end_timestamp": "00:15:39", "start_second": 908, "end_second": 939, "url": "https://www.youtube.com/watch?v=6tbHYvH347A&t=908s", "title": "Change Your Life in 19 Minutes with Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/6tbHYvH347A/maxresdefault.jpg"} {"video_id": "6tbHYvH347A", "text": "too late try not to think of things outside of your own line
of work or whatever it is you're most interested in to think well and profitably you must discipline your thinking keep it on course and controlled keep it in one field specialize now for the final item on your checklist it appears as one word and the word is attitude attitude has been called the most important word in the language William James put it this way he said the greatest discovery of my generation is that human beings can alter their lives by altering their", "start_timestamp": "00:15:39", "end_timestamp": "00:16:11", "start_second": 939, "end_second": 971, "url": "https://www.youtube.com/watch?v=6tbHYvH347A&t=939s", "title": "Change Your Life in 19 Minutes with Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/6tbHYvH347A/maxresdefault.jpg"} {"video_id": "6tbHYvH347A", "text": "attitudes of mind now this is something to think about can alter their lives by altering their attitudes of mind it's another way of saying we become what we think about look at it this way if you've been an adult for any appreciable period of time your total environment is a reflection of you as a person the house or the neighborhood in which you live the car you drive the clothes you wear the job you do the people with whom you regularly associate your total environment is an exact and merciless", "start_timestamp": "00:16:11", "end_timestamp": "00:16:43", "start_second": 971, "end_second": 1003, "url": "https://www.youtube.com/watch?v=6tbHYvH347A&t=971s", "title": "Change Your Life in 19 Minutes with Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/6tbHYvH347A/maxresdefault.jpg"} {"video_id": "6tbHYvH347A", "text": "mirror of you as a human being if you feel your environment can stand some improvement you have only to improve your attitude and your world will gradually change to reflect the changing person here is how to change your attitude beginning now begin to act as would the person you most want to become now that is if you were
already in possession of the goal you're shooting for how would you conduct yourself in all of your affairs well do it now and tomorrow and the next day begin now to act the part of the person you most", "start_timestamp": "00:16:43", "end_timestamp": "00:17:14", "start_second": 1003, "end_second": 1034, "url": "https://www.youtube.com/watch?v=6tbHYvH347A&t=1003s", "title": "Change Your Life in 19 Minutes with Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/6tbHYvH347A/maxresdefault.jpg"} {"video_id": "6tbHYvH347A", "text": "want to become and you learn by becoming that person subtly in little ways in the way you dress in the way you talk in the unfailing courtesy you show to every person with whom you come in contact begin to act the part of the person who has already achieved that which you're shooting for the German philosopher Goethe gave us the secret when he said before you can do something you must first be something when you behave like the person you most want to become the things that person would have will tend to come to you it's simply cause and", "start_timestamp": "00:17:14", "end_timestamp": "00:17:43", "start_second": 1034, "end_second": 1063, "url": "https://www.youtube.com/watch?v=6tbHYvH347A&t=1034s", "title": "Change Your Life in 19 Minutes with Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/6tbHYvH347A/maxresdefault.jpg"} {"video_id": "6tbHYvH347A", "text": "effect don't be in too big a hurry it takes longer to build a skyscraper than a chicken coop build slowly steadily and well then when you make it you'll keep it you will stay on top always be suspicious of the so-called get-rich-quick scheme or sudden success never forget that word attitude it's your attitude toward the people with whom you come in contact that will determine their attitudes toward you the person with a great attitude toward life in the world is the person other people call lucky he's not lucky he's just using our old", "start_timestamp": "00:17:43", 
"end_timestamp": "00:18:15", "start_second": 1063, "end_second": 1095, "url": "https://www.youtube.com/watch?v=6tbHYvH347A&t=1063s", "title": "Change Your Life in 19 Minutes with Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/6tbHYvH347A/maxresdefault.jpg"} {"video_id": "6tbHYvH347A", "text": "friend cause and effect his causes are excellent and his effects have to be just as good well that's it three things to remember three things to practice every day if you spent sixteen hours a day seven days a week practicing your golf swing in a relatively short time you'd have a grooved beautiful swing like the pros so practice your new attitude every day every hour practice thinking a few minutes every morning and you'll find yourself thinking all day long and remember the formula our rewards in life will always", "start_timestamp": "00:18:15", "end_timestamp": "00:18:45", "start_second": 1095, "end_second": 1125, "url": "https://www.youtube.com/watch?v=6tbHYvH347A&t=1095s", "title": "Change Your Life in 19 Minutes with Earl Nightingale", "thumbnail": "https://i.ytimg.com/vi/6tbHYvH347A/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "all right so welcome to the third tutorial session this one's on generative adversarial networks so it is actually my great pleasure to introduce dr.
Ian Goodfellow he did a master's and bachelor's at Stanford University finishing there in 2009 at which point he moved to the University of Montreal where he did a PhD with Yoshua Bengio and me and after that he moved to the Google Brain group in that same year and after that he moved just recently earlier this year to OpenAI where he currently is so I think that", "start_timestamp": "00:00:00", "end_timestamp": "00:00:39", "start_second": 0, "end_second": 39, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=0s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "Ian is quite simply one of the most creative and influential researchers in our community today and I think that we have a room full of people ready to hear about a topic GANs generative adversarial networks that he invented two years ago in a bar in Montreal I might add is testament to that so yeah well so without further ado I give you Ian Goodfellow yeah I forgot to mention he's requested that we have questions throughout so if you actually have a question just go to the mic and he'll maybe stop and try to", "start_timestamp": "00:00:39", "end_timestamp": "00:01:23", "start_second": 39, "end_second": 83, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=39s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "answer your question I'll try not to do that again thank you very much for the introduction Aaron thank you everybody for coming today let me tell you a little bit about the format here despite the size of the event I'd still like it to be a little bit interactive and let you feel like you can make the tutorial what you want it to be for yourself I believe a lot that the tutorial should be a chance for you to get some
hands-on experience and to feel like you're building your own mastery of this subject so I've included three exercises", "start_timestamp": "00:01:23", "end_timestamp": "00:01:55", "start_second": 83, "end_second": 115, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=83s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "that will appear throughout the presentation every time there's an exercise you can choose whether you want to work on it or not I'll give a little five-minute break since I know it's hard to pay attention to a presentation for two hours straight and if you'd like to work through the exercise you can work through it otherwise just take a break and chat with your neighbors the basic topic of today's tutorial is really generative modeling in general it's impossible to describe generative adversarial networks without contrasting them", "start_timestamp": "00:01:55", "end_timestamp": "00:02:25", "start_second": 115, "end_second": 145, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=115s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "with some of the other approaches and describing some of the overall goals in this area that we're working on the basic idea of generative modeling is to take a collection of training examples and form some representation of a probability distribution that explains where those training examples came from there are two basic things that you can do with a generative model one is you can take a collection of points and infer a density function that describes the probability distribution that generated them I show that in the upper", "start_timestamp": "00:02:25", "end_timestamp": "00:02:55", "start_second": 145, "end_second": 175, "url":
"https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=145s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "row of this slide where I have taken several points on a one-dimensional number line and fitted a Gaussian density to them that's what we usually think of when we describe generative modeling but there's another way that you can build a generative model which is to take a machine that observes many samples from a distribution and then is able to create more samples from that same distribution generative adversarial networks primarily lie in the second category we're what we want to do is simply generate more samples rather than find", "start_timestamp": "00:02:55", "end_timestamp": "00:03:26", "start_second": 175, "end_second": 206, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=175s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "the density function as a brief outline of the presentation today I'm first going to describe why we should study generative modeling at all it might seem a little bit silly to just make more images when we already have millions of images lying around next I'll describe how generative models work in general and situate generative address all networks among the family of generative models explaining exactly what is different about them and other approaches then I'll describe in detail how generative adversarial networks work", "start_timestamp": "00:03:26", "end_timestamp": "00:03:57", "start_second": 206, "end_second": 237, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=206s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": 
"HGYYEUSm-0Q", "text": "and I'll move on to special tips and tricks that practitioners have developed that are less theoretically motivated but it seemed to work well in practice then I'll describe some research frontiers and I'll conclude by describing the latest state of the art and generative modeling which combines generative adverse health at works with other methods so the first section of this presentation is about why we should study generative models at all most of the time and machine learning we use models that take an input and map that", "start_timestamp": "00:03:57", "end_timestamp": "00:04:26", "start_second": 237, "end_second": 266, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=237s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "input to a single output that's really great for things like looking at an image and saying what kind of object is in that image or looking at a sentence and saying whether that sentence is positive or negative why exactly would you want to learn a distribution over different different training examples well first off high dimensional probability distributions are an important object in many branches of engineering and applied math and this exercises our ability to manipulate them but more concretely there are several", "start_timestamp": "00:04:26", "end_timestamp": "00:04:55", "start_second": 266, "end_second": 295, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=266s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "ways that we could imagine using generative models once we have perfected them one is that we could use the generative model to simulate possible futures for reinforcement learning there are at least two different ways 
that you could use this one is you could train your agent in a simulated environment that's built entirely by the generative model rather than needing to build an environment by hand the advantage of using this simulated environment over the real world is that it could be more easily realized across many machines and", "start_timestamp": "00:04:55", "end_timestamp": "00:05:22", "start_second": 295, "end_second": 322, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=295s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "the mistakes in this environment are not as costly as if you actually make a mistake in the physical world and do real harm similarly an agent that is able to imagine future states of the world using a generative model can plan for the future by simulating many different ideas of plans that it could execute and testing which of them works out best there's a paper on that subject with Chelsea Finn as the first author where we evaluated generative models on the robot pushing data set to start working toward this", "start_timestamp": "00:05:22", "end_timestamp": "00:05:54", "start_second": 322, "end_second": 354, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=322s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "goal of using generative models to plan actions another major use of generative models is that they are able to handle missing data much more effectively than the standard input to output mappings of machine learning models that we usually use generative models are able to fill in missing inputs and they're also able to learn when some of the labels in the data set are missing semi-supervised learning is a particularly useful application of
generative modeling where we may have very few labeled inputs but by leveraging many more unlabeled", "start_timestamp": "00:05:54", "end_timestamp": "00:06:29", "start_second": 354, "end_second": 389, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=354s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "examples we were able to obtain very good error rates on the test set many other tasks also intrinsically require that we use multimodal outputs rather than mapping one input to a single output there are many possible outputs and the model needs to capture all of them and finally there are several tasks that just plain require realistic generation of images or audio waveforms as the actual specification of the task itself and these clearly require generative modeling intrinsically one example of a task that requires", "start_timestamp": "00:06:29", "end_timestamp": "00:07:07", "start_second": 389, "end_second": 427, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=389s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "multimodal outputs is predicting the next frame in a video because there are many different things that can happen in the next time step there are many different frames that can appear in a sequence after the current image because there are so many different things that can happen traditional approaches for predicting the next video frame often become very blurry when they try to represent the distribution over the next frame using a single image many different possible next frame images are averaged together", "start_timestamp": "00:07:07", "end_timestamp": "00:07:36", "start_second": 427, "end_second": 456, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=427s", 
"title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "and result in a blurry mess I'm showing here some images from a paper by William Lauder and his collaborators that was published earlier this year on the Left I show you the ground truth image the image that should be predicted next in a video of a 3d rendering of a rotated head in the middle I show you the image that is predicted when we take a traditional model that is trained using mean squared error because this mean squared error model is predicting many different possible futures and then averaging them together to hedge its", "start_timestamp": "00:07:36", "end_timestamp": "00:08:05", "start_second": 456, "end_second": 485, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=456s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "bets we end up with a blurry image where the eyes are not particularly crisply defined small variations in the amount that the head rotates can place the eyes in very different positions and we average all those different positions together we get a blurry image of the eyes likewise the ears on this person's head have more or less disappeared on the right I show you what happens when we bring in a more generative modeling type approach and in particular when we use an adversarial loss to train the model in the image on the right the", "start_timestamp": "00:08:05", "end_timestamp": "00:08:38", "start_second": 485, "end_second": 518, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=485s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "model has successfully predicted the 
presence of the ear and has successfully drawn a crisp image of the eyes with dark pixels in that area and sharp edges on the features of the eyes another task that intrinsically requires being able to generate good data is super resolution of images in this example we begin with the original image on the left and then not pictured we down sample that image to about half its original resolution we then show several different ways of reconstructing the high resolution version of the image", "start_timestamp": "00:08:38", "end_timestamp": "00:09:15", "start_second": 518, "end_second": 555, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=518s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "if we just use the bicubic interpolation method just a hand designed mathematical formula for what the pixels ought to be based on sampling theory we get a relatively blurry image that's shown second from the left the remaining two images show different ways of using machine learning to actually learn to create high resolution images that look like the data distribution so here the model is actually able to use its knowledge of what high resolution images look like to provide details that have been lost in the down sampling process", "start_timestamp": "00:09:15", "end_timestamp": "00:09:49", "start_second": 555, "end_second": 589, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=555s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "the new high resolution image may not be perfectly accurate and may not perfectly agree with reality but it at least looks like something that is plausible and is visually pleasing there are many different applications that involve interaction between a human being
and an image generation process one of these is a collaboration between Berkeley and Adobe called iGAN where the I stands for interactive the basic idea of iGAN is that it assists a human to create artwork the human artist draws a few squiggly green lines and then a", "start_timestamp": "00:09:49", "end_timestamp": "00:10:27", "start_second": 589, "end_second": 627, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=589s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "generative model is used to search over the space of possible images that resemble what the human has begun to draw even though the human doesn't have much artistic ability they can draw a simple black triangle and it will be turned into a photo-quality mountain this is such a popular area that there have actually been two papers on this subject that came out just in the last few months introspective adversarial networks also offer this ability to provide interactive photo editing and have demonstrated their results mostly in the", "start_timestamp": "00:10:27", "end_timestamp": "00:10:57", "start_second": 627, "end_second": 657, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=627s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "context of editing faces so the same idea still applies that a human can begin editing a photo and the generative model will automatically update the photo to keep it appearing realistic even though the human is making very poorly controlled mouse movements that are not nearly as fine as would be needed to make nice photorealistic details there are also just a long tail of different applications that require generating really good images a recent paper called image to image translation
shows how conditional generative adversarial", "start_timestamp": "00:10:57", "end_timestamp": "00:11:39", "start_second": 657, "end_second": 699, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=657s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "networks can be trained to implement many of these multimodal output distributions where an input can be mapped to many different possible outputs one example is taking sketches and turning them into photos in this case it's very easy to train the model because photos can be converted to sketches just by using an edge extractor and that provides a very large training set for the mapping from sketch to image essentially in this case the generative model learns to invert the edge detection process even though the", "start_timestamp": "00:11:39", "end_timestamp": "00:12:11", "start_second": 699, "end_second": 731, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=699s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "inverse has many possible inputs that correspond to the same output and vice versa the same kind of model can also convert aerial photographs into maps and can take descriptions of scenes in terms of which object category should appear at each pixel and turn them into photorealistic images so these are all several different reasons that we might want to study generative models ranging from the different kinds of mathematical abilities they force us to develop to the many different applications that we can carry out once we have these kinds", "start_timestamp": "00:12:11", "end_timestamp": "00:12:45", "start_second": 731, "end_second": 765, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=731s", "title": "Ian Goodfellow:
Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "of models so next you might wonder how exactly generative models work and in particular how do generative adversarial networks compare in terms of the way that they work to other models it's easiest to compare many different models if I describe all of them as performing maximum likelihood there are in fact other approaches to generative modeling besides maximum likelihood but for the purpose of making a nice crisp comparison of several different models I'm going to pretend that they all do maximum likelihood for the moment and", "start_timestamp": "00:12:45", "end_timestamp": "00:13:16", "start_second": 765, "end_second": 796, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=765s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "the basic idea of maximum likelihood is that we write down a density function that the model describes that I represent with P model of X X is a vector describing the input and P model of X is a distribution controlled by parameters theta that describes exactly where the data concentrates and where it is spread more thinly maximum likelihood consists in measuring the log probability that this density function assigns to all the training data points and adjusting the parameters theta to increase that probability the way that", "start_timestamp": "00:13:16", "end_timestamp": "00:13:52", "start_second": 796, "end_second": 832, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=796s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "different models go about accomplishing this is what makes the models
different from each other so among all the different models that can be described as implementing maximum likelihood we can draw them in a family tree where the first place where this tree forks is we ask whether the model represents the data with an explicit density function or not so when we have an explicit density function it looks exactly like what I showed on the previous slide we actually write down a function P model and we're able", "start_timestamp": "00:13:52", "end_timestamp": "00:14:21", "start_second": 832, "end_second": 861, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=832s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "to evaluate log P model and increase it on the training data within the family of models that have an explicit density we may then ask whether that density function is actually tractable or not when we want to model very complicated distributions like the distribution of natural images or the distribution of speech waveforms it can be challenging to design a parametric function that is able to capture the distribution efficiently and this means that many of the distributions we have studied are not actually tractable", "start_timestamp": "00:14:21", "end_timestamp": "00:14:52", "start_second": 861, "end_second": 892, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=861s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "however with careful design it has been possible to design a few different density functions that actually are tractable that's the family of models like PixelRNN PixelCNN and other fully visible belief networks like NADE and MADE the other major family of distributions that have a tractable density is
the nonlinear ICA family this family of models is based on taking a simple distribution like a Gaussian distribution and then using a nonlinear transformation of samples from that distribution to warp the samples into", "start_timestamp": "00:14:52", "end_timestamp": "00:15:26", "start_second": 892, "end_second": 926, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=892s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "the space that we care about if we're able to measure the determinant of the Jacobian of that transformation we can determine the density in the new space that results from that warping within the family of models that use an explicit density the other set of approaches is those that cannot actually have a tractable density function there are two basic approaches within this family one of these is the model family that approximates an intractable density function by placing a lower bound on the log-likelihood", "start_timestamp": "00:15:26", "end_timestamp": "00:16:02", "start_second": 926, "end_second": 962, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=926s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "and then maximizing that lower bound another approach is to use a Markov chain to make an estimate of the density function or of its gradient both of these families incur some disadvantages from the approximations that they use finally we may give up altogether on having an explicit density function and instead we represent the density function implicitly this is the rightmost branch of the tree one of the main ways that you can implicitly represent a probability distribution is to design a procedure that can draw", "start_timestamp": "00:16:02",
"end_timestamp": "00:16:36", "start_second": 962, "end_second": 996, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=962s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "samples from that probability distribution even if we don't necessarily know the density function if we draw simple as using a Markov chain that gives us one family of distributions of models of which the main example is the generative stochastic Network and then finally if we would like to draw samples directly we have models like generative adversarial networks or deep moment matching networks are both examples of models that can draw samples directly but don't necessarily represent a density function so now let's look at", "start_timestamp": "00:16:36", "end_timestamp": "00:17:08", "start_second": 996, "end_second": 1028, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=996s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "each of these in a little bit more detail and describe exactly what the advantages and disadvantages of them are and why you might want to be in one branch of the tree or another so first fully visible belief networks are the most mathematically straightforward they use the chain rule of probability to decompose the probability distribution over a vector into a product over each of the members of the vector we write down a probability distribution for the distribution over X 1 and then we multiply that by the distribution over X", "start_timestamp": "00:17:08", "end_timestamp": "00:17:38", "start_second": 1028, "end_second": 1058, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1028s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": 
"https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "2 given X 1 and then X 3 given X 1 and X 2 and so on until we finally have a distribution over the final member of the vector given all of the other members of the vector so this goes back to a paper by Brendan Freund 1996 but has had several other advancements in the meantime the current most popular member of this model family is the pixel CNN and I show here some samples of elephants that it generated the primary disadvantage of this approach is that generating a sample is very slow each time we want to sample a different X I", "start_timestamp": "00:17:38", "end_timestamp": "00:18:16", "start_second": 1058, "end_second": 1096, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1058s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "from the vector X we need to run the model again and these n different times that we run the model cannot be parallelized each of these operations of sampling another X is dependent on all of the earlier X I values and that means that there's really no choice but to schedule them one after another regardless of how much bandwidth we have available one other smaller drawback is that the generation process is not guided by a latent code many of the other models that we study have a latent code that we can sample first that", "start_timestamp": "00:18:16", "end_timestamp": "00:18:51", "start_second": 1096, "end_second": 1131, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1096s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "describes the entire vector to be generated and then the rest of the process involves translating that vector into something that lies in 
the data space and that allows us to do things like have embeddings that are useful for semi-supervised learning or generating samples that have particular properties that we're interested in fully visible belief networks don't do this out of the box but there are different extensions of them that can enable these abilities one very recent example of a fully visible belief net is WaveNet and it", "start_timestamp": "00:18:51", "end_timestamp": "00:19:26", "start_second": 1131, "end_second": 1166, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1131s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "shows both some of the advantages and some of the disadvantages of these fully visible belief networks first because the optimization process is very straightforward it's just minimizing a cost function with no approximation to that cost function it's very effective and generates really amazing samples but the disadvantage is that the sample generation is very slow in particular it takes about two minutes to generate one second of audio and that means that barring some major improvement in the way that we're able to run the model", "start_timestamp": "00:19:26", "end_timestamp": "00:19:56", "start_second": 1166, "end_second": 1196, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1166s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "it's not going to be able to be used for interactive dialogue any time soon even though it is able to generate very good lifelike audio waveforms the other major family of explicit tractable density models is the family of models based on the change of variables where we begin with a simple distribution like a Gaussian and we use a non-linear function to
transform that distribution into another space so we transform from a latent space to on this slide the space of natural images the main drawback to this approach is that the", "start_timestamp": "00:19:56", "end_timestamp": "00:20:33", "start_second": 1196, "end_second": 1233, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1196s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "transformation must be carefully designed to be invertible and to have a tractable Jacobian and in fact a tractable determinant of the Jacobian in particular this requirement says that the latent variables must have the same dimensionality as the data space so if we want to generate 3,000 pixels we need to have 3,000 latent variables it makes it harder to design the model to have exactly the capacity that we would like to have another major family of models is those that have intractable density functions but then use tractable", "start_timestamp": "00:20:33", "end_timestamp": "00:21:07", "start_second": 1233, "end_second": 1267, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1233s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "approximations to those density functions currently one of the most popular members of this family is the variational auto-encoder the basic idea is to write down a density function log P of X where the density is intractable because we need to marginalize out a random variable Z Z is a vector of latent variables that provide a hidden code describing the input image and because the process of marginalizing these variables out to recover simply the distribution over X is intractable we're forced to use instead a variational approximation this", "start_timestamp":
"00:21:07", "end_timestamp": "00:21:46", "start_second": 1267, "end_second": 1306, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1267s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "variational approximation introduces a distribution Q over the latent variable Z and to the extent that this distribution Q is closer to the true posterior over the latent variables we're able to make it bound that becomes tighter and tighter and does a better job of lower bounding the true density unfortunately this model is only asymptotically consistent if this Q distribution is perfect otherwise there's a gap between the lower bound and the actual density so even if the optimizer is perfect and even if we have infinite training data", "start_timestamp": "00:21:46", "end_timestamp": "00:22:22", "start_second": 1306, "end_second": 1342, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1306s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "we are not able to recover exactly the distribution that was used to generate the data in practice we observe that variational autoencoders are very good at obtaining high likelihood but they tend to produce lower quality samples and in particular the samples are often relatively blurry another major family of models is the Bolton machine these also have an explicit density function that is not actually tractable in this case the Bolton machine is defined by an energy function and the probability of a particular state is proportional to e to", "start_timestamp": "00:22:22", "end_timestamp": "00:22:59", "start_second": 1342, "end_second": 1379, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1342s", "title": "Ian Goodfellow: Generative Adversarial Networks 
(NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "the value of the energy in order to convert this to an actual probability distribution it is necessary to renormalize by dividing by the sum over all the different states and that sum becomes intractable we're able to approximate it using Monte Carlo methods but those Monte Carlo methods often suffer from problems like failing to mix between different modes and in general Monte Carlo methods especially Markov chain Monte Carlo methods perform very poorly in high dimensional spaces because the Markov chains break down for very large images we don't", "start_timestamp": "00:22:59", "end_timestamp": "00:23:31", "start_second": 1379, "end_second": 1411, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1379s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "really see Boltzmann machines applied to tasks like modeling ImageNet images they perform very well on small data sets like MNIST but they have never really scaled all of these different observations about the other members of the family tree bring us to generative adversarial networks and explain the design requirements that I had in mind when I thought of this model first they use a latent code that describes everything that's generated later they have this property in common with other models like variational autoencoders", "start_timestamp": "00:23:31", "end_timestamp": "00:24:03", "start_second": 1411, "end_second": 1443, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1411s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "and Boltzmann machines but it's an advantage that they have over fully
visible belief networks they're also asymptotically consistent if you're able to find the equilibrium point of the game defining a generative adversarial network you're guaranteed that you've actually recovered the true distribution that generates the data modulo sample complexity issues so if you have infinite training data you do eventually recover the correct distribution there are no Markov chains needed neither to train the generative", "start_timestamp": "00:24:03", "end_timestamp": "00:24:35", "start_second": 1443, "end_second": 1475, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1443s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "adversarial network nor to draw samples from it and I felt like that was an important requirement based on the way that the Markov chains had seemed to hold back restricted Boltzmann machines today we've started to see some models that use Markov chains more successfully and I'll describe those later in the talk but that was one of my primary motivations for designing this particular model family finally a major advantage of generative adversarial networks is that they are often regarded as producing the best samples compared", "start_timestamp": "00:24:35", "end_timestamp": "00:25:03", "start_second": 1475, "end_second": 1503, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1475s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "to other models like variational autoencoders in the past few months we've started to see other models like PixelCNNs competing with them and it's now somewhat difficult to say which is the best because we don't have a good way of quantifying exactly how good a set of samples are that concludes my
description of the different families of generative models and how they relate to each other and how generative adversarial networks are situated in this family of generative models so I'll move on to describing exactly how generative", "start_timestamp": "00:25:03", "end_timestamp": "00:25:37", "start_second": 1503, "end_second": 1537, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1503s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "adversarial networks actually work the basic framework is that we have two different models and they're adversaries of each other in the sense of game theory there's a game that has well-defined payoff functions and each of the two players tries to determine how they can get the most payoff possible within this game there are two different networks one of them is called the generator and it is the primary model that we're interested in learning the generator is the model that actually generates samples that are", "start_timestamp": "00:25:37", "end_timestamp": "00:26:12", "start_second": 1537, "end_second": 1572, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1537s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "intended to resemble those that were in the training distribution the other model is the discriminator the discriminator is not really necessary after we've finished the training process at least not in the original development of generative adversarial networks there are some ways of getting some extra use out of the discriminator but in the basic setup we can think of the discriminator as a tool that we use during training that can be discarded as soon as training is over the role of the discriminator is to inspect a sample and",
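The two-network setup the talk describes here (a generator that maps latent noise to samples, and a discriminator that scores a sample's probability of being real) can be sketched in a few lines. The dimensions and the single affine layers below are illustrative stand-ins for the deep networks the talk mentions, not anything specified in the tutorial:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration only.
Z_DIM, X_DIM = 4, 8

# Generator G: a differentiable function mapping latent z to a sample x.
# A single affine layer stands in for the deep network described in the talk.
G_W = rng.standard_normal((X_DIM, Z_DIM))
def generator(z):
    return G_W @ z

# Discriminator D: maps a sample x to a probability that x is real.
D_w = rng.standard_normal(X_DIM)
def discriminator(x):
    return 1.0 / (1.0 + np.exp(-(D_w @ x)))  # sigmoid, so output is in (0, 1)

z = rng.standard_normal(Z_DIM)   # z sampled from the prior (unstructured noise)
x_fake = generator(z)            # a generated sample
p_real = discriminator(x_fake)   # D's belief that the sample came from the data
assert 0.0 < p_real < 1.0
```

The discriminator's output being a probability in (0, 1) is what lets the training targets described next (near one for real inputs, near zero for fakes) be expressed as a binary classification problem.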
"start_timestamp": "00:26:12", "end_timestamp": "00:26:43", "start_second": 1572, "end_second": 1603, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1572s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "say whether that sample looks real or fake so the training process consists of sampling images or other kinds of data from the training set and then running the discriminator on those inputs the discriminator is any kind of differentiable function that has parameters that we can learn with gradient descent so we usually represent it as a deep neural network but in principle it could be other kinds of models when the discriminator is applied to images that come from the training set its goal is to output a value that", "start_timestamp": "00:26:43", "end_timestamp": "00:27:17", "start_second": 1603, "end_second": 1637, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1603s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "is near one representing a high probability that the input was real rather than fake but half the time we also apply the discriminator to examples that are in fact fake in this case we begin by sampling the latent vector Z in this case we sample Z from the prior distribution over latent variables so Z is essentially a vector of unstructured noise it's a source of randomness that allows the generator to output a wide variety of different vectors we then apply the generator to the input vector Z the generator function is a differentiable function", "start_timestamp": "00:27:17", "end_timestamp": "00:27:56", "start_second": 1637, "end_second": 1676, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1637s", "title": "Ian Goodfellow: Generative Adversarial 
Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "that has parameters that can be learned by gradient descent similar to the discriminator function and we usually represent the generator as being a deep neural network though once again it could be any other kind of model that satisfies those differentiability properties after we have applied G to Z we obtain a sample from the model and ideally this will resemble actual samples from the data set though early in learning it will not after we've obtained that sample we apply the discriminator function D again and this", "start_timestamp": "00:27:56", "end_timestamp": "00:28:31", "start_second": 1676, "end_second": 1711, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1676s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "time the goal of the discriminator D is to output a value D of G of Z that is near zero I'm sorry I realized there's a mistake in the slide actually it's backwards the discriminator wants to make the value in this case be near zero and the generator would like to make it be near one so the discriminator would like to reject these samples as being fake well the generator would like to fool the discriminator into thinking that they're real you can think of the generator and the discriminator as being a little bit like counterfeit", "start_timestamp": "00:28:31", "end_timestamp": "00:29:03", "start_second": 1711, "end_second": 1743, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1711s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "counterfeiters and police the counterfeiters are trying to make money that looks realistic and the 
police are trying to correctly identify counterfeit money and reject it without accidentally rejecting real money as the two adversaries are forced to compete against each other the counterfeiters must become better and better if they want to fool the police and eventually they're forced to make counterfeit money that is identical to real money similarly in this framework the generator must eventually learn to make", "start_timestamp": "00:29:03", "end_timestamp": "00:29:36", "start_second": 1743, "end_second": 1776, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1743s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "samples that come from the distribution that generated the data so let's look at the generator network in a little bit more detail we can think of the generator network as being a very simple graphical model shown on the left there's a vector of latent variables Z and there's a vector of observed variables X and depending on the model architecture we usually have every member of X depend on every member of Z so I've drawn this as just a simple vector-valued model where we see one edge you could", "start_timestamp": "00:29:36", "end_timestamp": "00:30:12", "start_second": 1776, "end_second": 1812, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1776s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "also imagine expanding it into a graph of scalar variables where it would be a bipartite directed graph the main reason that generative adversarial networks are relatively simple to train is that we never actually try to infer the probability distribution over Z given X instead we sample values of Z from the prior and
then we sample values of X from P of x given Z because that's ancestral sampling in a directed graphical model it's very efficient in particular we accomplish this ancestral sampling by applying the", "start_timestamp": "00:30:12", "end_timestamp": "00:30:48", "start_second": 1812, "end_second": 1848, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1812s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "function G to the input variable Z one of the very nice things about the generative adversarial networks framework is that there are not really any requirements other than differentiability on G unlike nonlinear ICA there is no requirement that Z have the same dimension as X for example and Boltzmann machines require energy functions that are tractable and have different tractable conditional distributions we don't actually need to be careful to design models that have multiple different conditionals that are", "start_timestamp": "00:30:48", "end_timestamp": "00:31:25", "start_second": 1848, "end_second": 1885, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1848s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "all tractable in this case we only really need to make one conditional distribution tractable there are a few properties that we'd like to be able to guarantee that impose a few extra requirements on G in particular if we want to be sure that we're able to recover the training distribution we need to make sure that X has a higher dimension than Z or at least an equal dimension this is just to make sure that we aren't forced to represent only a low dimensional manifold within X space an interesting thing is that it's actually", "start_timestamp":
"00:31:25", "end_timestamp": "00:31:57", "start_second": 1885, "end_second": 1917, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1885s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "possible to train the generator network even if we don't provide support across all of X space if we make Z be lower dimensional in X then we obtain a low dimensional manifold that assigns no probability whatsoever to most space most points in X space but we're still able to train the model using the discriminator as a guide that's kind of an unusual quirk that sets this framework apart from the methods that are based on maximizing a density function those would break if we evaluated the logarithm of zero density", "start_timestamp": "00:31:57", "end_timestamp": "00:32:32", "start_second": 1917, "end_second": 1952, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1917s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "so the training procedure is to choose an optimization algorithm you can pick your favorite one I usually like to use atom these days and then repeatedly sample to different many batches of data one of these is a mini batch of training examples that you draw from the data set and the other mini batch is a set of input values Z that we sample from the prior and then feed to the generator we then run gradient descent on both of the players costs simultaneously in one optional variant we can also run the update for the discriminator more often", "start_timestamp": "00:32:32", "end_timestamp": "00:33:10", "start_second": 1952, "end_second": 1990, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1952s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", 
"thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "than we run the update for the generator I personally usually just use one update for each player each player has its own cost and the choice of the cost determines exactly how the training algorithm proceeds there are many different ways of specifying the cost the simplest one is to use a minimax game where we have a cost function J superscript D defining the cost for the generator for the discriminator and then the cost for the generator is just the negative of the cost for the discriminator so you can think of this", "start_timestamp": "00:33:10", "end_timestamp": "00:33:47", "start_second": 1990, "end_second": 2027, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1990s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "as having a single value that the discriminator is trying to maximize and the generator is trying to minimize so what exactly is this value that the two players are fighting over it's simply the cross-entropy between the discriminators predictions and the correct labels and the binary classification task of discriminating real data from fake data so we have one term where we're feeding data and we're with a discriminator is trying to maximize the log probability of assigning one to the data and then we have another term where the", "start_timestamp": "00:33:47", "end_timestamp": "00:34:20", "start_second": 2027, "end_second": 2060, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2027s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "discriminator is aiming to maximize the log probability of assigning 0 to the fake samples when we look for an equilibrium 
point of a game it's different than minimizing a function we're actually looking for a saddle point of J superscript D and if we're able to successfully find this saddle point the whole procedure resembles minimizing the Jensen-Shannon divergence between the data and the distribution represented by the model so as our first exercise which will be accompanied by a little five-minute break we're going to study", "start_timestamp": "00:34:20", "end_timestamp": "00:34:56", "start_second": 2060, "end_second": 2096, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2060s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "what the discriminator does when the discriminator plays this game at the top of the slide I've shown the cost function that the discriminator is going to minimize and the exercise is to determine what the solution to D of X is written in terms of the data distribution and the generator distribution you'll also find that you need to make a few assumptions in order to make a clean solution to this exercise so I'll give you about five minutes to work on this exercise or if you don't want to do the exercise feel", "start_timestamp": "00:34:56", "end_timestamp": "00:35:28", "start_second": 2096, "end_second": 2128, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2096s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "free to talk with your neighbors or just take a break for a minute so that you don't need to remain attentive for too many consecutive minutes I'm also happy to take questions from the mic during this time if anyone's interested yeah over there yeah my question is what prevents the generator from always generating the same image you see what I mean it could just
lazily learn to always generate one single realistic image and be fine with this yeah that's a good question and it's an important part of ongoing research in generative adversarial", "start_timestamp": "00:35:28", "end_timestamp": "00:36:10", "start_second": 2128, "end_second": 2170, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2128s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "networks essentially if we're able to correctly play this minimax game then the generator is not able to consistently fool the discriminator by always generating the same sample the discriminator would learn to recognize that individual sample and reject it as being fake in practice it's difficult to find a true equilibrium point of this game and one of the failure modes is actually to generate samples that have too little diversity to them and because of that we're having to study ways to improve our ability to find the", "start_timestamp": "00:36:10", "end_timestamp": "00:36:45", "start_second": 2170, "end_second": 2205, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2170s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "equilibrium okay thanks yeah over here okay so I'm on your left actually yeah here I'm raising my hand okay so I'm actually learning a bit of GANs as well and variational autoencoders and I see certain resemblances in terms of sampling in this Z space in what cases should I when generating samples use a GAN and in what cases should I use variational autoencoders thanks if your goal is to obtain a high likelihood then you would be better off using a variational autoencoder if your goal is to obtain realistic samples then you", "start_timestamp": "00:36:45", "end_timestamp":
"00:37:36", "start_second": 2205, "end_second": 2256, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2205s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "would usually be better off using a generative adversarial network rather than a variational autoencoder you can kind of see this in the cost function the generative adversity all Network is designed to fool the discriminator into thinking that it's samples are realistic and the variational autoencoder is designed to maximize the likelihood I how to sample from the data is just uniform distribution or that's also a really good question and I think one that is a topic of ongoing research the naive way of implementing the algorithm and the", "start_timestamp": "00:37:36", "end_timestamp": "00:38:11", "start_second": 2256, "end_second": 2291, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2256s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "one that everyone does so far is to sample uniformly from the training data and also to sample uniformly from the z space but you could imagine the importance sampling could give us big improvements in particular most of the points that we train the generator on are wasted because we're usually going to sample from points that are doing pretty well and what we'd really like to do is find points that are doing very badly or maybe points that lie on the boundary between two modes in order to adjust those boundaries so you could imagine", "start_timestamp": "00:38:11", "end_timestamp": "00:38:44", "start_second": 2291, "end_second": 2324, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2291s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": 
"https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "that as a procedure for doing important sampling where we visit latent encodes the yield more important aspects of the learning process and then reweighed those samples to correct for the bias on the sampling procedure could actually lead to an improvement so I just have one quick question I'm surprised well extremely impressed by this this beautiful algorithm but one thing that I'm rather confused by is why don't strange artifacts appear on the representation for the weight created by the generator and once is created by the", "start_timestamp": "00:38:44", "end_timestamp": "00:39:31", "start_second": 2324, "end_second": 2371, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2324s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "generator it has some and it's any sort of non visually relevant artifact whether it is a non smoothness and then that would just mean the discriminator is set up to just win does that make sense yeah that makes sense so there are unusual artifacts that appear in samples created by the generator and in a lot of cases we're fortunate that those artifacts are somewhat compatible with the blind spots and the discriminator one example is if we use a convolutional generator the generator is somewhat inclined to producing unusual tile", "start_timestamp": "00:39:31", "end_timestamp": "00:40:13", "start_second": 2371, "end_second": 2413, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2371s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "patterns there's a really good blog post by a ghostess Adina vase londoom Alain and Chris Ola I'm sorry if I forgot into the authors 
in that list about the checkerboard patterns that appear when you use D convolution with large stride in the generator though the good news is that the discriminator is also using convolution presumably with similar stride and so it might actually become blind to the same grid patterns that the generator creates the best answer exactly right but more generally there are a lot of artifacts that come out of", "start_timestamp": "00:40:13", "end_timestamp": "00:40:48", "start_second": 2413, "end_second": 2448, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2413s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "the generator that don't really seem all that relevant to the sample creation process and the discriminator spends a lot of its time learning to reject patterns that ideally it would just you know not ever have to encounter in the first place like on M NIST is a very simple data set with just handwritten digits on a background if you look at the weights that the discriminator learns in the first layer they often look a little bit like 40 a basis so early on in learning they're realizing that the generator often makes a lot of", "start_timestamp": "00:40:48", "end_timestamp": "00:41:19", "start_second": 2448, "end_second": 2479, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2448s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "high-frequency stuff and the data doesn't really have that frequency and so the discriminator is looking at this whole spectrum of different frequencies in order to figure out if there's too much of different bands president or not really it seems like it would be much better for the generator to go straight to making pen strokes and the 
discriminator go straight to paying attention to pen strokes instead of spending all of its time policing exactly how sharp the transitions between neighboring pixels are so if", "start_timestamp": "00:41:19", "end_timestamp": "00:41:54", "start_second": 2479, "end_second": 2514, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2479s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "just wanna understand this objective function a little bit better if you fix the generator so that it just does negative sampling or rather let me ask what is the relation between this objective function and a negative sampling approach the kind that is used with like word2vec oh negative sampling for word2vec I haven't really thought about that one connection to negative sampling is when training Boltzmann machines we generate samples from the model in order to estimate the gradient on the log partition function", "start_timestamp": "00:41:54", "end_timestamp": "00:42:27", "start_second": 2514, "end_second": 2547, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2514s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "and we call that the negative phase you can think of the generative adversarial network training procedure as being almost entirely negative phase the generator only really learns from the samples it makes and that makes it a little bit like when you carve a statue out of marble you only ever remove things rather than adding things it's kind of a unique peculiarity of this particular training process so in the interest of time I think I should move on to the solution to this exercise but I'll continue taking more questions", "start_timestamp": "00:42:27",
"end_timestamp": "00:42:55", "start_second": 2547, "end_second": 2575, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2547s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "probably most of them at the next exercise break okay yeah okay so yeah I'll take your question next when I come to the exercise to you so the solution to exercise 1 and as you recall if you were paying attention to the questions rather than to the exercise we're looking for the optimal discriminative function D of X in terms of P data and P generator to solve this it's best to assume that both P data and P generator are nonzero everywhere if we don't make that assumption then there's this issue that some points in the discriminator", "start_timestamp": "00:42:55", "end_timestamp": "00:43:33", "start_second": 2575, "end_second": 2613, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2575s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "the discriminators input space might never be sampled there it's training process and then those particular inputs would not really have a defined behavior because they're just never trained but if you make those relatively weak assumptions we can then just solve for the functional derivatives where we regard D of X as being almost like this infinite dimensional vector where every x value index is a different member of the vector and we're just solving for a big vector like we're used to doing with calculus so in this case we take the", "start_timestamp": "00:43:33", "end_timestamp": "00:44:08", "start_second": 2613, "end_second": 2648, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2613s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", 
"thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "derivative with respect to a particular D of X output value of the cost function and we set it equal to zero it's pretty straightforward to take those derivatives and then from there it's straightforward algebra to solve this stationarity condition and what we get is that the optimal discrimination function is the ratio between P data of X and the sum of P data of X and P model of X so this is the main mathematical technique that sets generative adverse own networks apart from the other models that I described in the family tree", "start_timestamp": "00:44:08", "end_timestamp": "00:44:43", "start_second": 2648, "end_second": 2683, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2648s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "some of them use techniques like lower bounds some of them use techniques like Markov chains generative adversarial networks use supervised learning to estimate a ratio of densities and essentially this is the the property that makes them really unique supervised learning is able to in in the ideal limit of infinite data and perfect optimization it's able to recover exactly the function that we want and the way that it breaks down is different from the other approximations it can suffer from under fitting if the", "start_timestamp": "00:44:43", "end_timestamp": "00:45:17", "start_second": 2683, "end_second": 2717, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2683s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "optimizer is not perfect and it can suffer from overfitting if the training data is limited and it doesn't learn to generalize very 
well from that training data so far I've described everything in terms of a minimax game where there's a single value function and one player tries to maximize it and the other player tries to minimize it we can actually make the game a little bit more complicated where each player has its own independently parameterised cost so in all the different versions of the game we pretty much always want the", "start_timestamp": "00:45:17", "end_timestamp": "00:45:48", "start_second": 2717, "end_second": 2748, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2717s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "discriminator to be using the same version of the game where it's just trying to be a good binary classifier but there are many different things we might consider doing with the generator in particular one really big problem with the minimax game is that when the discriminator becomes too smart the gradient for the generator goes away one of the really nice properties of the cross entropy loss function that we use to train sigmoid classifiers and softmax classifiers is that whenever the classifier is making a mistake whenever", "start_timestamp": "00:45:48", "end_timestamp": "00:46:19", "start_second": 2748, "end_second": 2779, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2748s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "it's choosing the wrong class the gradient is guaranteed to be nonzero the gradient of the cross entropy with respect to the logits approaches 1 as the probability assigned to the correct class approaches zero so we can never get in a situation where the classifier is unable to learn due to a lack of gradient either it has gradient and it's making a 
mistake or it lacks gradient and it's perfect so the discriminator has this particular property but unfortunately if we negate the discriminator's cost then the generator has the opposite of that", "start_timestamp": "00:46:19", "end_timestamp": "00:46:55", "start_second": 2779, "end_second": 2815, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2779s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "property whenever the generator is failing to fool the discriminator completely then it has no gradient because the output of the discriminator has saturated what we can do is instead of flipping the sign of the discriminator's cost we can flip the order of the arguments to the cross-entropy function specifically this means that rather than trying to minimize the log probability of the correct answer we have the generator try to maximize the log probability of the wrong answer both of these cost functions are monotonically decreasing", "start_timestamp": "00:46:55", "end_timestamp": "00:47:28", "start_second": 2815, "end_second": 2848, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2815s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "in the same direction but they're steep in different places at this point it's no longer possible to describe the equilibrium with just a single loss function and the motivations for this particular cost are far more heuristic we don't have a good theoretical argument that this places the Nash equilibrium in the right place but in practice we see that this cost function behaves similar to the minimax cost function early in learning and then later in learning when the minimax function would start to have trouble", "start_timestamp": "00:47:28", 
"end_timestamp": "00:47:57", "start_second": 2848, "end_second": 2877, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2848s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "with saturation and a lack of gradient this cost function continues to learn rapidly so this is the default cost function usually advocate that most people use even though it's not quite as theoretically appealing generative address trail networks did not really scale to very large inputs when my co-authors and I first developed them and eventually they were scaled to large images using a hand design process called lap Gans that used a laplacian pyramid to separate the image into multiple scales and generate each scale", "start_timestamp": "00:47:57", "end_timestamp": "00:48:31", "start_second": 2877, "end_second": 2911, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2877s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "independently but more recently the way that they are usually used is following an architecture that was introduced in a collaboration between a start-up called in deco and face book AI research this architecture is called the DC Gann architecture for deep convolutional generative adversarial networks even in the original paper generative Ebersole networks were deep and convolutional but this paper placed greater emphasis on having multiple convolutional layers and using techniques that were invented after the original development of", "start_timestamp": "00:48:31", "end_timestamp": "00:49:03", "start_second": 2911, "end_second": 2943, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2911s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": 
"https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "generative error so networks such as batch normalization so in particular when we generate images we might wonder exactly what we should do to increase the resolution as we move through a convolutional network the answer from the DC gun architecture is just to use a stride of greater than one when using the deconvolution operator another important contribution of the DC gun paper is to show that it's important to use batch normalization that every layer except for the last layer of the generator network that makes the", "start_timestamp": "00:49:03", "end_timestamp": "00:49:36", "start_second": 2943, "end_second": 2976, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2943s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "learning process much more stable and since then guns have been applied to a wide range of large image generation tasks DC guns showed that you can generate really good images of bedrooms in particular many different data sets that have a small number of output modes work really well with DC gun style architectures so here we can see that we're getting realistic beds blankets windows cabinets and so on and that we have a quite a variety of different kinds of lighting and all the different sources of lighting are rendered in a", "start_timestamp": "00:49:36", "end_timestamp": "00:50:12", "start_second": 2976, "end_second": 3012, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2976s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "very nice realistic way another domain where generative adversity or networks work well because the number of outputs is restricted is the 
domain of images of faces DCGANs were shown to work very well on faces and in particular they showed that the latent code is actually very useful for representing faces many of you have probably seen the result that language models that have word embeddings can have properties where the word embedding for Queen if you subtract the word embedding for female and add the word", "start_timestamp": "00:50:12", "end_timestamp": "00:50:45", "start_second": 3012, "end_second": 3045, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=3012s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "embedding for male gives us a word embedding very close to the word embedding for King so you can actually do algebra in latent space and have it correspond to semantics the authors of the DCGAN paper showed that generative adversarial networks provide a similar property for images in particular if we take the word or the image embedding for images of a man with glasses and subtract the embedding for images of a man and add the embedding for images of a woman we obtain the embedding that corresponds to images of women with", "start_timestamp": "00:50:45", "end_timestamp": "00:51:19", "start_second": 3045, "end_second": 3079, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=3045s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "glasses all of the images in this slide were generated by the network none of them are training data they all come from decoding different embeddings so this shows that we're able to do algebra in latent space and have that algebra correspond to semantic properties just like with language models but what's even more exciting than language models is that we're actually able to 
decode this latent variable to a rich high dimensional image where all the different thousands of pixels are actually arranged correctly in relation", "start_timestamp": "00:51:19", "end_timestamp": "00:51:51", "start_second": 3079, "end_second": 3111, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=3079s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "to each other in the case of language models we only had to find an embedding that was really close to the embedding for the word King but we didn't have to actually map from the embedding to some kind of complicated data space so here we've shown we can go one step further and actually accomplish that mapping task when we try to understand exactly how generative adversarial networks work one thing that's important to think about is whether the particular choice of divergence that we minimize is really important and in the past I and several", "start_timestamp": "00:51:51", "end_timestamp": "00:52:23", "start_second": 3111, "end_second": 3143, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=3111s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "other people have argued that generative adversarial networks made good samples and obtained bad likelihood because of the divergence that we chose I no longer believe that and I'm going to give you an argument now that the divergence doesn't matter but I will start by explaining to you why you might think that it should so if we maximize the likelihood of the data that's equivalent to minimizing the KL divergence between the data distribution and the model distribution and that's shown on the", "start_timestamp": "00:52:23", "end_timestamp": "00:52:54", 
"start_second": 3143, "end_second": 3174, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=3143s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "left in this panel here the data distribution is represented by the blue curves where we have a bimodal data distribution for this example the model distribution is represented by the dashed green curve and in this particular demonstration I'm assuming that the model is a Gaussian with a single mode so it's not able to represent the data distribution correctly so this is what the maximum likelihood solution to this problem would give us the Gaussian ends of averaging out the two different modes the KL divergence is not actually", "start_timestamp": "00:52:54", "end_timestamp": "00:53:26", "start_second": 3174, "end_second": 3206, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=3174s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "symmetric maximum likelihood corresponds to minimizing the KL divergence with the data on the left and the model on the right but we can actually flip that around we can minimize the KL divergence with the model on the left and the data on the right and when we do that we get a different result where instead of averaging out the two modes the model as shown in the panel on the right here we'll choose one of the modes we can think of KL data come a model as saying that the model should put probability mass everywhere that the data puts", "start_timestamp": "00:53:26", "end_timestamp": "00:53:57", "start_second": 3206, "end_second": 3237, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=3206s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": 
"https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "probability mass and we can think of KL data KL model comma data as saying that the model should not put probability mass anywhere that the data does not put probability mass in the left it's really important to have some mass on both peaks on the right it's really important to never generate a sample in the valley between the two peaks because none of the data ever actually occurs there both of these are perfectly legitimate approaches to generative modeling and you can choose one or the other based on whichever task you are using and what", "start_timestamp": "00:53:57", "end_timestamp": "00:54:28", "start_second": 3237, "end_second": 3268, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=3237s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "the design requirements for that task are the loss that we traditionally use with generative adversarial networks mostly because it was the thing that popped into my head in a bar as as Erin mentioned is pretty similar to the the divergence on the right but since that night in the I've realized that it's possible to use other divergences and and several papers by other people have been published on how to use other divergences and I now no longer think that the choice of divergence explains why we get really", "start_timestamp": "00:54:28", "end_timestamp": "00:54:56", "start_second": 3268, "end_second": 3296, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=3268s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "good samples and don't get as good of likelihood so here's how you can actually get maximum likelihood out of a generative adversarial 
network where you approximately minimize the KL divergence between data and model rather than model and data for the discriminator network you use the same cost function as before which is just the binary classification task and for the generator network we now sample from the generator and then we penalize it according to e to the value of the logits of the discriminator and if the discriminator is optimal this", "start_timestamp": "00:54:56", "end_timestamp": "00:55:32", "start_second": 3296, "end_second": 3332, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=3296s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "has the same expected gradient with respect to the parameters as the KL divergence between the data and the model does so it's approximating maximum likelihood by using supervised learning to estimate a ratio that would be intractable if we were to evaluate the maximum likelihood criterion directly in general we can think of these different costs as being like reward functions we can kind of think of the generator net as being a reinforcement learning agent where it takes actions and we reward its actions depending on the way that the", "start_timestamp": "00:55:32", "end_timestamp": "00:56:07", "start_second": 3332, "end_second": 3367, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=3332s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "environment responds the thing that makes this particular reinforcement learning setup a little unusual is that part of the environment is another learning agent in particular the discriminator all these different costs have one thing in common you can compute the cost using only the output of the discriminator and then for every sample you 
just give a reward that depends on exactly what the discriminator did so if we look at a graph of the cost that the generator incurs as a function of the output of the discriminator we can", "start_timestamp": "00:56:07", "end_timestamp": "00:56:41", "start_second": 3367, "end_second": 3401, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=3367s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "see that all these different costs decrease as we move from left to right essentially that's saying that if you make the discriminator think that the samples that the generator created are real then you incur a very low cost we can see the way that they saturate in places and also we can see how sampling along these curves would give us very different variance in the estimate of the gradient the green curve that lies the highest is the heuristically motivated cost which is designed not to", "start_timestamp": "00:56:41", "end_timestamp": "00:57:13", "start_second": 3401, "end_second": 3433, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=3401s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "saturate when the generator is making a mistake so if you look at the very extreme left where the discriminator is outputting zeros where the discriminator is successfully rejecting the generator samples this cost function has a high derivative value so the model is able to learn rapidly early on when its samples do not yet look realistic then if we move downward in the series of plots the blue curve the minimax curve is the one that we originally used to design this model framework and the one that's the easiest to analyze using the minimax", 
"start_timestamp": "00:57:13", "end_timestamp": "00:57:48", "start_second": 3433, "end_second": 3468, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=3433s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "theorem this curve is relatively flat most of the way across and starts to curve down gently as the samples become more realistic and then finally the maximum likelihood cost which has the negation of an exponential function in it is very flat on the left side but then shoots off exponentially downward as we get very far to the right so we can see that we would actually incur very high variance in the estimate of the gradient if we were to use that particular function because almost all the gradient comes from a single member", "start_timestamp": "00:57:48", "end_timestamp": "00:58:23", "start_second": 3468, "end_second": 3503, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=3468s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "of the mini batch whichever one is the most realistic because of that we don't usually use the maximum likelihood cost with generative adversarial networks we use one of the other costs that has nicer saturation properties and nicer variance properties but it is a perfectly legitimate cost and when we go ahead and we use that cost to Train there's actually there's a few other ways of approximating the KL divergence but none of the different ways of approximating the KL divergence give us blurry samples like we get with a V ie", "start_timestamp": "00:58:23", "end_timestamp": "00:58:54", "start_second": 3503, "end_second": 3534, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=3503s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 
2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "so that we used to think that the VA was using the KL divergence and got blurry samples and gowns were using the reverse KL divergence and got sharp samples but now that we're able to do both divergences with gans we see that we get sharp samples both ways my interpretation this is that it is the approximation strategy of using supervised learning to estimate the density ratio that leads to the samples being very sharp and that something about the variational bound is what leads to the samples for the VA e being blurry there's one other", "start_timestamp": "00:58:54", "end_timestamp": "00:59:32", "start_second": 3534, "end_second": 3572, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=3534s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "possibility which is that the model architectures we use for generative adversarial Nets are usually a little bit different VA use usually are conditionally Gaussian and usually have an isotropic Gaussian at the output layer generative adversarial networks don't need to have any particular conditional distribution that you can evaluate so the last layer is often just a linear layer which would look kind of like a Gaussian distribution with a complete covariance matrix instead of a restricted covariance matrix so it's", "start_timestamp": "00:59:32", "end_timestamp": "01:00:04", "start_second": 3572, "end_second": 3604, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=3572s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "possible that that complete covariance matrix at the last layer remove some of the blurriness but we no 
longer think that the choice of the divergence is really important to understanding how generative adversarial networks behave earlier I showed you a family tree of different generative models and I said we're going to pretend that all of them do maximum likelihood and clearly they don't actually do that now that we've seen how generative adversarial networks work in a little bit more detail we can actually start to describe exactly how", "start_timestamp": "01:00:04", "end_timestamp": "01:00:38", "start_second": 3604, "end_second": 3638, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=3604s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "it is that they compare to some of the more similar generative models in particular noise contrastive estimation is a procedure for fitting many different generative models including Boltzmann machines and other different types of generator nets and noise contrastive estimation uses exactly the same value function that we use as the value function for the minimax game for generative adversarial nets so a lot of people look at this and think maybe these two methods are almost the same thing and I myself wondered about", "start_timestamp": "01:00:38", "end_timestamp": "01:01:09", "start_second": 3638, "end_second": 3669, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=3638s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "that for a little while so it turns out that actually this same value function also appears for maximum likelihood if you look at it the right way so what this value function consists of is on the Left we have a term where we sample values from the data and we measure the log discriminator function on the right we sample 
values from a generator function and we measure the log of one minus the discriminator function it turns out that the differences between noise contrastive estimation maximum likelihood estimation", "start_timestamp": "01:01:09", "end_timestamp": "01:01:39", "start_second": 3669, "end_second": 3699, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=3669s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "and generative adversarial networks all revolve around exactly what the generator and the discriminator and the learning process are so for generative adversarial networks the discriminator is just a neural network that we parameterize directly the function D of X is just directly implemented for both noise contrastive estimation and maximum likelihood estimation the discriminator is a ratio between the model that we're learning and the sum of the model density and the generator density so that probably got a", "start_timestamp": "01:01:39", "end_timestamp": "01:02:12", "start_second": 3699, "end_second": 3732, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=3699s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "little bit confusing right there what is this model that we are learning and how is it different from the generator well it turns out that for noise contrastive estimation the generator is used as a source of reference noise and the model learns to tell samples apart from noise by assigning higher density to the data so noise contrastive estimation might consist of generating samples from a Gaussian distribution and then training this discriminator function to tell whether a given input comes from the Gaussian distribution or it comes from", "start_timestamp": "01:02:12", 
"end_timestamp": "01:02:44", "start_second": 3732, "end_second": 3764, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=3732s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "the data distribution and it implements that discriminator function by actually implementing an explicit tractable density over the data and by accessing an explicit tractable density over the generator that creates the noise and I ask a question yeah go ahead because you have this nice slide there my name is Yong Schmidt Hoover from this with a high lab and I was wondering whether you can relate these very interesting GA and soy games to the other adversarial network that we had back sent in 1992 where you had two types of network", "start_timestamp": "01:02:44", "end_timestamp": "01:03:25", "start_second": 3764, "end_second": 3805, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=3764s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "fighting each other also playing a minimax game where one of them I to come up with try to minimize an error function that the others were maximizing and it was not exactly like that but it was very similar in many ways because there you had an image coming in and then you had these cold layers like in an auditing color and then you try to find a representation initially random representation of the image but then for each of these units in the cold layer there was a predictor which try to predict this code unit from", "start_timestamp": "01:03:25", "end_timestamp": "01:03:57", "start_second": 3805, "end_second": 3837, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=3805s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": 
"https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "the other guys in them in the code layer and then the predictors try to minimize the error just like the feature detectors the code units try to maximize it trying to become as unpredictable as possible now this is closely related to coming up with this reference noise vector that you just mentioned because of course then in the in the code layer you basically get in the ideal case a factorial code where each of these units is statistically independent of each other of the other units but still tells you a lot about the image so you still", "start_timestamp": "01:03:57", "end_timestamp": "01:04:33", "start_second": 3837, "end_second": 3873, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=3837s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "can attach an ordering coder to that and then get a generative distribution you just wake up the code layer units and you randomly activate them according to their probabilities they are factual code which means that you get in images that are just reflecting the original distribution of the images so in many ways very similar but in other ways different and I was wondering whether you have comments on the similarities and differences of these old adversarial networks yeah so Jurgen has asked me if I have any comment on the similarities", "start_timestamp": "01:04:33", "end_timestamp": "01:05:09", "start_second": 3873, "end_second": 3909, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=3873s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "and differences here but he's in fact aware of my opinion because we've correspond about this by email before I 
mean I don't exactly appreciate the public confrontation if you want to form your own opinion about whether predictability minimization is the same thing as generative adversarial networks you're welcome to read the paper one of the nips reviewers requested that we add a description of predictability minimization to the generative adversarial networks paper and we did add our", "start_timestamp": "01:05:09", "end_timestamp": "01:05:45", "start_second": 3909, "end_second": 3945, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=3909s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "comments on the extent to which we think that they are similar which is that they're not particularly similar to the nips final copy just for completeness however so I reacted to exactly these changes and then you did not comment I'm not sure that you commented or reacted to these confrontations yeah so there are comments which you did not address and I still think I would prefer to use my tutorial to teach about generative adversarial networks if people want to read about predictability minimization", "start_timestamp": "01:05:45", "end_timestamp": "01:06:18", "start_second": 3945, "end_second": 3978, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=3945s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "please do sir just to make sure what you will have in the related work section the comments have been added to the nips paper so returning to the comparison to noise contrastive estimation which is far more similar to generative adversarial networks than predictability minimization in that they have exactly the same value function we find that for noise 
contrastive estimation the learning of the final generative model occurs in the discriminator and for the generative adversarial network the learning occurs", "start_timestamp": "01:06:18", "end_timestamp": "01:06:55", "start_second": 3978, "end_second": 4015, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=3978s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "in the generator that's one way that they're different from each other and it has consequences on exactly what they are able to do an interesting thing is that maximum likelihood estimation also turns out to use this same value function and can also be interpreted as having a discriminative function inside it the difference between noise contrastive estimation and maximum likelihood estimation is that for noise contrastive estimation the noise distribution is fixed and never changes throughout training if we choose to use", "start_timestamp": "01:06:55", "end_timestamp": "01:07:27", "start_second": 4015, "end_second": 4047, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=4015s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "a noise distribution that is Gaussian as the reference distribution then in practice learning tends to slow down relatively quickly once the model has learned to create samples that are easily distinguishable from a Gaussian in maximum likelihood estimation we take the parameters of the model distribution and we copy them into the noise distribution and we do this before each step begins so in some ways the maximum likelihood estimation procedure can be seen as the model constantly trying to", "start_timestamp": "01:07:27", "end_timestamp": "01:07:59", "start_second": 4047,
"end_second": 4079, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=4047s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "learn its own shortcomings and distinguish its own samples from the data and in the generative adversarial networks approach we constantly update the generator network by following the gradient on its parameters all three of these approaches constantly follow the gradient on the parameters of the discriminator so we can see the way that we get some computational savings relative to maximum likelihood by looking at the corners that both noise contrastive estimation and generative adversarial networks cut for noise", "start_timestamp": "01:07:59", "end_timestamp": "01:08:30", "start_second": 4079, "end_second": 4110, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=4079s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "contrastive estimation it's clear that the main corner we cut is that we never update the noise distribution and that eliminates a lot of computations right there for generative adversarial networks the way that we're able to cut a corner is that we don't need to make sure that there's an exact correspondence between a density and a sampler so for maximum likelihood if we're going to follow this particular implementation of maximum likelihood we need to be able to sample from the model when we", "start_timestamp": "01:08:30", "end_timestamp": "01:09:00", "start_second": 4110, "end_second": 4140, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=4110s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id":
"HGYYEUSm-0Q", "text": "evaluate the term on the right but we also need to be able to evaluate densities of the model in order to evaluate the D function and we need to perform computations that convert between the density representation and the sampling procedure generative adversarial networks only ever sample from G and only ever evaluate D there's no need to perform these transitions from densities to sampling procedures and that provides a lot of computational savings so I've completed this section of our roadmap on exactly how it is that", "start_timestamp": "01:09:00", "end_timestamp": "01:09:37", "start_second": 4140, "end_second": 4177, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=4140s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "generative adversarial networks are able to work from a theoretical point of view and now I'll move on to a few tips and tricks that should help you to make them work better in your own practical applied work the first really big tip is that labels turn out to really improve the subjective sample quality a lot as far as I know this was first observed by Emily Denton and her collaborators at NYU and Facebook AI research where they showed that back then generative adversarial networks didn't work very well at all you could actually get them to work", "start_timestamp": "01:09:37", "end_timestamp": "01:10:06", "start_second": 4177, "end_second": 4206, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=4177s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "really well if you made them class conditional so Mehdi Mirza and Simon Osindero had developed a conditional version of the generative adversarial network where you could give some input value
that should control what output should come out and Emily and her collaborators showed that if you used that class label as the input you could then create an output value of an image from that class and that these images would be much better than if you just learned the density over images to begin with another thing is that even if", "start_timestamp": "01:10:06", "end_timestamp": "01:10:39", "start_second": 4206, "end_second": 4239, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=4206s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "you don't want to go fully to the level that you have a class conditional model you can learn a joint distribution over the probability distribution of X and Y and even if at sample time you don't provide an input Y to request a specific kind of sample the samples that come out will be better Tim Salimans and I did this in our paper that we'll be showing at the poster session tonight it's not a key contribution of our paper but it's one of the tricks that we used to get better images one of the caveats", "start_timestamp": "01:10:39", "end_timestamp": "01:11:07", "start_second": 4239, "end_second": 4267, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=4239s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "about using this trick is that you need to keep in mind that there are now three different categories of models that shouldn't be directly compared to each other there are those models that are trained entirely without labels there are models that are class conditional and there are models that are not class conditional but that benefited from the use of labels to guide the training somewhat and it wouldn't really be fair to make
a class conditional model and then say that it's strictly superior to some model that didn't use labels to", "start_timestamp": "01:11:07", "end_timestamp": "01:11:32", "start_second": 4267, "end_second": 4292, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=4267s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "improve its samples at all another tip that can really help a lot is a technique that I call one-sided label smoothing and we also introduced this in the paper with Tim that we're showing tonight the basic idea of one-sided label smoothing is that usually when you train the discriminator you're training it to output hard ones on the data and hard zeros on the fake samples but it's much better if you train it to output a soft value like 0.9 on the data and on the fake samples it should still strive to output zeros that's why it's", "start_timestamp": "01:11:32", "end_timestamp": "01:12:08", "start_second": 4292, "end_second": 4328, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=4292s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "called one-sided is that we only smooth the side that's on the data so what this will do is you can think of it as introducing some kind of leak probability that sometimes the data has been mislabeled that we accidentally gave you something fake and said it was real in particular this will reduce the confidence of the model somewhat so that it will not predict really extreme values it's important not to smooth the generator samples and we can see this by deriving what the optimal discriminator is if we smooth by", "start_timestamp": "01:12:08", "end_timestamp": "01:12:42", "start_second": 4328, "end_second": 4362, "url":
"https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=4328s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "replacing the positive targets with one minus alpha and replacing the negative targets with beta then we see that we get this ratio of densities again where in the numerator we have 1 minus alpha times the data distribution and we have beta times the model distribution because this value in the numerator determines where the output of the discriminator function is large and therefore determines where the generator wants to steer samples we need to make sure that this second term does not appear in the numerator otherwise we", "start_timestamp": "01:12:42", "end_timestamp": "01:13:11", "start_second": 4362, "end_second": 4391, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=4362s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "would reinforce the current behavior of the generator if the generator is making lots of weird pictures of grids and we assign beta times P model to those weird pictures of grids then the discriminator will just ask it to keep making weird pictures of grids forever and the gradient near those images will not steer you away from them so that's why we always set beta to zero and only smooth using the alpha term on the left term so we didn't invent label smoothing we're just advocating the one-sided use of it just for the", "start_timestamp": "01:13:11", "end_timestamp": "01:13:44", "start_second": 4391, "end_second": 4424, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=4391s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id":
"HGYYEUSm-0Q", "text": "discriminator label smoothing dates back to the 1980s I'm not sure where it originated Christian Szegedy and his collaborators showed that it works really well for regularizing inception models and one of the really nice properties that I've observed for it is that compared to weight decay weight decay actually will reduce the training accuracy of your model it will actually cause the model to make classification mistakes by shrinking the weights until it's not possible to make the correct classification anymore if you turn up", "start_timestamp": "01:13:44", "end_timestamp": "01:14:15", "start_second": 4424, "end_second": 4455, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=4424s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "the weight decay coefficient enough label smoothing will not actually introduce mistakes it will just reduce the confidence of the correct classifications but it will never actually steer the model toward an incorrect classification so for generative adversarial networks this allows the discriminator to still more or less know which direction is real data and which direction is fake data but it doesn't actually result in it misguiding the generator and it gets rid of really large gradients it gets rid of behaviors where the discriminator", "start_timestamp": "01:14:15", "end_timestamp": "01:14:44", "start_second": 4455, "end_second": 4484, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=4455s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "linearly extrapolates to decide that if you move a little bit in one direction then moving very far in that direction will give you more and more realistic samples it's important to use
batch normalization in most layers of the model and I won't go into batch normalization in detail but the idea is you take a full batch of input samples and you normalize the features of the network by subtracting the mean of those features across the whole batch and dividing by their standard deviation this makes the learning process a lot better", "start_timestamp": "01:14:44", "end_timestamp": "01:15:18", "start_second": 4484, "end_second": 4518, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=4484s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "conditioned unfortunately the use of these normalization constants that are computed across a whole mini batch can induce correlations between different samples generated in the same mini batch so I'm showing you a grid of sixteen examples in the top image that were all in one batch and then the next grid of sixteen samples is all in another batch same generator model in both cases the only reason that there seems to be a common theme in all the examples in each image is that they're using the same mean and standard deviation normalizing", "start_timestamp": "01:15:18", "end_timestamp": "01:15:53", "start_second": 4518, "end_second": 4553, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=4518s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "constants and in this case the model has kind of pathologically learned to have its output depend a lot more on the precise randomly sampled value of that mean and that standard deviation rather than paying attention to the individual values in the code so in the top we see a lot of very like orange images and in the bottom we see a lot of very green images so to fix that problem we are able to
design two different versions of batch normalization that actually process every example in the same way the simplest of these is what we call", "start_timestamp": "01:15:53", "end_timestamp": "01:16:26", "start_second": 4553, "end_second": 4586, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=4553s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "reference batch normalization where you just pick a reference batch of examples at the start of training and you never change them and you always compute the mean and the standard deviation of the features on those reference images and then you use them to normalize different images that you train on it means that every image throughout all of training is normalized using the statistics from the same reference batch and there's no longer this random jitter as we resample the images that are used to create the", "start_timestamp": "01:16:26", "end_timestamp": "01:16:53", "start_second": 4586, "end_second": 4613, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=4586s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "normalizing statistics unfortunately because we always use the same images we can start to overfit to that particular reference batch to partially resolve that we introduced a technique called virtual batch normalization the basic idea here is that every time you want to normalize an example X we normalize it using statistics computed both on the reference batch and on the example X itself added to that batch a lot of people ask me questions about how to balance the generator and the discriminator and if they need to be", "start_timestamp": "01:16:53", "end_timestamp": "01:17:29", "start_second": 4613, "end_second": 4649,
"url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=4613s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "carefully adjusted to make sure that neither one of them wins in reality I usually find that the discriminator wins and I also believe that this is a good thing the way that the theory works is all based on assuming that the discriminator will converge to its optimal distribution where it correctly estimates the ratios that we're interested in and we really want the discriminator to do a good job of that in some cases you can get problems where if the discriminator gets really good at rejecting generator samples the", "start_timestamp": "01:17:29", "end_timestamp": "01:17:58", "start_second": 4649, "end_second": 4678, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=4649s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "generator doesn't have a gradient anymore some people have an instinct to fix that problem by making the discriminator less powerful but I think that's the wrong way of going about it I think the right way to do it is to use things like one-sided label smoothing to reduce how extreme the gradients from the discriminator are and also to use things like the heuristic non-saturating cost instead of the minimax cost and that will make sure that you can still get a learning signal even when the discriminator is able to reject", "start_timestamp": "01:17:58", "end_timestamp": "01:18:26", "start_second": 4678, "end_second": 4706, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=4678s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q",
"text": "most of the samples there are a few other things that you can do to try to make sure that the coordination between the generator and the discriminator works out correctly in particular we really want the discriminator to always do a good job of estimating that ratio we want the discriminator to really be up to date and to have fit really well to the latest changes to the generator that motivates running the update on the discriminator more often than the update on the generator some people still do this I don't usually", "start_timestamp": "01:18:26", "end_timestamp": "01:18:57", "start_second": 4706, "end_second": 4737, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=4706s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "find that it works that well in practice I can't really explain why it doesn't work very well all the theory suggests that it should be the right thing to do but that particular approach doesn't seem to consistently yield an obvious payoff we're now coming to the most exciting part of the roadmap which is the research frontiers in generative adversarial networks can I get a quick check on how much time I have left okay yes so the biggest research frontier in generative adversarial networks is confronting the non convergence problem", "start_timestamp": "01:18:57", "end_timestamp": "01:19:37", "start_second": 4737, "end_second": 4777, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=4737s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "usually when we train deep models we are minimizing a cost function and so we're using an optimization algorithm to perform minimization there are a lot of things that can go wrong with minimization especially when you're
training a deep model you can approach a saddle point rather than approaching a minimum you can approach a local minimum rather than a global minimum we're starting to become skeptical that local minima are as much of a problem as we used to think they were and you can have all kinds of other things", "start_timestamp": "01:19:37", "end_timestamp": "01:20:06", "start_second": 4777, "end_second": 4806, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=4777s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "like bad conditioning high variance in the gradient and so on but for the most part you're pretty much going to go down a hill until eventually you stop somewhere unless your hyperparameters are really bad and you don't usually need to worry that your optimization algorithm will fail to even converge in the case of looking for an equilibrium to a game it is actually pretty difficult to guarantee that you will eventually converge to a specific equilibrium point or even that you will stop in some particular location that", "start_timestamp": "01:20:06", "end_timestamp": "01:20:37", "start_second": 4806, "end_second": 4837, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=4806s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "isn't a great equilibrium so to start looking at exactly how this works we're going to do another exercise where we're going to analyze a minimax game and see what gradient descent does for this game we have a scalar variable X and a scalar variable Y and we have a value function X times Y and basically the one player controls X and would like to minimize this value function the other player controls Y and would like to maximize it and the exercise
is to figure out if this value function has an equilibrium anywhere if so where is that equilibrium", "start_timestamp": "01:20:37", "end_timestamp": "01:21:12", "start_second": 4837, "end_second": 4872, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=4837s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "and then to look at the dynamics of gradient descent and analyze gradient descent as a continuous time process and just determine what the trajectory that gradient descent follows looks like on this particular problem I can take a few more questions while people work on this one now you have GANs that generate really really nice results and train on a lot of data I think like there's the video GAN work presented here that's trained on 27 terabytes of video so the thing I'm wondering is nobody has looked at all these videos how can you know that the GAN", "start_timestamp": "01:21:12", "end_timestamp": "01:21:53", "start_second": 4872, "end_second": 4913, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=4872s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "is not generating near duplicates is there any theoretical motivation is it related to overfitting and are people trying near duplicate search to see if it's just very good at compressing this data instead of generating yeah so duplicating a training example would actually definitely be a form of overfitting it's not something that we really believe happens in generative adversarial networks we don't have a strong theoretical guarantee that it doesn't happen one thing I can point out is that the generator never actually", "start_timestamp": "01:21:53", "end_timestamp": "01:22:24", "start_second": 4913, "end_second": 4944, "url":
"https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=4913s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "gets to see a training example directly it only gets to see the gradients coming from the discriminator so the discriminator would need to perfectly memorize a training example and then communicate it into the generator via the gradient another thing is because we have this problem with fitting games finding the equilibria like people are analyzing in the exercise right now we tend to underfit rather than overfit I'd be really quite happy if we started to overfit consistently but it's actually pretty difficult to", "start_timestamp": "01:22:24", "end_timestamp": "01:22:52", "start_second": 4944, "end_second": 4972, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=4944s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "really measure how much we're overfitting because you wouldn't really expect the model to perfectly copy a training example it's more likely that it would mostly copy the training example and then kind of change a few small things about it and we do things like look for nearest neighbors we generate samples and then see the most similar training example in terms of Euclidean distances but it's really easy to make a small change that causes a gigantic difference in Euclidean distance so that can be kind of hard to tell if it's actually", "start_timestamp": "01:22:52", "end_timestamp": "01:23:23", "start_second": 4972, "end_second": 5003, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=4972s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"}
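[Editor's sketch, not part of the tutorial; the data and helper name are illustrative.] The nearest-neighbour check described above, and the brittleness of Euclidean distance, can be seen in a few lines: an exact memorised copy is trivially caught at distance zero, but a one-pixel circular shift of the very same "image" lands roughly as far away in L2 as an unrelated example.

```python
import math
import random

def nearest_neighbour(sample, train_set):
    """Index and L2 distance of the training example closest to `sample`."""
    best_i, best_d = min(
        ((i, math.dist(x, sample)) for i, x in enumerate(train_set)),
        key=lambda t: t[1],
    )
    return best_i, best_d

random.seed(0)
# stand-in "training images": 100 examples with 64 features each
train = [[random.gauss(0, 1) for _ in range(64)] for _ in range(100)]

# an exact memorised copy is detected at distance exactly zero
idx, d = nearest_neighbour(train[17], train)

# a one-pixel circular shift of the same example produces a large L2 distance,
# so naive nearest-neighbour search would not flag it as a near duplicate
shifted = train[17][1:] + train[17][:1]
idx2, d2 = nearest_neighbour(shifted, train)
```

This is exactly the failure mode mentioned in the talk: a tiny perceptual change (a small shift) causes a gigantic change in Euclidean distance.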
{"video_id": "HGYYEUSm-0Q", "text": "eliminating the duplicates or not and it's also worth mentioning that in many cases generative adversarial nets aren't even necessarily compressing the data sometimes we actually train them with more parameters than there are floating-point values in the original data set we're just converting it into a form where you can get infinitely many samples in a computationally efficient way but yeah we are usually compressing as you said yeah and so my question is right now in like for example the vanilla GANs right you're", "start_timestamp": "01:23:23", "end_timestamp": "01:23:55", "start_second": 5003, "end_second": 5035, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=5003s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "you're taking noise you're doing like a noise shaping in a sense right and then you're reconstructing some signal some image in its native space in its native basis so our question is what do you think of actually doing the generation in a more sparsified basis of those types of signals for example maybe a cosine basis or even the coefficients of some dictionary do you think that it might make the learning of the GANs easier or do you think it might not matter or something like that so I was just curious like should the output of", "start_timestamp": "01:23:55", "end_timestamp": "01:24:26", "start_second": 5035, "end_second": 5066, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=5035s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "the generator network be a set of bases yeah or for example coefficients of some natural basis maybe a Fourier basis or some wavelet basis or a dictionary or something just
wondering if that makes any difference to the learning if it makes it easier because you can put some more priors on these as a member of the deep learning cult I'm not allowed to hand engineer anything so the closest thing I've done to what you're suggesting is my co-author Bing Xu on the original generative adversarial nets paper was able to train a really", "start_timestamp": "01:24:26", "end_timestamp": "01:25:00", "start_second": 5066, "end_second": 5100, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=5066s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "good generator net on the Toronto faces data set by doing layer wise pre-training I wasn't able to get the deep jointly trained model to fit that data set very well back then my guess is it would probably work now that we have batch norm we didn't have batch norm back then you can view what Bing did as being a little bit like what you're suggesting because when you train the output layer of the generator in the training step it learns essentially a dictionary that looks a little bit like wavelet dictionaries and then when", "start_timestamp": "01:25:00", "end_timestamp": "01:25:30", "start_second": 5100, "end_second": 5130, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=5100s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "you start training the deeper layers of the generator those layers are essentially learning to output wavelet coefficients and so I do think that would help yeah question can I use the GANs after they are trained to create more synthetic data for another classifier like the idea that after the GANs are trained I kind of captured the probability distribution of my input and use them
to automatically generate more images like to avoid like how we normally use data set augmentation to the images like that", "start_timestamp": "01:25:30", "end_timestamp": "01:26:09", "start_second": 5130, "end_second": 5169, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=5130s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "yeah so my former intern Chen Qi Chen whom I mentored when I was at Google I don't want to disclose his project but I'll tell you that he's doing something cool related to that and if you talk to him he can decide whether he wants to disclose it or not I don't think I'm giving away anything about what he's done by saying that I've also had a lot of other people tell me that sometimes when they're evaluating a generator network to see how well it's doing the one test they'll run is they will create a synthetic data set using the generator", "start_timestamp": "01:26:09", "end_timestamp": "01:26:40", "start_second": 5169, "end_second": 5200, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=5169s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "and then train a classifier on that new data set then use it to classify the real test set and if that classifier is able to classify the real test set they take that as evidence that their generator was pretty good if it could be used to make a fake training set there are a few downsides to that procedure like for example if you were generating one mode way too often but you were still generating all the other modes occasionally your classifier might still be pretty good even though your generative model is screwed up but it", "start_timestamp": "01:26:40", "end_timestamp": "01:27:10", "start_second": 5200,
"end_second": 5230, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=5200s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "does it does basically seem to work so in the interest of time I think I'll move on to the solution of the exercise but there'll be one more exercise you'll get to ask a few more questions so the solution to this exercise which is we're looking at the value function of x times y where x and y are just scalars there is actually an equilibrium point where x is 0 and Y is 0 when when they're both 0 the each of them causes the gradient to go away on the and then we can look at the gradient descent dynamics by analyzing it as a", "start_timestamp": "01:27:10", "end_timestamp": "01:27:41", "start_second": 5230, "end_second": 5261, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=5230s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "continuous-time system so if we actually evaluate the gradients DX DT is negative Y and dy DT is positive x the sign difference is because one of them is trying to minimize the value of function and one of them is trying to maximize it if we then go ahead and solve this differential equation to find the directions I guess there's a lot of different ways of doing it depending on exactly which pattern matching technique you're most comfortable with my particular approach is to differentiate the second equation with respect to T", "start_timestamp": "01:27:41", "end_timestamp": "01:28:13", "start_second": 5261, "end_second": 5293, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=5261s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} 
{"video_id": "HGYYEUSm-0Q", "text": "and then I get that d squared Y DT squared is negative Y so I recognize from that that we're looking at a sinusoidal basis of solutions and from that you can guess and check the corresponding coefficients and we get that we have this circular orbit where the only real thing that changes exactly what this the circle looks like is the initial conditions so if you initialize right on the origin you'll stay on the origin but if you initialize off the origin you never get any closer to it so a gradient descent goes into an orbit", "start_timestamp": "01:28:13", "end_timestamp": "01:28:48", "start_second": 5293, "end_second": 5328, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=5293s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "and oscillates forever rather than converging and then this is continuous time gradient descent where we have an infinitesimal step size if we use a larger step size then it can actually spiral outward forever so there are actually conditions that you can check to see whether or not simultaneous gradient descent will converge or not and they involve complex eigenvalues of a matrix of second derivatives and I won't go into it because it's not really the kind of thing that makes for a nice talk but the long and short of it is the", "start_timestamp": "01:28:48", "end_timestamp": "01:29:24", "start_second": 5328, "end_second": 5364, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=5328s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "the generative adversarial Nets game does not satisfy the main sufficient condition for convergence so that doesn't mean that they don't converge it means that we don't know whether they 
It seems like in practice they do converge sometimes and diverge other times, and we don't have a great understanding of why. But the most important thing to understand is that simultaneous gradient descent is not really an algorithm for finding equilibria of a game — it sometimes does that, but it's not really its purpose — and the most important research direction in generative adversarial nets is to find an algorithm that does find equilibria in these high-dimensional, continuous, non-convex spaces. It's important to mention that if we were able to optimize the generative adversarial network in function space — if we could update the density function corresponding to the generator, and the discriminator's beliefs about the generator, directly — then we could actually use convexity in function space to prove that simultaneous gradient descent converges for that particular problem. The reason this breaks down is that we don't actually update the densities directly: we update the G and D functions that do the sampling and the ratio estimation, and on top of that we represent G and D using parametric functions — deep neural networks — where the actual output values of G and D are very non-convex functions of the parameters, and that causes us to lose all of our convergence guarantees.
The main way we see this affect the generative adversarial nets game is that we get behaviors like oscillation, where the generator continually makes very different samples from one step to the next but never actually converges to producing a nice, consistent set of samples. In particular, the worst form of non-convergence — and one that happens particularly often — is what we call mode collapse, where the generator starts to make only one sample, or one similar theme of related samples. It usually doesn't output exactly the same image over and over again, but it might do something like: every image it creates is a picture of the same dog, with the dog in different positions or different objects in the background; or every sample it makes is a beach scene. It is essentially generating too few things. The reason mode collapse happens particularly often for the generative adversarial nets game is that the game is a little bit pathological in the way we specify the value function.
"https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=5489s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "particular if we look at the minimax version the min Max and the max min do different things if we do the min max where we put the discriminator in the inner loop and maximize over it there then we're guaranteed to converge to the correct distribution in practice we don't actually do the maximization in the inner loop we do gradient descent on both players simultaneously if we put G in the inner loop that actually corresponds to a pathological version of the game where the generator learns to place all of its mass on the single", "start_timestamp": "01:32:02", "end_timestamp": "01:32:35", "start_second": 5522, "end_second": 5555, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=5522s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "point that the discriminator currently finds to be most likely so Luke Metz and his collaborators produced a really nice visualization of this in their recent papers submitted to iclear where we have this target distribution shown in the middle of the slide which has several different modes in two-dimensional space and then over time we see how as we move left to right and train a generative adversarial Network we learn to sample from different modes of that distribution but we don't ever actually get multiple modes at the same time this", "start_timestamp": "01:32:35", "end_timestamp": "01:33:06", "start_second": 5555, "end_second": 5586, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=5555s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} 
{"video_id": "HGYYEUSm-0Q", "text": "is because simultaneous gradient descent can sometimes behave a little bit like min Max and a little bit like max min and we're just unlucky enough that it often behaves more like max min and does the thing that we don't want some people have explained mode collapse in terms of the fact that we use the reverse KL loss that I described earlier when I said that I don't believe the reverse KL loss it describes why we get sharp samples because the reverse KL loss would prefer to choose a single mode rather than averaged out two different modes it does", "start_timestamp": "01:33:06", "end_timestamp": "01:33:39", "start_second": 5586, "end_second": 5619, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=5586s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "superficially seem like it might explain why we get mode collapse but I don't think that it is actually the explanation in this case for one thing if we use the forward KL we still get mode collapse in many cases also the reverse KL divergence does not say that we should collapse to a single mode it says that if our model is not able to represent every mode and to put sharp divisions between them then it should discard modes rather than blur modes but it would still prefer to have as many modes as the model can represent and", "start_timestamp": "01:33:39", "end_timestamp": "01:34:10", "start_second": 5619, "end_second": 5650, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=5619s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "with generative adversarial networks we usually see is a collapse to a much smaller number of modes than the all can represent that makes me believe that the problem is 
We often see that generative adversarial networks work best on tasks that are conditional, where we take an input and map it to some output, and we're reasonably happy with the result as long as the output looks acceptable — in particular, we may not really notice if there's low diversity in the output. For example, in sentence-to-image generation, as long as we get an image that actually resembles the sentence we're pretty happy with the output, even if there isn't much diversity in it. Scott Reed and his collaborators recently showed that for these sentence-to-image tasks, generative adversarial networks seem to produce samples that are much less diverse than those produced by other models. In the panel on the right, we can see how the sentence "a man in an orange jacket with sunglasses and a hat skis down a hill" gives three different images of a man in essentially the same pose when we use a generative adversarial network, but using the model developed in this paper it's possible to get greater diversity in the output.

One way we can try to reduce the mode collapse problem is to introduce what Tim Salimans calls minibatch features. These are features that look at the entire minibatch of samples when examining a single sample; if a sample is too close to the other members of the minibatch, it can be rejected as having collapsed to a single mode.
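A much-simplified sketch of that idea: augment each sample's discriminator features with a statistic comparing it to the rest of its minibatch, so a collapsed batch produces suspiciously small distances. (This is a toy version with made-up names; the actual minibatch-discrimination layer of Salimans et al. uses learned projections and L1 distances in a learned space.)

```python
import numpy as np

def minibatch_features(h):
    """h: (batch, features) -> (batch, features + 1) with a closeness stat."""
    # Pairwise L1 distances between all rows of the feature matrix.
    dists = np.abs(h[:, None, :] - h[None, :, :]).sum(axis=-1)
    n = h.shape[0]
    # Mean distance of each sample to the other samples in the batch.
    closeness = dists.sum(axis=1) / (n - 1)
    return np.concatenate([h, closeness[:, None]], axis=1)

rng = np.random.default_rng(0)
diverse = rng.normal(size=(8, 4))           # healthy, spread-out batch
collapsed = np.tile(diverse[:1], (8, 1))    # mode-collapsed batch

f_div = minibatch_features(diverse)
f_col = minibatch_features(collapsed)
print("mean closeness, diverse:  ", f_div[:, -1].mean())
print("mean closeness, collapsed:", f_col[:, -1].mean())  # exactly 0
```

The discriminator can then condition on this extra feature, making an all-identical batch trivially detectable as fake.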
This procedure led to much better image quality on CIFAR-10: we're now able to see all ten classes of CIFAR-10 images. On the left I show you the training data, so you can see this data is not particularly beautiful to start with — it's 32×32 pixels, relatively low resolution — and there are things like cars, airplanes, horses, and so on. In the panel on the right we have a GAN trained with minibatch features, and it is now successfully able to generate many different recognizable classes, like cars and horses. Previous generative adversarial networks on CIFAR-10 would usually give only photo-texture blobs — regions of grass, regions of sky, regions of water — without recognizable object classes in them. On ImageNet the object classes are not as recognizable, but if we go through and cherry-pick examples, we can see some relatively nice recognizable images with many different kinds of animals, like dogs and maybe koalas and birds.
"url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=5776s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "birds and so on if we look at some of the problems that arise with this sampling procedure we can see some of the amusing things that convolutional networks get wrong one thing in particular is that I think probably due to the way that pooling works in the convolutional network the network is usually testing whether some feature is absent or present but not testing how many times it occurs so we tend to get multiple heads in one image or animals that have more than one face on the same head we also often get problems where", "start_timestamp": "01:36:49", "end_timestamp": "01:37:21", "start_second": 5809, "end_second": 5841, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=5809s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "the perspective of an image is greatly reduced and I think this might be due to the network not having enough long range connections between different pixels in the image but it's hard for it to tell the things like foreshortening ought to happen in particular the picture of the gray and orange dog looks literally like a cubist painting to me where you know the Cubist's intentionally removed the perspective some of them also just look like we've taken an animal and skinned it and laid its fur out flat on the", "start_timestamp": "01:37:21", "end_timestamp": "01:37:50", "start_second": 5841, "end_second": 5870, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=5841s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": 
"ground and then taken an axis aligned photo of it we also see a lot of problems where individual details are great but the global structure is wrong like there's this cow that is both quadrupedal and bipedal there's a dog whose eyes are different sizes from each other and and a cat that has like a lamprey mouth we also often just see animals that don't really seem to have legs that they just sort of vanished into fur blobs that often conveniently end at the edge of the image so that the network doesn't need to provide the legs", "start_timestamp": "01:37:50", "end_timestamp": "01:38:24", "start_second": 5870, "end_second": 5904, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=5870s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "so did anybody notice anything that actually looked real in in these samples Aaron yeah so the cat was the cat was real to test your discriminator network good job Aaron another really promising way to reduce the moat collapse problem besides many batch features is called unrolled gans this was recently introduced by Google brain and was submitted to iclear and I guess it's worth mentioning that a few other people had suggested doing this for a few years beforehand so it's it is an idea that was floating around into ether a little bit I imagine", "start_timestamp": "01:38:24", "end_timestamp": "01:39:04", "start_second": 5904, "end_second": 5944, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=5904s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "some people in the audience are probably thinking like oh I told people about that but Brian was the first to go ahead and get it to really work really well revisiting the same visualization that we saw 
The way unrolling works: to make sure we're really doing min-max rather than max-min, we use the maximization operation in the inner loop as part of the computational graph that we backprop through. Instead of having a single fixed copy of the discriminator, we build a complete TensorFlow graph describing K steps of the discriminator's learning process, so the generator is essentially looking into the future and predicting where the discriminator will be several steps later. Because it's the generator looking into the future, rather than the discriminator, we're actually setting a direction for that min-max problem: we're saying it's max over the discriminator in the inner loop, and then min over the generator in the outer loop — and that very elegantly gets us around the mode collapse problem.
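The unrolling idea can be seen in miniature on the V(x, y) = xy game from the earlier exercise. In this hand-derived 1-D sketch (my construction, not the paper's implementation, which backprops through K optimizer steps of a real discriminator), the generator player x differentiates through K look-ahead ascent steps of y, which adds a damping term and turns the outward spiral into convergence:

```python
import math

def play(lr, steps, k_unroll):
    """Min-max game V = x*y; x descends an unrolled surrogate loss."""
    x, y = 1.0, 0.0
    for _ in range(steps):
        # Unrolled discriminator: y_K = y + k*lr*x, so the surrogate
        # loss x * y_K has gradient (y + 2*k*lr*x) with respect to x.
        gx = y + 2 * k_unroll * lr * x
        gy = x
        x, y = x - lr * gx, y + lr * gy   # simultaneous updates
    return math.hypot(x, y)

r_plain = play(lr=0.1, steps=500, k_unroll=0)     # spirals outward
r_unrolled = play(lr=0.1, steps=500, k_unroll=5)  # contracts to the origin
print(f"after 500 steps: plain={r_plain:.3f}, unrolled={r_unrolled:.2e}")
```

With k_unroll = 0 the update matrix has determinant 1 + lr² > 1 (divergence); with k_unroll = 5 it drops below 1, so the iterates shrink toward the equilibrium.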
Another big, important research direction for generative adversarial networks is figuring out how to evaluate them. This is actually a problem broader than GANs — it's a problem for generative models across the board. Models with good likelihood can produce bad samples; models with good samples can have very bad likelihood; and even when we talk about good and bad samples, there's no really effective way to quantify how good a sample is. There's a really good paper called "A note on the evaluation of generative models" that walks through a lot of corner cases to clearly explain the problems with the different metrics available today. For generative adversarial networks these problems are compounded by the fact that it's actually pretty hard to estimate the likelihood — though there is a paper on estimating the likelihood in submission to ICLR, so that problem might be cleared up pretty soon, once we have more experience with that particular methodology.
"https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6065s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "imagine a few ways around this one is you could use the reinforce algorithm to do policy gradients and use that to Train the generator network there's also the recently introduced techniques based on the Gumbel distribution for doing relaxations that allow you to Train discrete variables or finally you could do the old-fashioned thing that we used to do I saw geoff hinton on thursday and he was mentioning to me how this reminds him a lot of the way that Boltzmann machines were really bad at generating continuous values so what we do there is", "start_timestamp": "01:41:34", "end_timestamp": "01:42:05", "start_second": 6094, "end_second": 6125, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6094s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "we would pre process continuous values to convert them into a binary space and then we'd use Boltzmann machines from there so you could do the same thing in Reverse with genitive adversarial nuts you could have a model that converts these binary values to continuous values and then use generative adverts or networks from there you could for example train a word embedding model and then have a generative adversarial network that produces word embeddings rather than directly producing discrete words one very interesting extension of", "start_timestamp": "01:42:05", "end_timestamp": "01:42:36", "start_second": 6125, "end_second": 6156, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6125s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} 
{"video_id": "HGYYEUSm-0Q", "text": "the discriminator is to actually make it recognize different classes and this allows us to participate in an important research area of semi-supervised learning with generative adversarial networks originally generative adversarial networks used just a binary output value that said whether things are real or fake but if we add extra outputs saying which class they belong to and then having one fake class we are able to then take the and use it to classify data after we finished training the whole process and", "start_timestamp": "01:42:36", "end_timestamp": "01:43:10", "start_second": 6156, "end_second": 6190, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6156s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "because it's learned to reject lots of fake data it actually gets regularize drooly well using this approach tim Salomon's and and i and our other collaborators in open area we're able to set the state of the art on several different recognition tasks with very few labeled examples on em nist c fart n @ sv hn another important research direction is learning to make the code interpretable Peter Chen's info Gann paper here at nips actually shows how we can learn a code where different elements of the code correspond to", "start_timestamp": "01:43:10", "end_timestamp": "01:43:43", "start_second": 6190, "end_second": 6223, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6190s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "specific semantically meaningful variables like the position of an image another research Direction is connections to reinforcement learning recent papers have shown that generative address cell networks can be 
Finally, if we can come up with a good algorithm for finding equilibria in games, we can apply that algorithm to many other places besides generative adversarial networks: robust optimization, literally playing games like chess and checkers, resisting adversarial examples, guaranteeing privacy against an attacker who wants to thwart it. All of these application areas are examples of games that arise in artificial intelligence and might be improved by the same kinds of techniques that could help us improve generative adversarial networks.

We're very close to out of time, but I'll give you five minutes to do this exercise, and I'll answer the last set of questions during it. This exercise jumps back a little bit to earlier, when I described that there's a different cost function you can use to get maximum likelihood out of generative adversarial networks. I think this is a really good closing exercise, because it drives home the point that the key mathematical tool generative adversarial networks give you is the ability to estimate a ratio. To see how the ratio estimation works, you are going to derive the maximum-likelihood learning rule.
In particular, we have a cost function for the generator network which is an expectation over x sampled from the generator, J = E_{x∼p_g}[f(x)], and we want to figure out what f(x) should be to make this cost function give us maximum likelihood. As a hint, you should first show that the derivatives of the cost with respect to the parameters are given by ∂J/∂θ = E_{x∼p_g}[f(x) · ∂ log p_g(x)/∂θ] — or, if you'd like, you can take that as a given and skip to the last step. At the very end, you figure out what f(x) should be, given this fact about the gradients; if you choose the right f(x), you get the maximum-likelihood gradient. So I'll give you a few minutes to work on that, I'll take a few questions, and then I'll conclude.

Question: in your previous slides about the generator network, you mention an important assumption that the function should be differentiable. What if the function is not differentiable? In some areas, like informatics, the data are categorical labels rather than numerical values, so the function is non-differentiable. In that situation, how do you generate synthetic data using a GAN?
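For reference — the talk defers the solution, so here is a sketch of the standard answer to the exercise just posed, following the stated hint (a reconstruction, not a quote from the talk):

```latex
% Hint: the score-function (likelihood-ratio) identity
\frac{\partial}{\partial\theta}\,\mathbb{E}_{x\sim p_g} f(x)
  = \int f(x)\,\frac{\partial p_g(x)}{\partial\theta}\,dx
  = \mathbb{E}_{x\sim p_g}\!\Big[f(x)\,\frac{\partial}{\partial\theta}\log p_g(x)\Big].
% The maximum-likelihood gradient we want to match (as a cost to minimize):
-\frac{\partial}{\partial\theta}\,\mathbb{E}_{x\sim p_{\mathrm{data}}}\log p_g(x)
  = -\,\mathbb{E}_{x\sim p_g}\!\Big[\frac{p_{\mathrm{data}}(x)}{p_g(x)}\,
      \frac{\partial}{\partial\theta}\log p_g(x)\Big].
% Matching the two expressions term by term gives
f(x) = -\frac{p_{\mathrm{data}}(x)}{p_g(x)}
     = -\frac{D(x)}{1 - D(x)}
     = -\exp\!\big(\sigma^{-1}(D(x))\big),
% using the optimal discriminator D(x) = p_data(x) / (p_data(x) + p_g(x)):
% the discriminator supplies exactly the density ratio being estimated.
```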
"title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "lacto Informatics the data is some categorical categorical label not numerical value so it's 94 insurable so in that use that acceleration how to generate a synthetic data and you using GT and network so there's there haven't been any papers actually solving that problem yet I talk about this a few slides earlier and my recommendations are to try the reinforce algorithm to do policy gradients with discrete actions to try the concrete distribution and Gumbel softmax which are two papers that were recently released about how to", "start_timestamp": "01:46:39", "end_timestamp": "01:47:14", "start_second": 6399, "end_second": 6434, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6399s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "train models with discrete outputs or to convert the problem into a continuous space where a generative address donuts can be applied so the variance and ganz is that it's very powerful in capturing the modes of the distribution right but it's not really truly understanding what images are as in disease you know you start from zero to generate X right so the question is you know if you increase the systems increase the image size assumably the modes of the distribution going to increase exponentially so ultimately you", "start_timestamp": "01:47:14", "end_timestamp": "01:47:54", "start_second": 6434, "end_second": 6474, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6434s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "know if you have a you know practically this may 
not be a problem maybe we just care about hundred by hundred pixel images but assume I'm interested in two thousand by 2,000 pixel images you know if I truly understand what images are how images are generated you know there is no difference between a hundred by a hundred and two thousand by two thousand I can you know build that ultimate machine my question is about like way down the future I mean at the end of the day you are capturing modes of the distribution", "start_timestamp": "01:47:54", "end_timestamp": "01:48:25", "start_second": 6474, "end_second": 6505, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6474s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "but these modes are going to explode if you go to larger images so at some point you know the modes of the model also have an exponential explosion as you use a bigger convolutional net so if I mean I don't want to repeat the same structure I mean the question is the modes of the distribution right at the end of the day you are capturing the modes of the distribution yeah but a larger model can capture more modes I guess the nice thing about natural images is that when you increase the resolution you're looking at a", "start_timestamp": "01:48:25", "end_timestamp": "01:49:02", "start_second": 6505, "end_second": 6542, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6505s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "different level of detail but within the same level of detail the same structure is repeated all across the image so let's say that we've been studying 64 by 64 images and we couldn't really see the individual hairs in an animal's fur and then we 
move up to a higher resolution we can see their fur at the higher resolution we don't need to relearn the distribution over images of fur at every pixel separately we learn one level of detail that can be replicated across the whole image and we generate different Z values at every X", "start_timestamp": "01:49:02", "end_timestamp": "01:49:40", "start_second": 6542, "end_second": 6580, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6542s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "and y coordinate that randomly decide you know fine details of the fur like which angle it should be pointed in and things like that why do you think in practice GANs don't scale well when you go to larger images oh well you might be surprised by what comes in a few slides yeah I think I should probably move toward the conclusion now so recalling exercise 3 we're looking to design this f of X this cost function that's applied for every example generated by the generator in order to recover the maximum likelihood", "start_timestamp": "01:49:40", "end_timestamp": "01:50:12", "start_second": 6580, "end_second": 6612, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6580s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "gradient we start by showing this property that we can write down the gradient of the generator in terms of an expectation where the expectation is taken with respect to generator samples and we multiply f of X by a likelihood gradient that's relatively straightforward to show the basic step is to turn the expectation into an integral use Leibniz's rule which means you have to make a few assumptions about the structure of the distribution involved and then finally we 
take advantage of our earlier assumption that", "start_timestamp": "01:50:12", "end_timestamp": "01:50:46", "start_second": 6612, "end_second": 6646, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6612s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "the generator distribution is nonzero everywhere that allows us to say that the derivatives of p g are equal to p g times the derivatives of log p g so that gives us this nice expression where we can get gradients of the likelihood in terms of samples that came out of the generator but what we would really like is gradients of the likelihood in terms of samples that came from the data so the way that we're able to do that is importance sampling we have this f of X coefficient that we're able to multiply by each of the gradients and we can fix", "start_timestamp": "01:50:46", "end_timestamp": "01:51:21", "start_second": 6646, "end_second": 6681, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6646s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "the problem that we're sampling from the generator when we want to sample from the data distribution by setting f of X to be P data over P generator and this means that we'll have kind of bad variance in our samples because we're sampling from the generator and then rewriting everything to make it look like we sampled from the data but in theory this is unbiased from there it takes a little bit of algebra to figure out exactly how we should take the discriminator and implement this ratio we recall that the optimal", "start_timestamp": "01:51:21", "end_timestamp": "01:51:51", "start_second": 6681, "end_second": 6711, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6681s", 
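The importance-sampling argument described in this part of the talk can be written out compactly. This is a reconstruction from the spoken derivation, assuming the standard notation of the tutorial: p_g is the generator density, p_data is the data density, and the discriminator is a sigmoid applied to logits, D(x) = sigma(a(x)).

```latex
% Gradient identity from the exercise (shown via Leibniz's rule and p_g > 0):
\nabla_\theta \, \mathbb{E}_{x \sim p_g}\big[ f(x) \big]
  = \mathbb{E}_{x \sim p_g}\big[ f(x)\, \nabla_\theta \log p_g(x) \big]

% Importance-sampling choice: f(x) = -\,p_{\text{data}}(x)/p_g(x)
% recovers the maximum likelihood gradient from generator samples:
\mathbb{E}_{x \sim p_g}\!\left[ -\frac{p_{\text{data}}(x)}{p_g(x)}\,
    \nabla_\theta \log p_g(x) \right]
  = -\,\mathbb{E}_{x \sim p_{\text{data}}}\big[ \nabla_\theta \log p_g(x) \big]

% With the optimal discriminator
% D^*(x) = p_{\text{data}}(x) / (p_{\text{data}}(x) + p_g(x)) = \sigma(a(x)),
% the ratio satisfies e^{a(x)} = p_{\text{data}}(x)/p_g(x), hence
f(x) = -\,e^{a(x)}
```

Minimizing this generator cost therefore maximizes the data log-likelihood, which matches the "negative e to the logits" answer stated on the slide.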
"title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "discriminator gives us this ratio of p data over p data plus p generator and doing a little bit more algebra we can rearrange that to say that we need to set f of X to negative e to the logits this is maybe a lot to absorb right now but I think it's pretty intuitive once you've worked through it slowly on your own once and it gives you an idea of how you can take this ratio that the discriminator gives you and build lots of other things with it so to conclude the talk I'd like to show you some really exciting new", "start_timestamp": "01:51:51", "end_timestamp": "01:52:23", "start_second": 6711, "end_second": 6743, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6711s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "results that came out using generative adversarial networks and that kind of addresses the last question we had about whether generative adversarial networks scale to very large images a new model just came out last week I seem to have this curse that every time I have to give a talk about something an important new result comes out right as I have finished my slides so I desperately made some new slides on the plane on the way here plug and play generative networks or generative models sorry make 256 by 256 high-resolution images", "start_timestamp": "01:52:23", "end_timestamp": "01:52:58", "start_second": 6743, "end_second": 6778, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6743s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "of all thousand classes from ImageNet and have 
very good sample diversity the basic idea is to combine adversarial training moment matching in a latent space denoising autoencoders and Monte Carlo sampling using the gradient and the really cool thing is they also work for captioning or inverse captioning where you generate images by giving an input sentence that describes the image overall the basic technique is to follow a Markov chain that moves around in the direction of the gradient of the logarithm of P of x and y with Y", "start_timestamp": "01:52:58", "end_timestamp": "01:53:38", "start_second": 6778, "end_second": 6818, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6778s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "marginalized out you can use denoising auto-encoders to estimate the required gradient but to make the denoising auto-encoder create really good images the auto encoder needs to be trained with several different losses one of those losses is the adversarial networks loss and that forces it to make images that look very realistic as well as images that are close to the original data in l2 space this confirms some of the tips that I gave earlier in the talk for example in the tips and tricks section I said that you often get much", "start_timestamp": "01:53:38", "end_timestamp": "01:54:08", "start_second": 6818, "end_second": 6848, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6818s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "better results if you include class labels we see here that plug-and-play generative models don't make nearly as recognizable images if we generate samples without the class we also see that the adversarial loss is a really important component of this new 
system if you look at the reconstructions of the denoising auto-encoder we begin on the left with the raw data in the middle we show the reconstructed image and on the right we show the reconstruction that you get if you train the model without the adversarial Network loss so adversarial", "start_timestamp": "01:54:08", "end_timestamp": "01:54:39", "start_second": 6848, "end_second": 6879, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6848s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "learning has contributed a lot to the overall quality of this current state of the art model so in conclusion I guess I'd hope that everyone remembers that generative adversarial networks are models that use supervised learning to approximate intractable costs by estimating ratios and that they can simulate many different cost functions including the one that's used for maximum likelihood the most important research frontier in generative adversarial networks is figuring out how to find Nash equilibria in high", "start_timestamp": "01:54:39", "end_timestamp": "01:55:09", "start_second": 6879, "end_second": 6909, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6879s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "HGYYEUSm-0Q", "text": "dimensional non convex continuous games and finally generative adversarial networks are an important component of the current state of the art in image generation and are now able to make high resolution images with high diversity from many different classes and that concludes my talk and I believe that we're out of time for questions because we already took several of them in the exercise breaks and I think Aaron will now announce that we are headed to sign textbooks and I hope you 
know what room we're sending them", "start_timestamp": "01:55:09", "end_timestamp": "01:55:45", "start_second": 6909, "end_second": 6945, "url": "https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6909s", "title": "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)", "thumbnail": "https://i.ytimg.com/vi/HGYYEUSm-0Q/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "lecture seven of deep unsupervised learning today we'll be talking about self supervised learning and it is going to be pretty different from the previous lectures you've heard so far so far you've been looking at a lot of generative models how to use various classes of generative models to generate high dimensional data like images audio text and so forth however unsupervised learning is a much broader goal than just being able to generate data and one of the goals of unsupervised learning is to be able to learn rich features from raw unlabeled", "start_timestamp": "00:00:00", "end_timestamp": "00:00:44", "start_second": 0, "end_second": 44, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=0s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "data such that they can be useful for a lot of downstream tasks and this lecture is going to get at that and recently people have started calling this self supervised learning where the data creates its own supervision and so we are going to look at all the various classes of techniques that allow us to do self supervised learning so so far we've seen density modeling where we've covered Auto regressive models flow models and we also talked about variational inference and we've also looked at implicit generative models", "start_timestamp": "00:00:44", "end_timestamp": "00:01:27", "start_second": 44, "end_second": 87, "url": 
"https://www.youtube.com/watch?v=dMUes74-nYY&t=44s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "implicit density models like GANs and energy based models and both these classes of techniques allow you to learn generative models which is you are going to be able to generate images and be able to report likelihood scores and so on but other than that we mainly looked at applications of generative models to various modalities of data we haven't actually seen how to use unsupervised learning to learn features so that's the motivation for today's lecture how do we learn rich and useful features from raw unlabeled input such that it can be useful", "start_timestamp": "00:01:27", "end_timestamp": "00:02:08", "start_second": 87, "end_second": 128, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=87s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "for a wide variety of downstream tasks and we're also going to ask ourselves the question of what are these various pretext or proxy tasks that can be used to learn representations from raw unlabeled data and if we are able to learn good representations how can we leverage that and improve the data efficiency and performance of downstream tasks with a good pre training model so here is a figure from Ian Goodfellow's deep learning textbook the focus here is how do we learn good representations and here's a simple case study of why representations", "start_timestamp": "00:02:08", "end_timestamp": "00:02:53", "start_second": 128, "end_second": 173, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=128s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - 
CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "matter say you have a bunch of two-dimensional points and if you visualize this in XY coordinates the Cartesian coordinates there are clearly two separate clusters but it's harder to visualize how to linearly separate them but the moment you visualize them in the polar coordinates you can clearly say that there are two different radii and a lot of different angles and so you can draw a linearly separating hyperplane between them so it's clear that representation matters so once you move from the Cartesian representation to", "start_timestamp": "00:02:53", "end_timestamp": "00:03:34", "start_second": 173, "end_second": 214, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=173s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "polar coordinate representation things become a lot easier to handle as well you know you can actually use a linear SVM or logistic regression to learn a classifier on this particular polar coordinate representation so what is deep learning doing deep learning is basically using depth and repeated computation to iteratively refine the features as you move up layer by layer so the bottommost layer is using the raw pixels as input and here it's trying to make sense that there is a person in the photograph and you know", "start_timestamp": "00:03:34", "end_timestamp": "00:04:16", "start_second": 214, "end_second": 256, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=214s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "if it's looking at all the 
background pixels and understanding that there is a face so the way it starts it starts with the highest frequency information at the bottom and it refines at the next level to edges and it refines at the next level to corners and contours and then at the next level to actual object parts and finally it's able to figure out the identity of the objects present in the actual image so just like how we saw representations matter so the representations at the higher levels are more semantic and representations at", "start_timestamp": "00:04:16", "end_timestamp": "00:04:50", "start_second": 256, "end_second": 290, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=256s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "the lower levels are more fine-grain and detailed and high frequency so the deep net can be thought of as learning its own representations where every successive layer is built on top of the previous layer which is more abstract than the raw input and that allows you to do downstream tasks if you take the topmost layers so here's the Venn diagram that Ian Goodfellow suggests for how to think about deep learning so deep learning is a subset of representation learning which is a subset of machine", "start_timestamp": "00:04:50", "end_timestamp": "00:05:27", "start_second": 290, "end_second": 327, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=290s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "
unsupervised learning so unsupervised learning is concerned with learning these representations without labels so it can be considered as another subset of deep learning which is doing deep learning without labels basically so we are gonna get at the goal of representation learning without labels and that's deepens promise", "start_timestamp": "00:05:27", "end_timestamp": "00:06:03", "start_second": 327, "end_second": 363, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=327s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "learning so it sort of gets at the core goal of the class and recently it's been called a self supervised learning and it's used interchangeably with unsupervised learning the exact terminology of what itself and what is undoes not matter it's basically concerned with learning representations with our labels whether one self usually refers to the scenario where you can create your own supervision based on the data but at the end of the day it it can be considered as as another way to reprime tries unsupervised learning", "start_timestamp": "00:06:03", "end_timestamp": "00:06:43", "start_second": 363, "end_second": 403, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=363s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "so why self supervised learning the expense of producing a new dataset for each task this is really high so there are actually billion dollar startups just doing data annotation for people who can just upload their images say what kind of labels they want and overnight or within an hour or fortnight you can get like high quality labels created by humans who would annotate this data 
on the client side so sorry on the server side so basically you need to prepare labeling manuals you need to figure out what categories of", "start_timestamp": "00:06:43", "end_timestamp": "00:07:23", "start_second": 403, "end_second": 443, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=403s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "objects you want you need to have someone else hiring humans or you need to hire your own humans to annotate data and whoever is doing that job you need to create good graphical user interfaces so that the process of annotation is really fast you also need to create good storage pipelines so that every you know let's say people are annotating usually it's like a lot of mouse clicks per minute or second and every mouse click is automatically recorded and converted to appropriate data storage formats and stored efficiently into the cloud so", "start_timestamp": "00:07:23", "end_timestamp": "00:08:00", "start_second": 443, "end_second": 480, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=443s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "there is lots of back-end engineering you need to do not that this is a bad thing to do it is good and like we really need to work on better and better pipelines for data creation however good supervision may not be cheap for example annotating what are the objects contained in an image is probably something that you can take for granted now because people have created a lot of datasets but if you move to another domain like medicine or legal creating another data set may actually be pretty hard so taking advantage of", 
"start_timestamp": "00:08:00", "end_timestamp": "00:08:37", "start_second": 480, "end_second": 517, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=480s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "vast amount of unlabeled data on the Internet is something supervised learning cannot do so no matter how much you can appreciate the success of supervised learning there is still a lot more unlabeled data than there is label data and it would be nice if we can leverage the unlabeled data to further improve the performance of systems that work on label data so it doesn't have to be a dichotomy between hey we just want to do unsupervised learning or we just want to do supervised learning but rather we", "start_timestamp": "00:08:37", "end_timestamp": "00:09:11", "start_second": 517, "end_second": 551, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=517s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "want to figure out how to take advantage of large amounts of unlabeled data order of billions of images or lots of text or audio samples or YouTube videos and learn amazing features and then make the process of doing supervised learning much more cost and compute and time efficient and finally there's this cognitive motivation which is how babies or animals learn in that they mostly learn by experimenting and without actually having labels so a child can just look at other people doing things or its own experience", "start_timestamp": "00:09:11", "end_timestamp": "00:09:50", "start_second": 551, "end_second": 590, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=551s", "title": "Lecture 7 
Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "moving its own hands or looking at other people around in the house or like you know nowadays children grow up with gadgets so they can look at videos and already start learning good features without actually being told this is a cat this is a cat this is a cat like hundreds of times which is how an ImageNet classifier is learned so that was a really nice quote by Pierre Sermanet who's one of the leading researchers in the field which is give a robot a label and you feed it for a second but teach a robot to label", "start_timestamp": "00:09:50", "end_timestamp": "00:10:24", "start_second": 590, "end_second": 624, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=590s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "and you feed it for a lifetime what he means by this is that if you taught the robot the underlying aspects of various objects in a completely self supervised fashion it knows what like a cat or dog is without actually being taught so that means that it can actually generalize much better so labels are cheap but then it may not be the most optimal way to learn representations so what exactly is self supervised learning it is a version of unsupervised learning where data provides its own supervision so in", "start_timestamp": "00:10:24", "end_timestamp": "00:11:05", "start_second": 624, "end_second": 665, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=624s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": 
"dMUes74-nYY", "text": "general the way it could work is you withhold some part of the data and you task a neural net to predict the withheld portion from the remaining parts so this could be like you occlude some part of the image you look at the remaining pixels and you try to predict the occluded portion or you have a video and you just hide some frames in the video you have the other frames and you try to fill in the blanks of the missing frames or you have a sentence and you mask out some words and you ask the neural network to fill in", "start_timestamp": "00:11:05", "end_timestamp": "00:11:39", "start_second": 665, "end_second": 699, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=665s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "those words or you just have to predict the future from the past or the past from the future or present from the past there are various different versions depending on the mask so this way the data is creating its own supervision and you can ask a neural network to learn a lot more than just predicting labels so the details obviously decide what is a proxy loss or what is the pretext task you can think about all these withhold and predict tasks as some kind of pretext task and you can think of whatever details you use", "start_timestamp": "00:11:39", "end_timestamp": "00:12:17", "start_second": 699, "end_second": 737, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=699s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": 
of the underlying representation study uncover could also be different and so that is basically this whole topic which is how can we create these really good tasks which make the neural network to learn a lot of useful things and therefore be very useful in downstream tasks so the motivation another motivation of why we want to learn good features is one of the biggest reasons for supervised", "start_timestamp": "00:12:17", "end_timestamp": "00:12:53", "start_second": 737, "end_second": 773, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=737s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "learning to really take off not just as a research topic but also as an industry of practice is that the you can use a pre trained classifier for a lot of commercial a downstream tasks so a pre trained imagenet state-of-the-art emission a classifier like a rest at 50 can just be taken and the same backbone can be taken and put into a faster or CNN or a mass for CNN or a retina net and can be used for object detection or instant segmentation or it can also be used in a fully compositional neural net with the backbone as the rest in 54 a", "start_timestamp": "00:12:53", "end_timestamp": "00:13:30", "start_second": 773, "end_second": 810, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=773s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "semantic segmentation so this way you're able to solve a lot of harder computer vision problems with black collecting label data is much harder and you can actually just retrain a good classifier take those features and start the underlying downstream tasks with a much better prior and you don't 
have need so much label data now and you can also converge much faster on these harder problems so that way the recipe is very clear so you just collect a large table data set your trainer model you deploy and as long as you have a lot of good", "start_timestamp": "00:13:30", "end_timestamp": "00:14:11", "start_second": 810, "end_second": 851, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=810s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "data and you have sufficient data it is basically all you need in terms of getting some autom automation on on production so most of your industry or usage of computer vision like in video surveillance or in robotics where people have to detect objects or in shopping automated shopping where people you want to detect what people pick or which objects people pick it's basically just object detection and to get a very good object detector all you need is a lot of labels and a lot of good patron features so what is the goal of self-professed", "start_timestamp": "00:14:11", "end_timestamp": "00:14:50", "start_second": 851, "end_second": 890, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=851s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "learning the goal is to learn equally good if not even better features without supervision and be able to deploy similar quality systems as what is currently in production without relying on too many labels so what if instead of collecting 10000 labels now you could just collect thousand labels or hundred labels that makes the process of production much faster much more efficient and you don't have to spend as much and it's also much simpler 
to maintain and you can keep on bootstrapping more and more data you", "start_timestamp": "00:14:50", "end_timestamp": "00:15:28", "start_second": 890, "end_second": 928, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=890s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "don't have to rely on high quality expert labeling and you can still uncover the same level features as you have currently with all the effort had to collect labels so it could also generalize much better potentially because by doing some harder pretext asks than just predicting labels you are expected to learn more about the world and therefore generalizing in the longtail scenario is likely to be better so that is the hope and that's why people want to make self worth learning really work so this has been really you", "start_timestamp": "00:15:28", "end_timestamp": "00:16:04", "start_second": 928, "end_second": 964, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=928s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "know very nicely put together as a more inspiring slide by young Nicole and it's often referred to as the lake which we saw in the introduction to the class which is you can think of self supervised learning as the cake if intelligence of the cake you can think of sociable as learning as the cake and you can think of supervised learning as the icing on the cake and you can think of reinforcement learning as the cherry on the cake and there the argument is that most of the useful bits can come from doing really hard pretext asks just", "start_timestamp": "00:16:04", "end_timestamp": "00:16:44", "start_second": 964, "end_second": 1004, 
"url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=964s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "from data and where the machine is predicting some part for missing parts and you get millions of bits that way whereas in supervised learning consider imagenet you have thousand classes so that's ten bits per image and if you have a million images you have basically a million times ten bit space key that's basically your whole data set and whereas if you're just doing generative modeling you're modeling all possible bits in your data set so that that's that's too huge right so some supervised learning is trying to find a middle", "start_timestamp": "00:16:44", "end_timestamp": "00:17:18", "start_second": 1004, "end_second": 1038, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=1004s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "ground between these two and it's possible that the bits you get from sabra was learning or more useful that said there is a caveat that subscrube is learning the bits you get from there or not as high quality bits is the bits you get from supervised tasks when human is telling you that there is a cat here or there is a dog here or like there there is a cat exactly at this coordinate there's a dog there's a bounding box around a human that's very higher quality bit than saying these two pixels are of the same", "start_timestamp": "00:17:18", "end_timestamp": "00:17:53", "start_second": 1038, "end_second": 1073, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=1038s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", 
"thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "color or like these this is a flipped version of that image or this image is a 90 degree rotated version of the other image things like that so it's not just a number of bits that matter the quality of the bits is equally important so you should take this slide with not not too seriously it's just for inspiration making the bits argument as a way to like work on unsupervised learning is fundamentally flawed because the label data bits are much much much more useful much more grounded in the real world you to behave so here is the aleck own", "start_timestamp": "00:17:53", "end_timestamp": "00:18:31", "start_second": 1073, "end_second": 1111, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=1073s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "suggestion for how to do unsupervised or sub supervised learning which is creating your own proxy tasks and you can think of various different versions of that let's say that there's a video you could predict the top from the bottom bottom from the top left from the right right from the left it's basically masked some part of your input predict the mass part from the unmask part and obviously depending on the mass the mother's going to learn something trivial or non-trivial so usually like like for instance in a", "start_timestamp": "00:18:31", "end_timestamp": "00:19:04", "start_second": 1111, "end_second": 1144, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=1111s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "video if you're just masking a couple of frames in between 
it may be very easy to fill them in by just using the optical flow information from the reference frames and interpolating between the pixels so the model doesn't necessarily have to capture what an object is whereas if you have a sentence and you are predicting the missing words or subwords in the sentence it's possible that the model learns a lot more about the grammar and the syntax and semantics of the language because it's not possible", "start_timestamp": "00:19:04", "end_timestamp": "00:19:32", "start_second": 1144, "end_second": 1172, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=1144s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "to just copy-paste previous words to fill in the sentence because language is already syntactic and grammatical and every word is conveying something new whereas pixels are more high-frequency they are more natural signals and there is a spatio-temporal correlation that is already available naturally so the model may not really learn the high-level information that you wanted it to learn unless you carefully engineer what kind of masks you want to use so for the actual technical content we", "start_timestamp": "00:19:32", "end_timestamp": "00:20:09", "start_second": 1172, "end_second": 1209, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=1172s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "are going to separate it into three parts covering the various pretext-task principles basically the first principle is you corrupt your
data and you try to predict the actual data from the corrupted version and the corruption can just be that you add some noise to your input or it could be that you hide some part of your input and you predict the missing part or it could be that you take your data and you basically do some signal separation so it could be like hey an", "start_timestamp": "00:20:09", "end_timestamp": "00:20:44", "start_second": 1209, "end_second": 1244, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=1209s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "image is basically a grayscale image plus the color so you could predict the color from the grayscale or you have the depth image and you have the color image and you could try to predict the depth image from the color image where let's say you're recording everything with your Kinect so it could be source separation and then you try to predict the separated signals from each other so that is the first principle the second principle is we're going to do something like visual common sense tasks where it's more ad hoc and you're just trying to create tasks", "start_timestamp": "00:20:44", "end_timestamp": "00:21:16", "start_second": 1244, "end_second": 1276, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=1244s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "from data in a very creative way and see what kind of features the model can learn and there we are going to look at three different techniques relative patch prediction jigsaw puzzles and rotation and finally we are going to look at contrastive learning which is really the version of self-supervised learning that's been taking off very
recently and we're going to look at a foundational work word2vec which explains a lot of these foundational ideas like the noise contrastive loss and then we're going to look at a", "start_timestamp": "00:21:16", "end_timestamp": "00:21:52", "start_second": 1276, "end_second": 1312, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=1276s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "version that's been used on images called CPC or contrastive predictive coding and we're also going to look at follow-ups to that that made the CPC pipeline much simpler like instance discrimination and at state-of-the-art instantiations of that note that in this lecture we are not going to cover the more popular self-supervised techniques or anything to do with the latest language pretraining pipelines arguably self-supervised learning has taken off way more in language than in computer vision but the focus of this lecture is going", "start_timestamp": "00:21:52", "end_timestamp": "00:22:30", "start_second": 1312, "end_second": 1350, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=1312s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "to be more on computer vision because language pre-training will be covered separately in a guest lecture by Alec Radford and we're also not going to look at how unsupervised learning helps reinforcement learning that will also be covered separately by Pieter in another lecture ok so now let's go to denoising autoencoders in a denoising autoencoder the basic idea is to add some noise to your input and try to remove the noise and decode the actual image so here you see an MNIST digit and you see
the noisy input on the left and you", "start_timestamp": "00:22:30", "end_timestamp": "00:23:10", "start_second": 1350, "end_second": 1390, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=1350s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "see the denoised image on the right and the encoder takes in the noisy image puts it into a smaller latent representation which is the features that we care about and the decoder is trying to use these features to get back the original input so you hope that the encoder gets the high level details removes the noise and the decoder can upsample that and get you back to the actual image so depending on the kind of noise you add you want it to learn more non-trivial things if you don't add any noise you're just going to learn an identity function", "start_timestamp": "00:23:10", "end_timestamp": "00:23:44", "start_second": 1390, "end_second": 1424, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=1390s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "that's just an autoencoder but if you add some level of noise it's possible that you learn more useful features because you're learning to separate the noise from the actual signal right and if you add too much noise then it may actually be a really hard task because the signal-to-noise ratio will be really low so this is the general computation graph of the denoising autoencoder where x tilde refers to the noisy version of the ground truth x and you're trying to figure out how to reconstruct that back", "start_timestamp": "00:23:44", "end_timestamp": "00:24:21", "start_second": 1424, "end_second": 1461,
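The pipeline just described — corrupt x into x tilde, encode to a small latent, decode, and compare the reconstruction against the clean x — can be sketched with a toy linear denoising autoencoder (an illustrative sketch only; the synthetic data, sizes, and learning rate are assumptions, not the lecture's or the original paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, h = 512, 16, 4              # samples, input dim, latent dim

# Synthetic clean data on a low-dimensional subspace, so denoising is feasible.
basis = rng.normal(size=(h, d))
x = rng.normal(size=(n, h)) @ basis

W_enc = rng.normal(scale=0.1, size=(d, h))   # encoder: z = x_tilde @ W_enc
W_dec = rng.normal(scale=0.1, size=(h, d))   # decoder: x_hat = z @ W_dec
lr = 1e-3

for _ in range(2000):
    x_tilde = x + rng.normal(scale=0.1, size=x.shape)  # corrupt the input
    z = x_tilde @ W_enc                                # latent code
    x_hat = z @ W_dec                                  # reconstruction
    err = x_hat - x                                    # compare against CLEAN x
    g_dec = (z.T @ err) / n                            # MSE gradients
    g_enc = (x_tilde.T @ (err @ W_dec.T)) / n
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

final_mse = np.mean((x @ W_enc @ W_dec - x) ** 2)
```

The key detail, as in the computation graph above, is that the loss compares the reconstruction to the clean x, not to the corrupted x tilde.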
"url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=1424s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "and get the Layden's in a very useful way so there are various different versions of noise that you can add to in put in in a denoising auto-encoder so in the original denoising auto-encoder paper they considered the task of Ensenada they considered three different noises additive isotropic Gaussian noise where you basically just add Gaussian noise to the pixels and another version is the masking noise where you basically some fraction of your input pixels I just chosen at random and you just forced into zero which K you just mash", "start_timestamp": "00:24:21", "end_timestamp": "00:24:59", "start_second": 1461, "end_second": 1499, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=1461s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "them out and they're going to be a black and finally there's the salt and pepper noise where some fraction of the elements the chosen at random and you can basically set them either to the minimum possible value or maximum possible value so instead of basically it's it's a version of masking where instead of just assigning masks to be 0 you can randomly assign the mask to be 0 1 so these are three different noises to consider in the paper you can note that as pixel level noise so you can think of denoising auto-encoders basically", "start_timestamp": "00:24:59", "end_timestamp": "00:25:35", "start_second": 1499, "end_second": 1535, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=1499s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep 
Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "learning a tangent hyperplane on your data manifold where around every input x there is a distortion radius created around it based on the noise that you're trying to add so it's very easy to understand in the case of additive Gaussian noise because you can think of Gaussian is a spherical distortion around your every input and you can think of the decoder as trying to put the distorted version back onto the tangent hyperplane so you can think of the whole denoising auto-encoder pipeline is trying to learn this tangent", "start_timestamp": "00:25:35", "end_timestamp": "00:26:11", "start_second": 1535, "end_second": 1571, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=1535s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "hyperplane that describes the data manifold so that it's able to put back to distortions around the hyperplane back to the correct and like back to the data manifold and that way it uncovers the shape of the data manifold by operating at these local hyper planes at every individual point so here is the loss function of the denoising auto-encoder you can clearly see that there is a version where you can use the reconstruction error for the available pixels which are not having any noise and the reconstruction error for the", "start_timestamp": "00:26:11", "end_timestamp": "00:26:57", "start_second": 1571, "end_second": 1617, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=1571s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "pictures that have been noise and you can also 
weight them based on you know you can prioritize the reconstruction of the noised pixels as compared to the pixels that have not been noised so if you have an MNIST image and you're adding noise to like 10% of the pixels you could prioritize the reconstruction error of those pixels more than the other pixels so that the model is not incentivized to learn an identity function around what is already available without noise and the model is actually striving hard to", "start_timestamp": "00:26:57", "end_timestamp": "00:27:30", "start_second": 1617, "end_second": 1650, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=1617s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "get the details right at the noised pixels and you could also imagine optimizing two different versions of the loss one version of the loss is using the mean squared error and the other version of the loss is using a binary cross-entropy loss and both are equally good on MNIST it makes more sense to use the cross-entropy loss but the mean squared error loss is also very likely to work well as long as you let it train with the right set of hyperparameters so a stacked denoising autoencoder is basically the", "start_timestamp": "00:27:30", "end_timestamp": "00:28:10", "start_second": 1650, "end_second": 1690, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=1650s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "version of the denoising autoencoder where you're going to do this layer by layer so you take your original MNIST image you have one hidden layer and you run it through a denoising autoencoder
and you get that hidden layer now you can take that hidden layer as your new version of the image that you want to use instead of the actual pixels you can add noise at the hidden feature level and learn a denoising autoencoder for that feature right and if you do this iteratively the denoising autoencoder is now operating on more and more abstract", "start_timestamp": "00:28:10", "end_timestamp": "00:28:45", "start_second": 1690, "end_second": 1725, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=1690s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "inputs instead of raw pixels so that's basically the idea in a stacked denoising autoencoder and the hope is that as you keep stacking more layers the higher layers get more semantics but you should also be careful in thinking about what kind of noise you can add to the features back in those days people used to use neural networks with sigmoid nonlinearities so in that case it's easy to add a noise like masking because a sigmoid can be considered as you know the neurons firing or not firing but now", "start_timestamp": "00:28:45", "end_timestamp": "00:29:23", "start_second": 1725, "end_second": 1763, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=1725s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "neural nets are designed much differently like the kind of nonlinearities used are very different so this may not be a particularly appealing idea in the current infrastructure so finally one utility of the denoising autoencoder is that once you've learned sufficient layers with the
stacked denoising autoencoder you could basically have a target like a class label and just freeze all the features that you've learned from the autoencoder and have a supervised layer on top make", "start_timestamp": "00:29:23", "end_timestamp": "00:30:13", "start_second": 1763, "end_second": 1813, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=1763s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "a single linear layer that just predicts the class logits and you could use that to perform a classification task and this was particularly appealing back then because it was really hard to train deep neural networks to just do supervised learning even if you had a lot of data directly training deep neural networks was not something that was particularly working and innovations were needed in terms of using momentum optimizers bigger batch sizes and convolutional neural networks so as far as networks that", "start_timestamp": "00:30:13", "end_timestamp": "00:30:52", "start_second": 1813, "end_second": 1852, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=1813s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "look like feed-forward neural networks go this was a standard recipe to even do supervised learning back then because you needed some reasonably high-level features to train a good supervised model so here are the filters that you learn with a denoising autoencoder for various levels of noise and you can clearly see that the ones where you actually add more noise are learning more of these digit edges whereas the ones where you don't add any noise are hardly learning
anything because it's mostly going to do an identity", "start_timestamp": "00:30:52", "end_timestamp": "00:31:33", "start_second": 1852, "end_second": 1893, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=1852s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "map and this is also visualized for a particular neuron magnified and you can see that the filters are more visible for the higher masking ratios you can also see that there's something like a six visible towards the right and it's getting the notions of digit edges or strokes at the hidden level so these are various classifiers that you can train on top of the features that you get from stacked denoising autoencoders and these are the error rates", "start_timestamp": "00:31:33", "end_timestamp": "00:32:18", "start_second": 1893, "end_second": 1938, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=1893s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "in MNIST classification it's not particularly relevant now because MNIST is considered solved but you can clearly see that you know it's getting like 97% accuracy or something in that range with SVMs put on top so this was a cool result at the time so here is another version of corrupting your image and trying to predict the corruption where you try to hide some portion of your image and try to predict the hidden portion so this is the paper context encoders from Deepak Pathak and work from Alyosha Efros's group here", "start_timestamp": "00:32:18", "end_timestamp": "00:33:03", "start_second": 1938, "end_second": 1983,
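The three corruption types and the reweighted reconstruction loss discussed above can be sketched in a few lines of NumPy (a minimal illustration, not the original paper's code; the noise fractions and the alpha/beta weights are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_noise(x, sigma=0.1):
    """Additive isotropic Gaussian noise on every pixel."""
    return x + rng.normal(0.0, sigma, size=x.shape)

def masking_noise(x, frac=0.3):
    """Force a random fraction of pixels to zero; also return the mask."""
    mask = rng.random(x.shape) < frac
    out = x.copy()
    out[mask] = 0.0
    return out, mask

def salt_and_pepper_noise(x, frac=0.3):
    """Set a random fraction of pixels to the min (0) or max (1) value."""
    out = x.copy()
    hit = rng.random(x.shape) < frac
    out[hit] = rng.integers(0, 2, size=x.shape)[hit].astype(float)
    return out

def weighted_mse(x_hat, x, mask, alpha=1.0, beta=0.5):
    """Prioritize reconstructing the corrupted pixels (alpha) over the
    untouched ones (beta), as in the emphasized DAE loss above."""
    err = (x_hat - x) ** 2
    return alpha * err[mask].mean() + beta * err[~mask].mean()

x = rng.random((28, 28))            # stand-in for an MNIST image in [0, 1]
x_tilde, mask = masking_noise(x)    # the encoder sees x_tilde, never x
```

Setting alpha above beta is what keeps the model from coasting on the identity mapping over the uncorrupted pixels.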
"url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=1938s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "at Berkeley so the wait works since it basically takes an average mass out a rectangular region and encodes that image now with the mask and has a big order that tries to reconstruct the actual image now so that way the model is filling up the details of what's missing in the mask and supervision basically can be constructed from your data itself because you actually knew what part you masked you because you had the complete image so the model is able to learn without any labels by creating its own supervision so that that's", "start_timestamp": "00:33:03", "end_timestamp": "00:33:45", "start_second": 1983, "end_second": 2025, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=1983s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "that's why it's called supervised learning so you can have various instantiations of this where you could mask out only the central region and try to fill up the central region or you can mask out various square blocks across spread across the image much smaller but lots of mass and you could try to fill them all or you could if you have access to a segment the segmentation mass of actual objects in your image you could segment out particularly the pixels belonging to one particular object like in this case the baseball player and you", "start_timestamp": "00:33:45", "end_timestamp": "00:34:29", "start_second": 2025, "end_second": 2069, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=2025s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep 
Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "could just try to fill in those pixels and that assumes access to label data of segmentation mass so that's not something that is completely self supervised but the other two versions are completely self supervised so they've made the reconstruction last good work is you you have a masking region and you have a ground truth for that and you could just apply the reconstruction error on the pixels that have been masked so you take your decoded image and you apply the inverse of the mask and you get all the other pixels out and", "start_timestamp": "00:34:29", "end_timestamp": "00:35:13", "start_second": 2069, "end_second": 2113, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=2069s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "and then you can just mask out those pixels for the reconstruction here so there are multiple losses that you can use for the reconstruction objective one is one is you could use just a mean square error that you saw in the previous slide of diagnosing or encoder or one problem that's usually a common with the mean square error which was also mentioned in the Gant lecture is that they often tend to be blurry so because you don't want Larry reconstructions you actually want sharp predictions of all these missing pixels you can", "start_timestamp": "00:35:13", "end_timestamp": "00:35:49", "start_second": 2113, "end_second": 2149, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=2113s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "actually think of using a gann loss 
which is you have a discriminator and that discriminator is going to behave like a learned loss function and you can think of using the discriminator objective and the reconstruction objective together because back in those days training just a discriminator and using adversarial losses wasn't particularly easy so the authors ended up using a combination of the regular reconstruction objective and the adversarial discriminator objective so this is the architecture", "start_timestamp": "00:35:49", "end_timestamp": "00:36:27", "start_second": 2149, "end_second": 2187, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=2149s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "they adopted where you have your original image at 128 by 128 and you just use strided 4x4 convolutions to downsample the spatial resolution while you keep upsampling your channel resolution and you get a flat hidden vector of 4,000 dimensions and then you upsample using transpose convolutions back into the actual original image and you can use the reconstruction error it could be an l1 or l2 objective and I think the authors tried both l1 and l2 and found l1 to be working slightly better as far as reconstruction goes and", "start_timestamp": "00:36:27", "end_timestamp": "00:37:10", "start_second": 2187, "end_second": 2230, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=2187s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "they also have the discriminator that takes in real data as well as your predicted missing patches and then you classify whether it's a real or a fake image so you can see that the l2 loss is
producing a blurry pixel interpolation basically averaging over the neighborhood pixels the way it works is you just have the missing square and you can fill up the borders based on the pixels that are available immediately to the left the top the right and the bottom respectively and once you've filled those up you can just fill", "start_timestamp": "00:37:10", "end_timestamp": "00:37:48", "start_second": 2230, "end_second": 2268, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=2230s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "in the missing regions of the square based on what you've already filled up within the square on the edges and that will look like a reasonable completion very blurry though and if you look at the adversarial loss it's introducing artifacts that are completely new so as long as the discriminator thinks it's a real object it would still work but it may not particularly have coherence with respect to the actual background image and you've seen this problem in pix2pix where it's very important for a conditional GAN to be provided the context
finish this pretraining process. Now you take the encoder out and you want to use it for a bunch of downstream tasks, and the downstream task could be classification, detection, or semantic", "start_timestamp": "00:38:21", "end_timestamp": "00:39:01", "start_second": 2301, "end_second": 2341, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=2301s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "segmentation. Classification and detection are done on a Pascal dataset, which is much smaller than imagenet, so you can think of the advantage of pretraining as: hey, if you don't have too many labels, you really need some kind of features to start with to be able to perform the task. And semantic segmentation is also on Pascal VOC, but a different version of the dataset, 2012, and that uses another architecture, a fully convolutional net, for doing the semantic segmentation part, using the pretrained part of the context", "start_timestamp": "00:39:01", "end_timestamp": "00:39:37", "start_second": 2341, "end_second": 2377, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=2341s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "encoder as the backbone of the FCN. So here the results are reasonably good: if you just use imagenet features, which is you pretrain a classifier on imagenet and then fine-tune it on Pascal, the results are around seventy-eight point two percent for classification, fifty-six point eight percent for detection, and forty-eight percent for segmentation, and the context encoder fine-tuned on Pascal classification is not that good, it gets only fifty-six point five percent, which is way lower than 
supervised, but it's", "start_timestamp": "00:39:37", "end_timestamp": "00:40:14", "start_second": 2377, "end_second": 2414, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=2377s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "still reasonably good in the sense that it's able to perform on par with, or actually quite a bit better than, an autoencoder: an autoencoder gets fifty-three point eight percent on Pascal classification and around forty-two percent on detection, so they get two to three percentage points more than just a regular autoencoder and the other self-supervision methods that were available at the time, so this was a reasonably interesting result at the time. So next we look at the principle of predicting", "start_timestamp": "00:40:14", "end_timestamp": "00:40:52", "start_second": 2414, "end_second": 2452, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=2414s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "one view from another, where you basically do some source separation and you're trying to predict the separate parts from each other. This is a slide from Richard Zhang, who is also the first author of this line of work. We already saw what a denoising autoencoder is: it basically takes raw data, corrupts it, and tries to reconstruct the original data. Now imagine that you can separate the raw data into two different views; the best way to understand this is that an image can be separated into a color image and the grayscale, and you could try to predict", "start_timestamp": "00:40:52", "end_timestamp": "00:41:31", "start_second": 2452, "end_second": 2491, "url": 
"https://www.youtube.com/watch?v=dMUes74-nYY&t=2452s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "one view from another so you could try to predict the color image from the grayscale putting the grayscale from the color doesn't actually need any deep neural net all you need to do is average two pixels and quantize and you're going to get something that is reasonably grayscale and in exactly the conversion is done it's a weighted average of your RGB pixels so but the other version which is predicting the color from the grayscale means that you have to add some new information to what's already there because you don't have any information", "start_timestamp": "00:41:31", "end_timestamp": "00:42:02", "start_second": 2491, "end_second": 2522, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=2491s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "about the color so that way you do have to understand that hey if you have a tree you know the leaves are green and like the bark is brown so you have to identify some of the objects and try to like learn features about like edges and so on so so so that's the goal of this this line of work so it's best visually illustrated here so grayscale so an image can be just like you have RGB there are like various different color channels parameterizations of an actual image and instead of using the RGB color space you can use the L a B color space", "start_timestamp": "00:42:02", "end_timestamp": "00:42:42", "start_second": 2522, "end_second": 2562, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=2522s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - 
CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "and the L channel behaves like a grayscale image and the ab channels behave like the color image, and you can take the L channel, encode it, and predict the ab channels, and that is basically the task considered in the learning-to-colorize work. So you can see that those light yellow pixels are identifying the eyes and the body of the fish, but you can also see that because the background is blending in with the color of the body of the fish, it's not able to separate it out, so it colors the background uniformly; but", "start_timestamp": "00:42:42", "end_timestamp": "00:43:24", "start_second": 2562, "end_second": 2604, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=2562s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "it's able to color the coral reefs around it green, so you can see that it's able to understand some high-level aspects of the image by doing this task, and that's basically what's going on. And to visualize how the actual image looks, you can just concatenate the two channels and see how it looks, so that is the ocean. And here the authors first tried the obvious idea, which is: take your raw ground truth, then just do a mean-squared error between your prediction and the ground truth, and that would", "start_timestamp": "00:43:24", "end_timestamp": "00:44:00", "start_second": 2604, "end_second": 2640, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=2604s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "producing 
a very degenerate colorization of the actual bird, while the ground truth is much more colorful and diverse. So what the authors realized is: instead of treating the prediction as a mean-squared regression task, what if you treated it as a classification task where you quantize the pixels? So you quantize the ab channel information, binning it into a bunch of categories, and now instead of predicting some value for the ab channel and just regressing to the ground truth, you're", "start_timestamp": "00:44:00", "end_timestamp": "00:44:39", "start_second": 2640, "end_second": 2679, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=2640s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "going to actually output a distribution over the possible quantized values, and we can then use a softmax cross-entropy loss instead of a mean-squared error loss, and in general this works out really well in deep learning. You've also seen how it worked out really well in pixel RNN, where all the pixels were quantized to discrete categories and, instead of using a Gaussian mean-squared error, you use the cross-entropy loss, and that produces sharper images. So here's how it goes: you basically take an image, you", "start_timestamp": "00:44:39", "end_timestamp": "00:45:16", "start_second": 2679, "end_second": 2716, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=2679s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "have the ab channel prediction, you quantize it, and now you're going to use the cross-entropy loss to predict your actual ab channel information. So that is 
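The regression-to-classification switch just described can be sketched as quantizing each (a, b) pair into a discrete bin and scoring a distribution over bins with softmax cross-entropy. The bin size and value range below are illustrative assumptions, not the paper's exact grid:

```python
import numpy as np

def ab_to_bin(a, b, grid=10, lo=-110, hi=110):
    """Quantize an (a, b) color pair into one of (22 x 22) = 484 discrete bins."""
    n = (hi - lo) // grid                       # bins per axis (22 here)
    ia = min(int((a - lo) // grid), n - 1)
    ib = min(int((b - lo) // grid), n - 1)
    return ia * n + ib                          # a single class index

def softmax_xent(logits, target_bin):
    """Cross-entropy over the quantized color bins instead of an L2 regression."""
    z = logits - logits.max()                   # numerically stable softmax
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[target_bin]

# Usage: a uniform (all-zero logits) prediction has loss log(num_bins).
n_bins = 22 * 22
target = ab_to_bin(20.0, -30.0)
loss = softmax_xent(np.zeros(n_bins), target)
```

Because the output is a distribution over bins rather than a single mean value, the model can keep multiple plausible colors alive instead of averaging them into gray.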
basically how the colorization work is done. Another version of this, also done by Richard Zhang, was a split-brain autoencoder, or split-view autoencoder, which is: you separate the channels in your source and you have encoders that try to predict the other channel from the current channel, where x2 hat is the prediction of the second channel from x1, and x1 hat is the prediction", "start_timestamp": "00:45:16", "end_timestamp": "00:46:01", "start_second": 2716, "end_second": 2761, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=2716s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "from x2, and now you concatenate these two predicted channels together, you get your actual input again, and you want to make sure that this version matches your original version. So this way it's like ensuring a backward consistency in some sense, because it's not just about predicting the color from your grayscale: it's also, hey, what you predict from your ab channels (the L channel) and what you predict from your L channel (the ab channels), taken together, should", "start_timestamp": "00:46:01", "end_timestamp": "00:46:38", "start_second": 2761, "end_second": 2798, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=2761s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "make sense and look like my actual image. This can make more sense if you're looking at other kinds of views, like depth images and color images. So here's one way to implement this: you separate out the views, you have two different encoders, you predict the other view's missing
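A minimal sketch of the split-brain idea just described: each view predicts the other, and the concatenated predictions are compared against the full input. The identity "cross-predictors" below are toy stand-ins for the real CNN encoders:

```python
import numpy as np

def split_brain_loss(x1, x2, predict_2_from_1, predict_1_from_2):
    """Each view predicts the other; the concatenation should match the input.

    x1, x2: the two views (e.g. L channel and ab channels, or color and depth).
    predict_2_from_1 / predict_1_from_2: the two cross-prediction networks.
    """
    x2_hat = predict_2_from_1(x1)    # e.g. ab channels predicted from L
    x1_hat = predict_1_from_2(x2)    # e.g. L channel predicted from ab
    full_hat = np.concatenate([x1_hat, x2_hat], axis=-1)
    full = np.concatenate([x1, x2], axis=-1)
    return np.mean((full_hat - full) ** 2)

# Toy usage: identity predictors on identical views give exactly zero loss.
v = np.ones((4, 4, 1))
loss = split_brain_loss(v, v, lambda x: x, lambda x: x)
```

The loss on the reassembled full input is what enforces the "backward consistency" mentioned above: both prediction directions have to agree with the original image at once.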
channels, you concatenate them, and then you have a loss on the actual predicted image. And this is how it would work for color and depth information. So these are all interesting ideas, and we're not really going to look into the", "start_timestamp": "00:46:38", "end_timestamp": "00:47:13", "start_second": 2798, "end_second": 2833, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=2798s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "metrics that these methods achieve, because more than the metrics, these papers are famous for the input-output idea itself, which we looked at in pix2pix, where the fact that even colorization can work so well is so appealing; but in terms of the numbers, we will look at them much later when we look at contrastive learning. So here is the second line of work that we wanted to see, which introduces some kind of visual common-sense tasks. Here we are going to look at relative patch", "start_timestamp": "00:47:13", "end_timestamp": "00:48:03", "start_second": 2833, "end_second": 2883, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=2833s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "prediction. This was an idea put forth by Carl Doersch, Abhinav Gupta, and Alexei Efros, and in some sense this was one of the first papers to do self-supervised learning on images at a larger scale, and it's considered one of the foundational papers for a lot of the ideas put forth later. So what is the task the authors considered here? The task was: given two patches, try to identify the relative position of the two patches, which is to say that if
you have a center patch and you have a patch to its immediate right, and", "start_timestamp": "00:48:03", "end_timestamp": "00:48:43", "start_second": 2883, "end_second": 2923, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=2883s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "given these two patches, a neural network should say: hey, this patch is to the right of this reference patch. This is best understood from the figure: you take an image, you take an approximately three-by-three grid of non-overlapping patches, and now given the blue patch you're trying to predict that the yellow patch is in the top-right corner. So you can number the surrounding patches from one to eight, and you basically have eight categories for a classification", "start_timestamp": "00:48:43", "end_timestamp": "00:49:20", "start_second": 2923, "end_second": 2960, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=2923s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "task given the reference center patch. And this way you can take different regions of the same image, or lots of different images; you just have to take an approximately 3x3 grid of non-overlapping patches, select two of them, give them to your neural network, and create these labels for free from your data. So it is another version of self-supervised learning where you are actually creating a task, like a jigsaw task, sort of, not exactly jigsaw, but you can think of it as learning", "start_timestamp": "00:49:20", "end_timestamp": "00:49:54", "start_second": 2960, 
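The free-label generation described above can be sketched as follows. This is a simplified version (a fixed grid, no spatial or color jittering), with hypothetical patch sizes:

```python
import numpy as np

def sample_patch_pair(img, patch=8, rng=None):
    """Cut a 3x3 grid of non-overlapping patches from an image.

    Returns (center_patch, neighbor_patch, label), where label in 0..7
    indexes which of the 8 surrounding cells the neighbor came from.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    grid = [img[r*patch:(r+1)*patch, c*patch:(c+1)*patch]
            for r in range(3) for c in range(3)]
    center = grid[4]                              # middle cell of the 3x3 grid
    neighbors = [i for i in range(9) if i != 4]   # the 8 surrounding cells
    label = int(rng.integers(8))                  # which neighbor was picked
    return center, grid[neighbors[label]], label

# Usage: the (center, neighbor, label) triple is a free training example.
img = np.arange(24 * 24 * 3, dtype=float).reshape(24, 24, 3)
center, neighbor, label = sample_patch_pair(img)
```

In the real setup both patches go through a shared CNN and a classifier head predicts the 8-way label; the sketch only shows where the labels come from.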
"end_second": 2994, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=2960s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "spatial associations, and then it has to understand: hey, if you give me the ear of the cat and the eyes of the cat, then it's likely that the ear is lying on top, to the right or to the left, and it also has to understand what left and right mean here. So that means it's learning these low-level features as well as high-level associations, in a way that could be useful for a downstream task. So that's pretty much it: you share the CNN encoders for the two", "start_timestamp": "00:49:54", "end_timestamp": "00:50:34", "start_second": 2994, "end_second": 3034, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=2994s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "patches, you use the pooled representation at the end, and you train a classifier on top, and you can create a lot of data for your training task because you can sample a lot of different patch grids from a lot of images and see how good the features are. There are a couple of details in getting this right which were extremely crucial and which have been adopted in almost every follow-up paper, which is making sure that you jitter both spatially and color-wise. So firstly you should make sure that the picked patches don't", "start_timestamp": "00:50:34", "end_timestamp": "00:51:18", "start_second": 3034, "end_second": 3078, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=3034s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 
Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "overlap, or else it's too easy to tell which patch is to the right just by looking at the boundary pixels. That's one thing that was highly non-trivial at the time but considered obvious now: you make sure that the patches don't overlap. The second thing is you jitter the patches to prevent chromatic aberration, by which I mean you sample a particular random crop view, you divide it into a three-by-three grid of patches, and", "start_timestamp": "00:51:18", "end_timestamp": "00:51:57", "start_second": 3078, "end_second": 3117, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=3078s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "then within each cell of the three-by-three grid you basically create another random crop and drop some color channels, so you do spatial and color jittering at every single patch to prevent the chromatic aberration from happening, so that the neural network cannot cheat. Both of these details were highly non-trivial at the time, and they were very crucial in all the follow-up work. So another version, which is very similar to relative position prediction, is actually going all the way:", "start_timestamp": "00:51:57", "end_timestamp": "00:52:31", "start_second": 3117, "end_second": 3151, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=3117s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "relative position prediction looks 
like a jigsaw task, like the jigsaw puzzles that children solve, so why not actually do exactly that and make the network solve jigsaw puzzles? And that's what this paper is doing, which is: you similarly take a three-by-three grid of patches from a random crop of your actual image and you shuffle them, and then you try to predict what the correct order of the shuffling is. So in this case it has to develop a similar kind of spatial reasoning and associations, and that's really what", "start_timestamp": "00:52:31", "end_timestamp": "00:53:14", "start_second": 3151, "end_second": 3194, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=3151s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "works well here, and if the neural network is able to solve this task well, it means it's able to understand what goes to the right and left: it's learning general visual reasoning. So how is it implemented? A very easy way to implement this is: if you have a 3x3 jigsaw puzzle task, then there are 9 factorial possible permutations, so instead of asking the neural network to output the exact order, you can ask the neural network to output an index, and you can hash the corresponding order in a hash table", "start_timestamp": "00:53:14", "end_timestamp": "00:53:53", "start_second": 3194, "end_second": 3233, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=3194s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "of all possible permutations, indexed by some simple scalar category, and you can just have the neural network predict the category, and that way the neural network 
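The permutation-as-class-index trick just described can be sketched as follows. The subset of 100 permutations here is a hypothetical choice for illustration (taking the first 100 orderings; the actual paper selects a maximally spread subset):

```python
import itertools
import random

# The label space: a fixed subset of the 9! = 362880 possible orderings.
# The network predicts the index of the permutation, not the permutation itself.
PERMS = list(itertools.permutations(range(9)))[:100]
PERM_TO_LABEL = {p: i for i, p in enumerate(PERMS)}   # the "hash table"

def make_jigsaw_example(patches, rng):
    """Shuffle 9 patches by a known permutation; the label is that perm's index."""
    label = rng.randrange(len(PERMS))
    shuffled = [patches[i] for i in PERMS[label]]
    return shuffled, label

# Usage: letters stand in for the 9 image patches.
patches = list("ABCDEFGHI")
shuffled, label = make_jigsaw_example(patches, random.Random(0))
```

Mapping each allowed ordering to a scalar class turns the puzzle into an ordinary classification problem, which is exactly why no sequence decoder is needed.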
doesn't need an RNN decoder or something like that, and it just looks like a normal classification task, where every single patch is passed through the same shared CNN encoder, the representations are taken and concatenated in some form, and you're just trying to predict this output category. So a final", "start_timestamp": "00:53:53", "end_timestamp": "00:54:30", "start_second": 3233, "end_second": 3270, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=3233s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "version of this idea of doing puzzle tasks, creating tasks from raw data: the very simplest version is rotation prediction. This is really so simple that it's amazing it works at all, but there is also a concrete argument as to why it works. The idea is: you take an image, you rotate it by a random angle, in this case a multiple of 90 degrees, and you pass it through a convolutional neural net that is asked to", "start_timestamp": "00:54:30", "end_timestamp": "00:55:06", "start_second": 3270, "end_second": 3306, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=3270s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "predict the angle you rotated the original image by; that's really it. So in this case, for the first image the convnet has to predict 90 degrees, for the second it has to predict 270, for the third 180, and for the fourth it just has to predict there's no rotation, which is zero. So why does it learn
something nice, why does it have to learn anything at all? If you look at the 180-degree rotation, the only reason you're able to say it's 180 degrees is because", "start_timestamp": "00:55:06", "end_timestamp": "00:55:38", "start_second": 3306, "end_second": 3338, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=3306s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "there are human faces and you know that they're inverted, right? So that means, to be able to say it's 180 degrees, you've identified that there are human faces in the image. Similarly, look at the first image: there's a bird and there is a tree, and you know that if it was tilted by 90 degrees the bark of the tree would have been horizontal, whereas in the normal view the bird would be standing in the vertical position. So this is basically trying to", "start_timestamp": "00:55:38", "end_timestamp": "00:56:12", "start_second": 3338, "end_second": 3372, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=3338s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "identify: if there was a photographer that captured the actual image, how was the camera positioned? So that is an inductive bias that is physically or geometrically grounded, which is: camera image formation is something fundamental, and that's how we all record images, and most images on imagenet have been captured with the object very centralized, so there's a lot of information as to what pose the camera was placed at to capture the actual object. So because you 
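Creating the rotation-prediction training data described above takes only a few lines; a minimal sketch using numpy's rot90:

```python
import numpy as np

def make_rotation_examples(img):
    """Return the four rotated copies of an image with labels 0..3 (k * 90 deg)."""
    return [(np.rot90(img, k), k) for k in range(4)]

# Usage: each (rotated_image, label) pair is a free classification example.
img = np.arange(16, dtype=float).reshape(4, 4)
examples = make_rotation_examples(img)

# Rotating the 270-degree copy once more brings the image back to the original.
restored = np.rot90(examples[3][0], 1)
```

The network never sees the label source; it only sees rotated images and the 4-way class target, which is what forces it to learn what "upright" looks like.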
rotated it", "start_timestamp": "00:56:12", "end_timestamp": "00:56:45", "start_second": 3372, "end_second": 3405, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=3372s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "you're actually trying to identify, in some sense you're trying to do inverse graphics of the camera parameters, where the only parameter you care about is the rotation angle; but since it's physically grounded, it's going to learn something useful. So here is how they implemented it: you take an image, you rotate it by the various possible angles to construct these rotated versions, you pass them through the same convolutional neural network, and it has to predict the rotation angle, and you just do this for all the images in your", "start_timestamp": "00:56:45", "end_timestamp": "00:57:18", "start_second": 3405, "end_second": 3438, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=3405s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "dataset and you would learn really good features. One interesting point is: you might think, hey, the more angles you add to the dataset, the better the features you can learn; that's not particularly true. The authors found that if you just use the four rotation angles 0, 90, 180, 270, put a linear classifier on top, and train it on CIFAR, which is a small dataset, you can get 89% top-1 accuracy, but if you add multiples of 45 degrees, which is one level more fine-grained, your performance drops by approximately 0.5%", "start_timestamp": "00:57:18", "end_timestamp": "00:58:01", "start_second": 3438, "end_second": 3481, "url": 
"https://www.youtube.com/watch?v=dMUes74-nYY&t=3438s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "and if you just use two angles, which is less fine-grained, the performance drops even more, by two percent, and if you think about it as vertical versus horizontal, like 90/270 instead of 0/180, the performance drops another two percent. So it is particularly important to use both horizontal and vertical, it is more important to use vertical, and it is also not so important to use a lot of angles; maybe the model predicting eight rotations wasn't trained well, but it's sufficient to predict four rotations, you don't have to", "start_timestamp": "00:58:01", "end_timestamp": "00:58:38", "start_second": 3481, "end_second": 3518, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=3481s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "be so fine-grained. So here are the results that they have, where they basically trained an AlexNet; at that time AlexNet was the common backbone used for self-supervision studies. So they trained an AlexNet on all these different tasks; for instance rotation prediction, that is RotNet, was their paper, but the baselines are coming from the existing self-supervision papers, which we also covered. So if you use conv4 and conv5, which are the fourth and fifth convolutional layers in AlexNet, and put a", "start_timestamp": "00:58:38", "end_timestamp": "00:59:17", "start_second": 3518, "end_second": 3557, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=3518s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised 
Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "linear classifier on top, the results from just using supervised learning features are the topmost row, 59.7 percent top-1, which is pretty close to what AlexNet gets in terms of top-1 accuracy; just using random features gives you like 27 percent and 12 percent respectively, and the context paper, Doersch et al., the paper we saw on relative position prediction, is able to get forty-five point six percent, which is way better than random but not as good as supervised imagenet, and the colorization", "start_timestamp": "00:59:17", "end_timestamp": "00:59:58", "start_second": 3557, "end_second": 3598, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=3557s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "work, which we saw, Richard Zhang's work, that's five percent lower than doing relative position prediction, so you can clearly see that doing more puzzle-like tasks is better than doing colorization, and the jigsaw puzzle task is on par with the context / relative position prediction work, a similar 45%, and BiGAN, which is a paper we already covered in the GAN lecture, is not as good as these puzzle tasks but it's on par with colorization. And finally this RotNet paper has a substantial improvement over the state", "start_timestamp": "00:59:58", "end_timestamp": "01:00:37", "start_second": 3598, "end_second": 3637, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=3598s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "of the art at the time so 
basically, the state-of-the-art self-supervision method had forty-five percent, and RotNet improved it to fifty percent, so that's really clearly good, and even on the conv5 layer, where for Doersch et al. the numbers are really low, like 30%, RotNet is clearly better, forty-three percent, and way better than the other methods as well. And these are more detailed results for the various convolutional layers, and you can see that the RotNet numbers are significantly better than the other", "start_timestamp": "01:00:37", "end_timestamp": "01:01:21", "start_second": 3637, "end_second": 3681, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=3637s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "self-supervision techniques at the time, and it also transfers well to Pascal: on Pascal classification, detection, and segmentation, it ended up being the state-of-the-art self-supervision method and was significantly better than context prediction. But still, the gap between RotNet and imagenet labels is really large: if you look at the detection results it's close, fifty-four point four versus fifty-six point eight is a pretty small gap, but on classification the gap was a significant seven percent, and on", "start_timestamp": "01:01:21", "end_timestamp": "01:02:10", "start_second": 3681, "end_second": 3730, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=3681s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "segmentation there's a significant gap of nine mIoU points, mIoU being mean intersection over union. So while this was a pretty promising technique, it was still not there yet. So that's it for 
the puzzle-based tasks; next we'll actually get into the context-based prediction techniques. Predicting the neighboring context was the final line of work that we wanted to cover, and again I wanted to go back to this little slide where you're basically interested in tasking the neural network to predict missing parts from given parts, or like", "start_timestamp": "01:02:10", "end_timestamp": "01:02:53", "start_second": 3730, "end_second": 3773, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=3730s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "neighbors: given the neighbors, you're trying to predict the surrounding context. One idea which was explored way back in 2013 was word2vec, and we're going to cover that first because it's very foundational. This is a figure borrowed from the CS224n class at Stanford, where the goal is to learn good word embeddings. Word embeddings are very fundamental: you have a lot of words in the vocabulary and you would like to represent them as vectors so that similar words have similar", "start_timestamp": "01:02:53", "end_timestamp": "01:03:34", "start_second": 3773, "end_second": 3814, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=3773s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "vectors, either directionally or very close to each other in the high-dimensional space. Instead of a word embedding you could use a one-hot encoding, but that's hardly informative of any similarity across words. So let's say that you have a bunch of sentences and you create a count matrix of which words occur how many times; so in
this case you're basically building a co-occurrence matrix, which is very popular in NLP: how many times each word co-occurs with each other word, and that's usually used to", "start_timestamp": "01:03:34", "end_timestamp": "01:04:14", "start_second": 3814, "end_second": 3854, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=3814s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "construct these similarity matrices. So this is how the count matrix looks: 'I' and 'like' occur together because there are two sentences with them, but 'I' and 'deep' don't go together. Once this matrix is constructed you can think of applying singular value decomposition to it, and this is really how recommender systems have been built, in the sense that you have a history of which users bought which items, you construct a user-item matrix, and then you do an SVD on this", "start_timestamp": "01:04:14", "end_timestamp": "01:04:52", "start_second": 3854, "end_second": 3892, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=3854s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "matrix: you get a user embedding and an item embedding, you cluster similar users and similar items, and you use that to build a recommender system. Similarly, you can think of building a term frequency-inverse document frequency matrix here in NLP, computing an SVD, and getting word embeddings. That's precisely what's being done here, and you get these U and V vectors. So what is the problem with the SVD approach? The first one is sparsity: obviously
there may be very", "start_timestamp": "01:04:52", "end_timestamp": "01:05:33", "start_second": 3892, "end_second": 3933, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=3892s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "related words which don't necessarily co-occur with each other, because you may just never have had that sentence; so sparsity is a big issue, and the resulting matrix you construct is very likely to be sparse. Then there's the computation cost: SVD is cubic to compute, so it's not going to be easy to optimize. There's also the problem of infrequent words: when certain words are not particularly frequent, they're going to be hard to optimize, because the word embeddings for them", "start_timestamp": "01:05:33", "end_timestamp": "01:06:12", "start_second": 3933, "end_second": 3972, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=3933s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "will not be very accurate. And you can also have noise from very frequent words, like the articles 'a' or 'the', which are going to be very frequently present, so you have to use heuristics like inverse document frequency to make sure they don't corrupt your embeddings. All of this sounds like a very hacky, engineered pipeline: it's not particularly efficient, and it's not going to scale well to larger vocabularies or larger datasets; it's very hard to scale", "start_timestamp": "01:06:12", "end_timestamp": "01:06:44", "start_second": 3972, "end_second": 4004, "url":
"https://www.youtube.com/watch?v=dMUes74-nYY&t=3972s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "this approach. Then comes the idea of using n-gram language models: if you just have a bunch of terms in a sentence, you say the probability of the sentence is the product of the probabilities of the individual terms present in it, and that's the unigram model. A bigram model basically takes into account the previous word in the sentence and says that the probability of a word is conditioned on its previous word, and similarly you can start counting pairwise", "start_timestamp": "01:06:44", "end_timestamp": "01:07:24", "start_second": 4004, "end_second": 4044, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=4004s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "occurrences of words instead of just frequencies of single words, so an n-gram model basically generalizes this. So let's actually go to the word2vec idea, which is going to clearly generalize all these things; it was first proposed by Mikolov et al. back in 2013. In word2vec, what's the idea here? You're going to have a bunch of surrounding words and you're going to try to predict the center word: you have a sentence, you pick a particular word and treat it as the center word, take all the surrounding", "start_timestamp": "01:07:24", "end_timestamp": "01:08:13", "start_second": 4044, "end_second": 4093, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=4044s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep
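The count-matrix-plus-SVD approach described above can be sketched in a few lines of numpy; the toy corpus and the window size here are made up for illustration, not taken from the lecture:

```python
import numpy as np

# Toy corpus: build a word-word co-occurrence count matrix,
# then factorize it with a truncated SVD to get dense embeddings.
corpus = [
    "i like deep learning",
    "i like nlp",
    "i enjoy flying",
]
vocab = sorted({w for sent in corpus for w in sent.split()})
idx = {w: i for i, w in enumerate(vocab)}

# Count co-occurrences within a window of one word on each side.
counts = np.zeros((len(vocab), len(vocab)))
for sent in corpus:
    words = sent.split()
    for i, w in enumerate(words):
        for j in range(max(0, i - 1), min(len(words), i + 2)):
            if j != i:
                counts[idx[w], idx[words[j]]] += 1

# Truncated SVD: keep the top-k singular directions as embeddings.
U, S, Vt = np.linalg.svd(counts)
k = 2
embeddings = U[:, :k] * S[:k]  # one k-dimensional vector per word
print(embeddings.shape)
```

This also makes the sparsity complaint concrete: even in this tiny corpus most entries of `counts` are zero, and the full SVD costs cubic time in the vocabulary size.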
Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "words, treat them as the context, embed each of them, and try to identify what the center word is from the surrounding words. That is referred to as the CBOW (continuous bag-of-words) model, and the skip-gram model is exactly the mirror image: it takes the center word and tries to predict all the surrounding words. So one way is to embed all these individual one-hot encodings of the original words with a word embedding matrix, whose size is basically the vocabulary size", "start_timestamp": "01:08:13", "end_timestamp": "01:08:50", "start_second": 4093, "end_second": 4130, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=4093s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "times the embedding dimension, so embedding a one-hot word is equivalent to looking up the corresponding word embedding. The CBOW model then just averages the embeddings of the surrounding words and tries to identify the embedding of the missing word, and the skip-gram model uses the center word's embedding and does an individual softmax over all the surrounding words. So let's actually look into the math of how this works out. Consider the CBOW model: here you're trying to maximize the log", "start_timestamp": "01:08:50", "end_timestamp": "01:09:28", "start_second": 4130, "end_second": 4168, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=4130s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "probability of the
center word given the neighboring context words, so you can think of this as log p(w_C | w_{C-n}, ..., w_{C+n}) for the different offsets n. The way it's actually constructed is that you just average the embeddings of your neighboring words; it's a very simple model. Once you average them you have a single vector, and you say that this goes into a nonparametric softmax over all possible words you can have for your center word, and that way you don't have to explicitly", "start_timestamp": "01:09:28", "end_timestamp": "01:10:12", "start_second": 4168, "end_second": 4212, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=4168s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "take a separate softmax layer; you just optimize the dot product between the average of the neighboring words' embeddings and the embedding of your center word, and that's what this loss amounts to. This way the parameters of your loss function end up being your word embedding matrices, and all you need to do is take lots of different chunks of text, pick a particular center word, pick the corresponding neighboring words, embed them, average the neighboring ones, and try to maximize the dot product with the actual center word relative to all the other", "start_timestamp": "01:10:12", "end_timestamp": "01:10:52", "start_second": 4212, "end_second": 4252, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=4212s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "words in your vocabulary. If you do this over vast chunks of text and optimize for a while, you're going to end up with relatively good word embeddings. So that is the idea of
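The CBOW objective just described (average the context embeddings, then take a nonparametric softmax over the vocabulary) can be sketched directly; the vocabulary size, dimension, and random initialization here are placeholders for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
V, d = 10, 4                     # vocab size, embedding dim (toy values)
W_in = rng.normal(size=(V, d))   # embeddings used for context words
W_out = rng.normal(size=(V, d))  # embeddings used for the center word

def cbow_loss(context_ids, center_id):
    # Average the context embeddings into a single vector ...
    h = W_in[context_ids].mean(axis=0)
    # ... then score every word in the vocabulary by dot product and
    # softmax over all of them (the "nonparametric softmax"):
    scores = W_out @ h
    log_probs = scores - np.log(np.exp(scores).sum())
    # Negative log-likelihood of the true center word.
    return -log_probs[center_id]

loss = cbow_loss(context_ids=[1, 2, 4, 5], center_id=3)
print(loss)
```

Training would backpropagate this loss into both embedding matrices over many (context, center) pairs drawn from a large corpus.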
word2vec's CBOW model. The skip-gram model is exactly the mirror image of this model: you try to predict every surrounding word independently, with the key assumption that given the center word, the surrounding words are all independent of each other; that is, the probability of a surrounding word given the center word is independent of", "start_timestamp": "01:10:52", "end_timestamp": "01:11:23", "start_second": 4252, "end_second": 4283, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=4252s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "the other surrounding words. That's kind of similar to the naive Bayes assumption, which is that given the class, the term frequencies are independent of each other; it's a similar assumption made to simplify the computation. So then you just have a similar kind of nonparametric softmax over the possible word embeddings of the surrounding words, and you perform a similar optimization. A main issue with the nonparametric softmax is that you need to normalize over all possible word embeddings in your vocabulary, and that", "start_timestamp": "01:11:23", "end_timestamp": "01:12:07", "start_second": 4283, "end_second": 4327, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=4283s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "could be very computationally inefficient, especially back at a time when GPUs were not the go-to mechanism for deep neural nets. The main aim of word2vec was to have software where you just feed in a chunk of text and it runs and spits out word embeddings for you, with the whole process running on a very
simple, lightweight CPU. So the authors went for very clever techniques like negative sampling, not normalizing over all the words in the vocabulary in the denominator, so that the partition", "start_timestamp": "01:12:07", "end_timestamp": "01:12:46", "start_second": 4327, "end_second": 4366, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=4327s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "function no longer needs to cover the entire vocabulary; as long as you pick good negative samples, you can be very efficient in terms of the kind of embeddings you learn. We won't really go into the details here, but you can refer to the paper for how hierarchical softmax and negative sampling were used to make word2vec really efficient. In terms of results, the authors got really good word embeddings; for instance, here if you look at", "start_timestamp": "01:12:46", "end_timestamp": "01:13:27", "start_second": 4366, "end_second": 4407, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=4366s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "the word embeddings of different countries and their capitals, you can see that the difference vectors are all quite parallel. The vector from China to Beijing and the vector from Russia to Moscow are almost parallel, which means the relationships captured between pairs of words translate easily: you can map one country to its capital by applying the same translation vector. So it's geometrically very consistent, and
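The negative-sampling trick mentioned above replaces the full-vocabulary softmax with a few binary classifications per training pair. A minimal sketch of the skip-gram-with-negative-sampling loss, with toy sizes and random weights standing in for a trained model:

```python
import numpy as np

rng = np.random.default_rng(0)
V, d = 1000, 8
W_in = rng.normal(scale=0.1, size=(V, d))   # center-word embeddings
W_out = rng.normal(scale=0.1, size=(V, d))  # context-word embeddings

def sgns_loss(center_id, context_id, n_neg=5):
    """Skip-gram with negative sampling: instead of normalizing over
    all V words, classify the true (center, context) pair against a
    handful of randomly sampled 'negative' words."""
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    pos = sigmoid(W_out[context_id] @ W_in[center_id])
    neg_ids = rng.integers(0, V, size=n_neg)
    neg = sigmoid(-(W_out[neg_ids] @ W_in[center_id]))
    return -np.log(pos) - np.log(neg).sum()

print(sgns_loss(center_id=3, context_id=7))
```

The cost per pair is O(n_neg * d) instead of O(V * d), which is what made CPU-only training on huge corpora practical.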
you've also seen earlier how DCGAN was", "start_timestamp": "01:13:27", "end_timestamp": "01:14:04", "start_second": 4407, "end_second": 4444, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=4407s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "able to do vector arithmetic on celebrity faces; that was all pretty much inspired by word2vec. Because the space is geometrically translation-consistent, you can take the vector of Portugal and the vector of Spain, subtract the two, and the difference vector is similar to the difference between the capitals Lisbon and Madrid, the difference of their position vectors. So here are various different", "start_timestamp": "01:14:04", "end_timestamp": "01:14:43", "start_second": 4444, "end_second": 4483, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=4444s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "clustered word embeddings for different categories of words. In word2vec the authors looked at how the word embeddings cluster together, and you can see that various newspapers cluster together, as do various NHL teams and NBA teams. You can actually see that if you take the nearest neighbor of Detroit you get the Detroit Pistons, and for Oakland you get the Golden State Warriors; if you take the nearest neighbor of Steve Ballmer you basically get Microsoft, and for Larry", "start_timestamp": "01:14:43", "end_timestamp": "01:15:32", "start_second": 4483, "end_second": 4532, "url":
"https://www.youtube.com/watch?v=dMUes74-nYY&t=4483s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "Page you get Google. For airlines you basically see that Spain and Spainair are closest to each other, and the same for Greece and Aegean Airlines. So you can clearly see that the word embeddings, because they have looked at which terms occur next to each other, have understood relationships between companies and their CEOs, or airlines and the countries they operate in, and so forth, and that's really interesting. I'm just going to switch to PowerPoint. Okay, so the next thing is how the", "start_timestamp": "01:15:32", "end_timestamp": "01:16:38", "start_second": 4532, "end_second": 4598, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=4532s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "closest entities relate for different short phrases using the word embeddings. Here is one about various explorers, like Vasco da Gama: you can see there's a relationship to Italian explorer, and for chess master there's a relationship to chess grandmaster and Garry Kasparov. So even for these short phrases the closest entities are exactly relevant, and you can also see how it's relevant for the airlines, or, if you add two different embeddings, what the closest entity you get is. So", "start_timestamp": "01:16:38", "end_timestamp": "01:17:23", "start_second": 4598, "end_second": 4643, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=4598s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 -
CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "for instance, if you add the embedding for Vietnam and the embedding for capital, the nearest neighbor you end up getting is the embedding of Hanoi, which is exactly right. Similarly, if you add the embeddings for German and airlines and search for the entity or word with the nearest embedding, you get the airline Lufthansa, which is really cool. Similarly, Russian plus river gets you the Volga River, you get various French actresses, and you also get the currency for Czech. So basically it's", "start_timestamp": "01:17:23", "end_timestamp": "01:18:01", "start_second": 4643, "end_second": 4681, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=4643s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "understanding relationships at the phrase level, not just the word level, and understanding relationships between multiple phrases. Another interesting example is the different phrase embeddings you can get: you can see that the closest tokens for the skip-gram phrase model are way better compared to the other models. These are different models, for instance a noise-contrastive model trained on words, and the skip-phrase model; basically, take the nearest neighbor of the topmost row,", "start_timestamp": "01:18:01", "end_timestamp": "01:18:51", "start_second": 4681, "end_second": 4731, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=4681s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id":
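The analogy arithmetic described above (country + capital, German + airlines, etc.) is just nearest-neighbor search on added and subtracted vectors. A toy sketch with hand-made 2-D embeddings, chosen so that the country-to-capital offset is exactly parallel; real word2vec vectors only approximate this:

```python
import numpy as np

# Hypothetical embeddings for illustration; in a trained word2vec
# space the country->capital offsets are only roughly parallel.
emb = {
    "spain":    np.array([1.0, 0.0]),
    "madrid":   np.array([1.0, 1.0]),
    "portugal": np.array([2.0, 0.0]),
    "lisbon":   np.array([2.0, 1.0]),
}

def nearest(vec, exclude):
    # Return the vocabulary word whose embedding has the highest
    # cosine similarity to vec, skipping the query words.
    cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max((w for w in emb if w not in exclude),
               key=lambda w: cos(emb[w], vec))

query = emb["portugal"] - emb["spain"] + emb["madrid"]
print(nearest(query, exclude={"portugal", "spain", "madrid"}))  # lisbon
```

Excluding the three query words from the candidate set is standard practice, since the query terms themselves are otherwise often the nearest neighbors.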
"dMUes74-nYY", "text": "redmond: it is most relevant in the skip-phrase model compared to the other models, for instance Redmond Wash., Redmond Washington, Microsoft; and for graffiti you get spray paint, graffiti taggers, etc., whereas for the other models the nearest neighbors for graffiti are things which don't really make sense at all. So that way the skip-gram model is understanding the actual concepts. Next we look at this paper called representation learning", "start_timestamp": "01:18:51", "end_timestamp": "01:19:29", "start_second": 4731, "end_second": 4769, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=4731s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "with contrastive predictive coding. In some sense the best way to understand CPC is: if someone were to do word2vec on all the modalities, not just text, how would we go about it? You remember that word2vec is a very interesting model, but it's also very primitive: in the CBOW model you're averaging the embeddings of your neighboring words and then trying to predict the context word, but averaging is a very crude aggregator; it's only important if you really care", "start_timestamp": "01:19:29", "end_timestamp": "01:20:06", "start_second": 4769, "end_second": 4806, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=4769s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "about the simplest possible linear model that you can get working. But in principle you can aggregate context using neural networks; we have
really powerful neural networks for context aggregation, like convnets or transformers or LSTMs. So why not put this all together: use the contrastive loss that word2vec uses to predict the neighbors in a nonparametric softmax, but replace the embeddings of individual words and surrounding words with very powerful, expressive neural networks. So that", "start_timestamp": "01:20:06", "end_timestamp": "01:20:38", "start_second": 4806, "end_second": 4838, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=4806s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "forms the basis for contrastive predictive coding, by van den Oord et al. So here's the idea: let's say you have a raw audio signal and you're trying to predict the future audio signal from the past, or to form relationships between the future audio and the past audio. Call the past the context c, and call the future x; instead of predicting the actual audio like a WaveNet, what you want is to operate in the latent space. So let's say that you encode the context", "start_timestamp": "01:20:38", "end_timestamp": "01:21:18", "start_second": 4838, "end_second": 4878, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=4838s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "with an encoder, and you also encode a future audio chunk (you can call it a target) with the same encoder. Now our goal is to maximize the mutual information between the context and the target. Don't really worry about what mutual information is or why it comes out
of nowhere; that's not really the objective here. The goal is to make sure that the encoder learns a representation that is maximally predictive of the actual future when contrasted with some fake futures", "start_timestamp": "01:21:18", "end_timestamp": "01:21:57", "start_second": 4878, "end_second": 4917, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=4878s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "So that's really what's going on in CPC: you present your neural network a bunch of targets, among which your actual target is present but there are also some fake targets. Imagine that you sample random audio chunks from totally different waveforms, or sample audio chunks which do not correspond to that exact future time step, and you present the neural network various alternatives for what the true audio chunk should be; based on the past context, the real future and", "start_timestamp": "01:21:57", "end_timestamp": "01:22:31", "start_second": 4917, "end_second": 4951, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=4917s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "all these other possible alternatives, the neural network is supposed to pick, or classify, which is the right future. Similar to word2vec, this can be done in a nonparametric softmax fashion: instead of actually decoding the audio chunk, you just make sure that the embedding of your context and the embedding of your true future correlate the most when contrasted with the other, fake audio chunks. So that way it becomes the
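The classify-the-true-future objective just described is the InfoNCE loss. A minimal numpy sketch, where the encoder outputs are faked with random vectors, the bilinear weight matrix is assumed to be the identity, and the "true future" is made artificially correlated with the context, all purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16

def info_nce(c, z_pos, z_negs, W):
    """InfoNCE: score the context c against the true future z_pos and
    a set of fake futures z_negs with a bilinear score z^T W c, then
    take a softmax and penalize failing to pick out the true one."""
    candidates = np.vstack([z_pos] + list(z_negs))
    scores = candidates @ W @ c           # one bilinear score per candidate
    log_probs = scores - np.log(np.exp(scores).sum())
    return -log_probs[0]                  # true future sits at index 0

c = rng.normal(size=d)                    # stand-in context embedding
z_pos = c + 0.1 * rng.normal(size=d)      # correlated "true future"
z_negs = rng.normal(size=(7, d))          # unrelated fake futures
W = np.eye(d)                             # assumed identity for this sketch
print(info_nce(c, z_pos, z_negs, W))
```

With 8 candidates, random guessing gives a loss of log 8; a correlated positive should score well below that, which is exactly what training pushes the encoder toward.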
softmax over your number of negatives, and you can actually use any", "start_timestamp": "01:22:31", "end_timestamp": "01:23:06", "start_second": 4951, "end_second": 4986, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=4951s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "kind of score function to assign a score between two embeddings. It could be a simple dot product; in contrastive predictive coding they make use of a bilinear product, which has also been used in past work. That's a little more expressive than a regular dot product, and it doesn't require you to normalize the vectors. You can think of the W matrix in the bilinear product as learning some kind of association matrix: it figures out some property that helps you to", "start_timestamp": "01:23:06", "end_timestamp": "01:23:40", "start_second": 4986, "end_second": 5020, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=4986s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "correlate two different things. So that's really what's going on in CPC: you take a long raw audio signal, split it into small chunks, and encode each of these small chunks with a shared encoder; it could be a strided convolutional neural network in this case. Then you take a bunch of past audio chunks, pass them through a GRU (any autoregressive model would do), and use the final hidden state of the GRU to predict all the future latents of the true", "start_timestamp": "01:23:40", "end_timestamp": "01:24:17", "start_second": 5020,
"end_second": 5057, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=5020s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "audio chunks; by predict I just mean that you contrastively try to maximize the agreement of those embeddings with the true futures when contrasted with the fake futures. You can sample negatives from other time steps within the same audio waveform, or from other audio waveforms, and depending on the negatives you're learning different things. For instance, if you're collecting audio from different speakers, negatives that come from other speakers let you learn representations that allow you to", "start_timestamp": "01:24:17", "end_timestamp": "01:24:49", "start_second": 5057, "end_second": 5089, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=5057s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "identify the speaker, while negatives from within the same waveform, the same speaker, let you learn more fine-grained phoneme features, which are useful for phoneme classification. So depending on the downstream task, the kind of negatives you pick is going to be crucial. So here's basically CPC at a high level; the diagram is very clear. You're basically going to do this across various audio waveforms and various numbers of time steps, and you should be very careful in picking", "start_timestamp": "01:24:49", "end_timestamp": "01:25:24", "start_second": 5089, "end_second": 5124, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=5089s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep
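The encoder-plus-GRU pipeline described above, with one prediction matrix per future step, can be sketched structurally; the encoder and autoregressive model are replaced by random stand-ins here, and all sizes are toy values:

```python
import numpy as np

rng = np.random.default_rng(0)
d, T, K = 8, 20, 3            # latent dim, timesteps, prediction horizons

# Stand-ins for the real networks: g_enc would map each audio chunk
# to a latent z_t, and a GRU would summarize z_1..z_t into context c_t.
z = rng.normal(size=(T, d))   # pretend encoder outputs for T chunks
c_t = z[:10].mean(axis=0)     # pretend autoregressive state after step 10

# One prediction matrix W_k per horizon k, as in the CPC diagram.
W = [rng.normal(scale=0.1, size=(d, d)) for _ in range(K)]

# For each horizon, the predicted latent is W_k @ c_t; training would
# push the true z[10 + k] to score higher than negatives under this score.
for k in range(K):
    pred = W[k] @ c_t
    score_true = z[10 + k] @ pred
    print(k + 1, float(score_true))
```

Each W_k defines a separate pretext task (predict 1, 2, ... K steps ahead), and optimizing all of them jointly is what forces the representation to be rich.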
Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "what the gap between the future and the context is. If there are overlapping audio chunks, then just like in the Doersch et al. work, where overlapping patches made the jigsaw-style puzzles very easy, CPC basically suffers from the same problem: you should make sure that your negatives, or rather your actual prediction tasks, are not so trivial that the network could just look at whatever is overlapping and predict that. So that's one really important thing about CPC. But the more", "start_timestamp": "01:25:24", "end_timestamp": "01:25:55", "start_second": 5124, "end_second": 5155, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=5124s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "interesting thing is that you can actually frame these tasks in any fashion: the order of time is not so important, and you can pick anything as the context and anything as the target. You can predict from the future back to the past as well, or you could even mask something in the middle and use everything else as the context. So it's totally up to you how to frame what is the context and what is the target in CPC, but based on how you frame it, you should make sure that the", "start_timestamp": "01:25:55", "end_timestamp": "01:26:24", "start_second": 5155, "end_second": 5184, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=5155s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "negatives and the targets
are chosen in a non-trivial fashion. So you can clearly see that CPC is trying to generalize the older ideas, like puzzle tasks and word2vec, together: it's basically a framework in which you can perform all of these different tasks within one particular architectural variant, and various different hyperparameters will correspond to various different versions of these different tasks. So you can think of CPC as trying to do something like multi-step prediction tasks, so if you look at these multiple predictions", "start_timestamp": "01:26:24", "end_timestamp": "01:26:58", "start_second": 5184, "end_second": 5218, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=5184s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "emerging out of the c_t vector in the diagram, you can think of a different W matrix being used for each of these different prediction time steps, and each of them corresponds to saying, hey, predict one step ahead, or predict two steps ahead, predict three steps ahead, and so forth. You can think of all of them as trying to make the representations learn different things, and because you are optimizing all of them at once, you're trying to learn a really rich representation that is able to do lots of different self-supervised tasks at once. So it constructs a whole", "start_timestamp": "01:26:58", "end_timestamp": "01:27:29", "start_second": 5218, "end_second": 5249, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=5218s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "variety of pretext tasks within a single loss, a single framework, and it's really appealing that way. So one figure which is really nice to understand
what CPC is trying to do is how it basically is doing something like slow feature analysis. What I mean by that is, your audio waveform is really high-frequency and fast varying, and the actual information that you care about for downstream tasks is basically the slowly varying, high-level signal content like phonemes, because that's what really allows you to use CPC", "start_timestamp": "01:27:29", "end_timestamp": "01:28:08", "start_second": 5249, "end_second": 5288, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=5249s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "features for something like speech recognition. So what CPC is trying to do is make predictions in the latent space at the level of phonemes instead of raw audio waveforms. As you keep processing these audio signals the information becomes more and more semantic, and so if you're trying to predict something that's not overlapping in terms of a target, if you're trying to predict a target that's going to be a few time steps ahead, you're trying to go more towards these slowly varying phonemes that", "start_timestamp": "01:28:08", "end_timestamp": "01:28:41", "start_second": 5288, "end_second": 5321, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=5288s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "you know would only change if the time steps are sufficiently far apart. And so therefore, if you're doing predictions at an appropriate offset (by offset I just mean that the gap between c_t and z_{t+k}, where k is the number of time steps between them, is sufficiently high such that the phonemes actually change),
then you're going to end up learning really rich features. So this is a really nice visualization of the representations learned on the CPC audio task, where you're basically collecting a data set of", "start_timestamp": "01:28:41", "end_timestamp": "01:29:14", "start_second": 5321, "end_second": 5354, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=5321s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "various speakers as audio waveforms and just performing the CPC optimization, and then you're taking the embeddings out and doing a t-SNE visualization in 2D, and you can clearly see that different speakers have clustered out into separate blobs, so it's clearly captured the speaker identity. And you can also see that the accuracy of predicting the positive sample in the contrastive loss is very high in the beginning but it keeps going down steadily, and by", "start_timestamp": "01:29:14", "end_timestamp": "01:29:56", "start_second": 5354, "end_second": 5396, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=5354s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "that what I mean is, it's much easier to perform this contrastive task of identifying what is the right future when the prediction offset is not much, when you're actually trying to predict much closer to the future, but as you keep moving further and further away, the mutual information between what you already have and what you're trying to predict is much lower, there's much more entropy, so you're not actually able to optimize those future time steps as well because the context is not
sufficient. So the accuracy drops exponentially as you", "start_timestamp": "01:29:56", "end_timestamp": "01:30:27", "start_second": 5396, "end_second": 5427, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=5396s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "keep increasing the number of time steps you're using to predict the future. So here are the CPC audio results in terms of the downstream tasks. On the left you see both the phoneme classification and speaker classification results, and for phoneme classification there are forty-one possible classes. Basically the way it works is you take the CPC features, you freeze them, and you just put a linear classifier on top of these CPC features, and you try to perform the task with the labels, which is to say you try to identify the", "start_timestamp": "01:30:27", "end_timestamp": "01:31:00", "start_second": 5427, "end_second": 5460, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=5427s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "phonemes in the audio chunk, or you try to identify who spoke this particular thing, and you have labels for that, but you're not gonna change the features, you're just going to keep them frozen. For that version, CPC speaker classification gets 97.4 percent accuracy; it is very close to what you'll get by just doing supervised learning, whereas if you just use something like MFCC features, which are very engineered, you're not able to do that well, you're just able to get 17.6%. So these features that you learn in a", "start_timestamp": "01:31:00", "end_timestamp": "01:31:33", "start_second": 5460, "end_second": 5493,
"url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=5460s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "completely unsupervised fashion are v better way more semantic than something engineered with domain knowledge and also for phoneme classification which is actually even higher than speaker classification CPC features without any fine-tuning just linear classifiers able to do is able to get 64 point six percent way better than MCC features which is this forty percent and we're better than rounding random initializations 30 percent and supervised learning gets seventy four point six percent which is 10% better than just linear classifier", "start_timestamp": "01:31:33", "end_timestamp": "01:32:07", "start_second": 5493, "end_second": 5527, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=5493s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "on top of CPC features however the authors found that if you put an MLP instead of a linear classifier you can actually get pretty close to 74% with just CPC features no fine-tuning so this means that the information may not be linearly separable but all the useful information for performing foreign classification is there the CPC like the uncovered features and on the right you see a positions for phoneme classification experiments and the the point I mentioned earlier is are really illustrated well here where depending on", "start_timestamp": "01:32:07", "end_timestamp": "01:32:45", "start_second": 5527, "end_second": 5565, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=5527s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep 
Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "where you take the negative samples from it you're going to get different levels of results so if you if you if all if your negatives are all coming from the same speaker the accuracy is like sixty five point five percent that's that's basically for that that means that the models like learning only phoneme relevant features it's not trying to do speaker and if occation whereas if you are if your negatives are all coming from mixed speaker so that that's going to get sixty four point six percent which is the result on the left table so", "start_timestamp": "01:32:45", "end_timestamp": "01:33:19", "start_second": 5565, "end_second": 5599, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=5565s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "that means that if you are more clever about sample negatives we already know what your downstream task is you could prioritize something negatives in a fashion that will incentivize CPC to learn the features that will be more relevant for your downstream tasks so if you're if you don't really care about speaker identification you could just make sure that all the negatives are constantly coming from the same speaker and so that's really interesting way to illustrate this point and and and and the second ablation that they did is the", "start_timestamp": "01:33:19", "end_timestamp": "01:33:49", "start_second": 5599, "end_second": 5629, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=5599s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "number of steps 
you predict into the future. Namely, you would imagine that predicting all the way into the future is going to be really helpful, but it doesn't turn out to be the case: if you predict only up to twelve steps instead of predicting all the way up to sixteen steps, the downstream accuracy is better. So this means that the right way to predict is such that the targets that you're trying to predict should share some amount of information with the context that you already have. As you go further and further into the", "start_timestamp": "01:33:49", "end_timestamp": "01:34:19", "start_second": 5629, "end_second": 5659, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=5629s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "future, the entropy is higher, and so the amount of actual information, the number of bits that are shared between the two entities, is not much, so making a neural network focus on those targets may actually end up encoding not very useful features. So the hard part about CPC is trying to pick the right number of time steps to predict into the future, or how you sample the negatives, but if you get those details right, the features learnt are really useful and on par with supervised learning. So one", "start_timestamp": "01:34:19", "end_timestamp": "01:34:52", "start_second": 5659, "end_second": 5692, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=5659s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "motivation for CPC was that they wanted a single framework to work on any modality, any kind of self-supervised learning on any modality; you should just be able to use the same framework. So that's
a lofty goal, so let's see how they actually instantiate it for ImageNet. So here are the ImageNet numbers for CPC, where the way they actually executed the framework is as follows: you take an image and you take these overlapping patches, so you grid your image into a bunch of overlapping patches. In this case the image is 256 by 256 from ImageNet and you're", "start_timestamp": "01:34:52", "end_timestamp": "01:35:32", "start_second": 5692, "end_second": 5732, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=5692s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "basically taking 64 by 64 patches laid across the image, and you take patches with 50% overlap, so 32 by 32 is the stride, and that means you get a seven by seven grid of patches. You would encode each patch with the same ResNet, so think about it as a ResNet-101 or a ResNet-50, and you would get a meaningful embedding at the end. And what you do with that is, now that you have an embedding at every single patch, this will form a grid of embeddings, and you can perform predictive tasks, 2D predictive tasks, on top of this grid", "start_timestamp": "01:35:32", "end_timestamp": "01:36:08", "start_second": 5732, "end_second": 5768, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=5732s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "so you can treat this grid as your sequence: just like the audio sequence is a bunch of overlapping audio chunks from your actual raw audio signal, in this case it's a bunch of overlapping patches in this 2D grid, and the task that the authors construct is to predict the future patches from the
top rows of patches. So in this case you're basically using the first three rows, let's say, and then you're trying to predict the bottom three rows; you try to predict every single patch in the bottom", "start_timestamp": "01:36:08", "end_timestamp": "01:36:41", "start_second": 5768, "end_second": 5801, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=5768s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "three rows by using the rows of patches from the top three rows. In order to aggregate the context of the top few patches, you would want to use some kind of model that can take a bunch of embeddings in a two-dimensional layout and try to summarize what the embeddings have seen at every single spatial location, and you also want to do it in such a way that the information from the top doesn't leak into the bottom, because you're trying to predict the bottom from the top. We already know of one model", "start_timestamp": "01:36:41", "end_timestamp": "01:37:12", "start_second": 5801, "end_second": 5832, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=5801s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "that allows us to do this very efficiently: it's called masked convolution, as in the PixelRNN- and PixelCNN-style models, and it also makes sense because PixelCNNs were also invented by the same first author, so he just used PixelCNNs to aggregate the context of the top few rows of patches. And once you lay that out on top of the grid of patches, you can predict the bottom patches in a very parallel fashion. So here is an example of how it would look for an actual image when you grid it
into patches so take this", "start_timestamp": "01:37:12", "end_timestamp": "01:37:44", "start_second": 5832, "end_second": 5864, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=5832s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "dog, and you're constructing all these overlapping patches, and then, let's say, you take the first three rows of embeddings and you try to predict the last two rows: is this the patch belonging to the last row, second column, or not? Is this the patch belonging to the last row, third column, or not? You would perform all these predictive tasks once you get all the embeddings of individual patches and a PixelCNN on the top. So how does the accuracy work out? Say you're doing something similar", "start_timestamp": "01:37:44", "end_timestamp": "01:38:22", "start_second": 5864, "end_second": 5902, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=5864s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "to the audio experiment, where you train all this on a lot of images and then you take just the ResNet encoder out, put a linear classifier on top of it, and see how well it performs on ImageNet classification, which is also the standard test used in previous self-supervision methods. Initially they were all attempted with AlexNet, but the baselines for ResNet exist in this table: we've already seen relative position, we've seen BiGAN, colorization, and jigsaw puzzles. So all these methods, when", "start_timestamp": "01:38:22", "end_timestamp": "01:38:58", "start_second": 5902, "end_second": 5938, "url":
"https://www.youtube.com/watch?v=dMUes74-nYY&t=5902s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "you just use a rest net encoder and you put a mean pool at the end and you put a linear classifier at the end D we get you a top one accuracy like not more than 38% jigsaw works the best and the rot net numbers are not there on this but they're out net numbers are not are not higher than the CPC version of the CPC numbers so so if you look at the Aleks net results they are like around 38% and if you use a resident we do every single baselines numbers goes up so relative position gets a 6% game by just using a rest net video instead of", "start_timestamp": "01:38:58", "end_timestamp": "01:39:37", "start_second": 5938, "end_second": 5977, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=5938s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "an alux net and colorization goes to 39 from 35 so the baseline for jigsaw doesn't exist here but I would imagine it getting to somewhere in the early 40s so CPC gets forty eight point seven this is really an old result now we will see in the next few slides how the state of the art has been pushed up way further but at that time this was a pretty big jump from the existing state of the art and it also works really well if you look at if you look at competitive approaches of similar nature like you know a relative position of jigsaw is", "start_timestamp": "01:39:37", "end_timestamp": "01:40:15", "start_second": 5977, "end_second": 6015, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=5977s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep 
Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "like kind of performing these spatial Association tasks too but doing it in a contrast of fashion and doing lot like a family of tasks within one parametrize model gets you much better numbers and these are the standard reason we should visualize what kind of features that these models learn which is you take a particular feature there and you just see what new what what what kind of input maximally activates a particular neuron and you lose for a bunch of neurons and you see that you know like maximally activating neurons are the", "start_timestamp": "01:40:15", "end_timestamp": "01:40:48", "start_second": 6015, "end_second": 6048, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=6015s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "corresponding to different classes in this case the first row is corresponding to leaf like patterns and textures calculator textures computers keypads and then skies and baby faces dogs so so there are clearly capturing all these high-level omission of features so another version of CPC was to try it on language will not really go into the details here but it but at that time the it was competitive with skip top vectors which had similar similar ideas like predicting the future sentence when we give him the pass", "start_timestamp": "01:40:48", "end_timestamp": "01:41:27", "start_second": 6048, "end_second": 6087, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=6048s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "so CPC was able to somewhat be competitive 
or match those numbers, but not really that good. Finally, it was also applied in reinforcement learning. In reinforcement learning you can think of improving your data efficiency by allowing your model or agent to learn way faster by performing these unsupervised auxiliary tasks in parallel along with your reward optimization, and the authors tried to use contrastive losses as the auxiliary losses and were able to see some gains in sample efficiency. We will not", "start_timestamp": "01:41:27", "end_timestamp": "01:42:05", "start_second": 6087, "end_second": 6125, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=6087s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "really go into the details here either. So that's basically it for CPC version 1, or CPC as it is called, because the fundamental ideas were put forth in this paper. But like I said, the numbers are not that great yet, right: if you look at these numbers, the linear classifier with the ResNet gets 48.7%, whereas a supervised learner with the ResNet-v2 architecture typically gets something like 76 percent top-one, so the gap was really high, around twenty-eight or twenty-nine percent, and so", "start_timestamp": "01:42:05", "end_timestamp": "01:42:45", "start_second": 6125, "end_second": 6165, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=6125s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "that needs to be addressed. Until that's addressed, self-supervised learning is not really worth doing for practical ImageNet classification, or for the lofty goal we started off with, which is: hey, we just want to learn
features from data without labels such that you can get a similar quality of features. This is clearly far away, and that was the goal in CPC version two: to address that gap. This is work I did during my internship at DeepMind with Aäron van den Oord, where we basically took CPC", "start_timestamp": "01:42:45", "end_timestamp": "01:43:22", "start_second": 6165, "end_second": 6202, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=6165s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "version one and kept on hacking the architecture, the different hyperparameters needed to get all the various patches, and added a lot of detail in terms of data augmentations, to see how far we could push the numbers. And what we ended up with was that we actually were able to match or sometimes beat supervised learning on various downstream tasks. I want to go through the details here. You've already looked at this, where you grid an image into a bunch of patches and you encode every single patch using a really", "start_timestamp": "01:43:22", "end_timestamp": "01:44:00", "start_second": 6202, "end_second": 6240, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=6202s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "deep ResNet. Earlier you saw that a ResNet-101-v2 with the first three stacks was being used in the first CPC version; this ResNet is much deeper, a ResNet-161 with 2x width in the third stack, so it has around 4,000 features at the end. Once you do that, you get the embeddings for every single patch and you process that with a PixelCNN; this PixelCNN is 2x wider than
the PixelCNN used in the original work, and now you're trying to predict the bottom patches just like in the original work, but here we just use one offset: we only", "start_timestamp": "01:44:00", "end_timestamp": "01:44:39", "start_second": 6240, "end_second": 6279, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=6240s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "predict two rows below and nothing else. Like you already saw in the audio experiment, doing lots of predictions can hurt you if the amount of information shared is much lower, and also doing predictions where the information overlap is much closer will hurt you, so you need to pick the prediction step very carefully depending on your crop sizes and so forth. So we only predict two rows below and nothing else. These are your context and latent vectors, and you have the same kind of scoring function, which", "start_timestamp": "01:44:39", "end_timestamp": "01:45:11", "start_second": 6279, "end_second": 6311, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=6279s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "is a bilinear product, and you optimize this with a nonparametric softmax over the negatives, and like I said, how you sample the negatives is really crucial. You can sample negatives by taking negatives from other patches within the same image, or you can take patches from other images, and we have a version called all-neg, which is basically taking all the possible negatives you can construct, which is all the patches in your whole mini-batch: your mini-batch will be a bunch of images, and
each", "start_timestamp": "01:45:11", "end_timestamp": "01:45:46", "start_second": 6311, "end_second": 6346, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=6311s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "image is now a bunch of patches you just take all of them as your negatives in this particular loss so that way you get a lot of negatives this whole stack of optimization is like in general it doesn't matter how you construct the negatives of positives whether they use patches or not but just this whole framework because in general refer to as the info and see a loss like you very basically construct contexts and targets and try to use contrast objectives to optimize for the associations and the implementation is really parallel", "start_timestamp": "01:45:46", "end_timestamp": "01:46:19", "start_second": 6346, "end_second": 6379, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=6346s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "because we can just use a pixel CNN with mass convolutions and you do the predictions at every single local position using a one by one composition so the recipe for CPC v2 is this train on unlabeled image net train as long as possible so we trained for 500 bucks and this basically takes you like approximately a week and you augment every single local patch with a lot of species and color augmentations so like I already mentioned in the doors work on relative position prediction making a lot of spatial jitters is really useful", "start_timestamp": "01:46:19", "end_timestamp": "01:46:55", "start_second": 6379, "end_second": 6415, "url": 
"https://www.youtube.com/watch?v=dMUes74-nYY&t=6379s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "so we take that to the extreme in this work and use all possible augmentations and the effect a number of negatives that you have is number of instances in your mini batch times the number of patches for instance so unlike the earlier work which gridded the image into 7x7 grid of 64 by 64 we actually use much bigger patches and much bigger images like we used to 80 by 280 images and 80 by 80 crops so that that gave us 6 by 6 grid and with an overlap of start of 36 or something is that and so so that way the number of negatives is", "start_timestamp": "01:46:55", "end_timestamp": "01:47:32", "start_second": 6415, "end_second": 6452, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=6415s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "approximately 600 so so the fact is like we don't have a lot of negatives but all these negatives are really hard because they're coming from other practices within the same image so it's it's a mix of like instance negatives as for the spacial negatives and it learns both kind of discriminative features so this is basically a diagram that illustrates this whole pipeline where you perform you you you have this feature extractor which is the rest net1 61 running on patches of images and you train the cells provision objective which is CPC", "start_timestamp": "01:47:32", "end_timestamp": "01:48:09", "start_second": 6452, "end_second": 6489, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=6452s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep 
Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "and you do that for a long time once the model is trained there are various ways in which you can evaluate the kind of features that you learn one is to put a linear classifier on top as was already done in the past the other is you just take the ResNet that you trained and instead of freezing those features what if you can actually fine-tune it on a classification task which is to say that instead of training a linear classifier on all available labels what if you're allowed to put a small model on top", "start_timestamp": "01:48:09", "end_timestamp": "01:48:45", "start_second": 6489, "end_second": 6525, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=6489s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "fine-tuning the entire stack where the situation is you're not going to be given all the labels you're just going to be given a small percentage of the labels so you're allowed to train on all the unlabeled data you have but when you begin to perform supervised learning you're going to be given labeled data in different shards you're not going to be given all of it though we also have benchmarks where you have all of it but in general imagine the scenario where you can perform classification even with like 1%", "start_timestamp": "01:48:45", "end_timestamp": "01:49:17", "start_second": 6525, "end_second": 6557, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=6525s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "of the labels which is like 10 images 
per class on ImageNet and you can also do transfer learning where you're given a completely different dataset now you take this ResNet throw it on that dataset and perform new tasks which could be something like PASCAL where you just take the ResNet you got from CPC and you throw it on object detection just like regular computer vision benchmarking so that's basically the goal and we will see how all these things work out if you do a lot of engineering so CPC v2", "start_timestamp": "01:49:17", "end_timestamp": "01:49:54", "start_second": 6557, "end_second": 6594, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=6557s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "linear classifier score remember that CPC v1 got 48.7% so just compare CPC v1 and CPC v2 CPC v2 gets 71.5% which is significantly larger than 48.7% and around that time a lot of competitive approaches were published with really good linear classifier scores as well like BigBiGAN which is a large-scale BiGAN pushed it up to 61% AMDIM was another technique also using something very similar to CPC that pushed it up to 68% and all these different methods go for the same approaches just make your models as", "start_timestamp": "01:49:54", "end_timestamp": "01:50:35", "start_second": 6594, "end_second": 6635, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=6594s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "wide as possible or as deep as possible or a combination of both optimize for a really long time and use a lot of augmentations and careful engineering and CPC was the first method to sharply improve and get all the way up to 70-plus and in the top rows the different models were using different encoders so it's really hard to see what is helping there on the bottom you can see that you just use the same ResNet-50 encoder and then you compare across methods and CPC is better than all the existing methods", "start_timestamp": "01:50:35", "end_timestamp": "01:51:13", "start_second": 6635, "end_second": 6673, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=6635s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "this was the story when CPC version 2 was published but it's no longer the case there's also a baseline here in this table called momentum contrast which we'll cover as the next topic but note that momentum contrast has also improved a lot from the numbers presented in this table so on data-efficient image recognition which is you take the CPC features and fine-tune for supervised learning where you can actually control the amount of labeled data you have CPC version 2 is able to perform significantly better than just", "start_timestamp": "01:51:13", "end_timestamp": "01:51:47", "start_second": 6673, "end_second": 6707, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=6673s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "doing supervised learning so the red line is basically for that corresponding percentage of labeled data you just do supervised learning you just take a ResNet and train it on whatever labels you have so you can clearly see that that works really well if you have all the labels but as you reduce the amount of labeled data the ResNet performance 
keeps going down so supervised learning is really really data-hungry as far as the number of labels you have is concerned whereas if you do unsupervised learning on all", "start_timestamp": "01:51:47", "end_timestamp": "01:52:16", "start_second": 6707, "end_second": 6736, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=6707s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "the unlabeled data you have and only collect labeled data for that corresponding percentage you can see how much gain it's giving you especially in the low-data regime so you basically need eighty percent fewer labels to match the same accuracy that supervised learning gets so with just ten images per class you're close to 80 percent top-five accuracy on image classification which is the standard set by AlexNet and you can also see that the supervised state of the art is matched with around twenty", "start_timestamp": "01:52:16", "end_timestamp": "01:52:53", "start_second": 6736, "end_second": 6773, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=6736s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "to thirty percent of the labels so this way self-supervised learning is actually letting you be very data-efficient for supervised learning and it also means you need to hire very few data annotators now instead of collecting 10,000 labels you're collecting something like 2,000 labels or fewer so your data annotation is much faster because you already have a great set of features and the most interesting thing is even when you have all 
the labels that is when you have 100% of", "start_timestamp": "01:52:53", "end_timestamp": "01:53:27", "start_second": 6773, "end_second": 6807, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=6773s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "the labels your performance from pre-training and then fine-tuning is better than just doing supervised learning so there is no argument not to use unsupervised learning because even if you have all the labels the performance you get by doing unsupervised learning and then performing supervised learning as a fine-tuning step is higher than what you get by just doing supervised learning and it's uniformly consistent across all the labeled-data regimes so here's a good graph to understand how CPC version 1", "start_timestamp": "01:53:27", "end_timestamp": "01:53:59", "start_second": 6807, "end_second": 6839, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=6807s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "interpolated to version 2 and you can see how much each improvement axis on the x-axis added as far as the linear classification accuracy goes and basically LN refers to layer norm and layer norm helps a lot and BU refers to bottom-up predictions instead of just using top-down predictions which is basically saying that hey instead of just predicting the bottom rows from the top why not also do the other way and that also helps a lot and augmenting every single patch which is referred to as PA helps a lot", "start_timestamp": "01:53:59", "end_timestamp": "01:54:36", "start_second": 6839, "end_second": 6876, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=6839s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "and so on for the various different improvement axes which you can refer to the paper for like for instance using bigger patches helps a lot and so on so from 48 or 49 percent you're able to go close to 72 percent just by focusing on the engineering details and doing large-scale optimization really well and that's really the success story of self-supervised learning just do the simple things right and you'll get really good numbers this is a table that shows the graph you saw in", "start_timestamp": "01:54:36", "end_timestamp": "01:55:16", "start_second": 6876, "end_second": 6916, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=6876s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "numbers and you can see that in every single data regime even if you train the deepest possible architecture for the supervised baseline which is a ResNet-200 you're able to improve by 1% in the top five or more than that 1.3 percent with self-supervised pre-training and you can also see that in the low-data regime your top-five accuracies are so good that they are even better than methods that have used very heavily engineered semi-supervised pipelines which we'll cover in a future lecture", "start_timestamp": "01:55:16", "end_timestamp": "01:55:52", "start_second": 6916, "end_second": 6952, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=6916s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "so using unlabeled data to improve the data efficiency of low-label-regime supervised learning is not just specific to self-supervised learning it can also be done using other methods like semi-supervised learning and those numbers represent methods that use label propagation pseudo-labeling unsupervised data augmentation and so forth we will not cover that today it will be covered in a future lecture and those are also very", "start_timestamp": "01:55:52", "end_timestamp": "01:56:24", "start_second": 6952, "end_second": 6984, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=6952s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "interesting but they involve more handcrafted details as to how you go about doing simultaneous loss optimizations whereas self-supervised pre-training is simpler you just train the model once and then use it everywhere so it's much more elegant that way so here are the final PASCAL VOC numbers which is also a benchmark that people have cared about in terms of transfer learning for self-supervised learning there's always been this notion that self-supervised learning is only considered to work if the", "start_timestamp": "01:56:24", "end_timestamp": "01:57:01", "start_second": 6984, "end_second": 7021, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=6984s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "features you get from self-supervised learning are going to transfer to something other than classification like object detection on a dataset where you don't have a lot of labels like PASCAL and for a long time people believed that you could never beat the supervised baseline so if you look at the supervised baseline mean average precision with the ResNet-152 backbone you get seventy four point seven on PASCAL VOC the 2007 version of the dataset and you can look at all these self-supervised methods in the", "start_timestamp": "01:57:01", "end_timestamp": "01:57:33", "start_second": 7021, "end_second": 7053, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=7021s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "next row including momentum contrast which was the first method that got above supervised and got to seventy four point nine and the Faster R-CNN trained on the CPC version two features gets seventy six point six mean average precision which is even better so that goes to say that self-supervised learning can actually work even better than supervised learning for downstream tasks so even if you collect a lot of labeled data you may not actually be able", "start_timestamp": "01:57:33", "end_timestamp": "01:58:08", "start_second": 7053, "end_second": 7088, "url": 
"https://www.youtube.com/watch?v=dMUes74-nYY&t=7053s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "to get these numbers because these models are actually learning a lot more about the data so now that you've looked at the principle of contrastive predictions and contrastive learning and seen the benefits of it actually working at scale more people interested in just the image domain started looking at contrastive learning and people asked the question hey this contrastive learning is cool but do we actually need all these patches inherently patches are hard to deal with because when you", "start_timestamp": "01:58:08", "end_timestamp": "01:58:47", "start_second": 7088, "end_second": 7127, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=7088s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "actually grid an image into patches you're increasing your batch size by a lot and even if you reduce your image size it's not particularly good to pre-train with much smaller images and fine-tune with larger images and secondly you also want to make sure that you use batch norm during your pre-training and when you do something like CPC using batch norm is much harder because you don't want information to mix in your PixelCNNs so therefore people wanted to have a version of contrastive learning that", "start_timestamp": "01:58:47", "end_timestamp": "01:59:21", "start_second": 7127, "end_second": 7161, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=7127s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep 
Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "just worked at an instance level where the context is just one version of your image the target is another version of the same image and negatives are just any other images so in this case it could be like hey you just take a picture of a dog you perform one data augmentation to it which is just applying grayscale perform another data augmentation to it which is you flip it and take a particular random crop and any other image would be a negative for this particular anchor-positive pair so what does this actually learn", "start_timestamp": "01:59:21", "end_timestamp": "01:59:53", "start_second": 7161, "end_second": 7193, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=7161s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "you're basically trying to learn that hey the legs are absent in the other version and so you're trying to learn that there's a dog here depending on the amount of random cropping and data augmentation you use the level of cheating you can afford to identify the two things as the same gets lower and lower and therefore you're forced to learn good features to make sure you identify that two different images presented to you are actually fundamentally the same thing compared to any other", "start_timestamp": "01:59:53", "end_timestamp": "02:00:25", "start_second": 7193, "end_second": 7225, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=7193s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "image and that way you're able to learn really 
rich features it's very similar to the CPC idea done at a patch level except that you're not actually doing any spatial prediction all you're trying to do is identify another version of the same image and this is in general referred to as the principle of instance discrimination and two recent papers have really taken this far one paper is called MoCo or momentum contrast and the other paper is called SimCLR or a simple framework for contrastive learning of representations for vision and we're", "start_timestamp": "02:00:25", "end_timestamp": "02:00:59", "start_second": 7225, "end_second": 7259, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=7225s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "just going to look at these two papers though there are a lot of other papers that have competed with these papers in the recent past but these two papers are the simplest and cleanest and also the most functional in terms of state-of-the-art metrics so first let's look at momentum contrast for unsupervised visual representation learning this is a paper by Kaiming He who was also the inventor of ResNets and Faster R-CNN Mask R-CNN and so forth so the way it works is as follows you basically characterize", "start_timestamp": "02:00:59", "end_timestamp": "02:01:34", "start_second": 7259, "end_second": 7294, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=7259s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "contrastive learning as a dictionary lookup task where you're saying okay hey I want to identify if two things are the same so I present one thing treated as a query and whatever I want to pair it with is also 
present among a bunch of keys I have and there are also lots of other keys which could serve as negatives and I want to identify the right positive among this bunch of keys so you encode your query you encode all your keys and you compute the pairwise similarities you know the true target because you know the", "start_timestamp": "02:01:34", "end_timestamp": "02:02:12", "start_second": 7294, "end_second": 7332, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=7294s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "ground truth for which is the real other augmentation of the same image and then you just build this contrastive loss and backprop so that's basically the idea of instance discrimination so where does momentum come in here the idea of momentum is that you can use a slowly varying encoder you basically have an encoder which is used to encode your queries but your keys are going to use a Polyak-averaged that is historically averaged version of the same encoder and this gives you lots of", "start_timestamp": "02:02:12", "end_timestamp": "02:02:48", "start_second": 7332, "end_second": 7368, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=7332s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "
really efficient because you need a lot of negatives now that you're doing things at the instance level you don't have patches so what if", "start_timestamp": "02:02:48", "end_timestamp": "02:03:20", "start_second": 7368, "end_second": 7400, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=7368s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "you use a memory bank where you have a buffer of previous embeddings and then you just use that buffer as your negatives so for that buffer to function well it needs to be reasonably historically average version of your current encoder so that it can contrast well if it's just your current encoder then the previous embedding stored in the buffer or not rather than anymore so that basically so the idea for moko where you basically take an original image oh and you split it into queries in keys which is the two different", "start_timestamp": "02:03:20", "end_timestamp": "02:03:53", "start_second": 7400, "end_second": 7433, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=7400s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "augmentations and you encode both of them you encode the targets using your momentum encoder and you construct these query in keys you know the true target and you just optimize with the contrast loss note that this contrast loss in vocal doesn't use the bilinear product it just uses a unit norm vectors of all these in bearings with the temperature softmax and that that version also works pretty well as long as you can pick the right temperature and this is actually the pseudocode for vocal written in rape in a Python style and I think it's best", 
"start_timestamp": "02:03:53", "end_timestamp": "02:04:32", "start_second": 7433, "end_second": 7472, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=7433s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "understood through this so f_q and f_k are the encoder networks for the query and key you have a queue which is your memory bank of negatives and you have a momentum coefficient and a temperature parameter so initially you make the key encoder and the query encoder the same to start with and every time you load a mini-batch you construct two different augmentations of it the query and the key and you forward the query and the key to get the embeddings using the query and key encoders and you
7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "for that and get get get the scores for the positives and for negatives you just use your cue or the memory bank of negatives that you have and you just take your query encoder and you just compute the paralyzed our products with all of this memory bank negatives and now you know that you have the positives and the negatives for your influency loss you just concatenate them and then throw across and repeat laws by creating the true labels and users JT to optimize and and finally you have to perform the momentum update for your key encoder so", "start_timestamp": "02:05:38", "end_timestamp": "02:06:14", "start_second": 7538, "end_second": 7574, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=7538s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "that your momentum encoder is slowly changing over time so that's the idea in moko and your cue or the memory bank is a first in first out queue so every time you are your plura upload a new batch of negative in bearings you are you also have to like take it take the the least reason keen a bad-size number of negatives out of the buffer and as long as you can do all this without any mistakes this will work and these are the different ways in which you can do instance contrasted learning one is you just do it end to", "start_timestamp": "02:06:14", "end_timestamp": "02:06:52", "start_second": 7574, "end_second": 7612, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=7574s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": 
"https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "end don't care about this momentum just use your current mini box as your negatives use the same encoder for the queries in the keys and back prop the gradients to every everything so that is the end of a notion the other version is hey I don't you know you you just say I count referred a really large bat size I want to use a memory bank of negatives and that would stay that would keep changing dynamically but then I can I'm going to use a lot more negatives that way so that's interesting and then you you cannot backprop to the memory bank", "start_timestamp": "02:06:52", "end_timestamp": "02:07:23", "start_second": 7612, "end_second": 7643, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=7612s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "but what you can do is you just applaud your most reason negatives every time you pass forward a new mini batch you just collect those and bearings and you just thank you that your memory bank so that has a problem because like I said if it's not just if it's not changing over time dynamically then then it's possible that your turn encoders in bearings can be correlating very easily with the most recent negatives and you can just ignore all the other negatives on your memory bank and so that way you're not actually taking true", "start_timestamp": "02:07:23", "end_timestamp": "02:08:00", "start_second": 7643, "end_second": 7680, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=7643s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "advantage of the number of negatives and the final 
version is the mokele illusion where you basically use a momentum encoder and then you use that for in queueing and D queuing and you use the you don't actually back prop to it and you only back for up to your queries and you have a lot of you you get the best of using you know just a regular end-to-end version but you also make sure that you can afford a large batch size effectively so here's the local plot where the number of negatives basically is increased over time like is", "start_timestamp": "02:08:00", "end_timestamp": "02:08:38", "start_second": 7680, "end_second": 7718, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=7680s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "increased a different iteration increase for different runs of the same algorithm like we have three different versions the moco memory bank and n 2n and increasing the number of negatives and enter in the authors only did it up 2024 because increasing it beyond that needs a lot more GPUs a lot more TP of course you know because it's basically your maths at global back size so having a global back size a thousand 24 is the maximum you can afford right now in 8/8 GPU Volta dgx so the authors didn't expand further on that but as we see", "start_timestamp": "02:08:38", "end_timestamp": "02:09:16", "start_second": 7718, "end_second": 7756, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=7718s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "very immediately after this the sims ii as our technique is basically the same thing as n 2n technique in moco but the only difference is they were they scale the top to use bigger batch sizes and use a lot 
more TPU cores. So MoCo scales gracefully with the number of negatives, which is basically the size of your memory bank, and you can see that the benefits are there: as you keep increasing the number of negatives, the linear classification accuracy on the y-axis keeps going up, which means you're learning better representations. So when", "start_timestamp": "02:09:16", "end_timestamp": "02:09:54", "start_second": 7756, "end_second": 7794, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=7756s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "MoCo was published it was the state of the art for linear classification; CPC version 2 was not out yet, and those updated results were already presented in the previous slide. MoCo had really good linear classifier accuracy: when they made the ResNet 4x wider they got 68% top-1, and there is a nice plot of how the number of parameters plays a significant role in giving you better top-1 accuracy for the linear classifier. So finally we look at this paper called SimCLR, a simple framework", "start_timestamp": "02:09:54", "end_timestamp": "02:10:36", "start_second": 7794, "end_second": 7836, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=7794s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "for contrastive learning, by Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. The best way to understand SimCLR, now that you already know CPC and MoCo, is that it adopts instance discrimination and goes for the end-to-end mechanism described in MoCo. So if you look at the end-to-end plot in figure (a) here, you are backpropagating through 
both the query and the keys, you are sharing the same encoder for the queries and the keys, and your negatives are just coming from your batch. That's really the exact same", "start_timestamp": "02:10:36", "end_timestamp": "02:11:12", "start_second": 7836, "end_second": 7872, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=7836s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "thing adopted in SimCLR: you just take x, your input image mini-batch, perform two different augmentations to it to get x_i and x_j, pass them to the same ResNet encoder to get h_i and h_j, and there is another MLP head which takes these mean-pooled ResNet embeddings and puts them into a lower dimension for the contrastive loss. Then you just optimize the same InfoNCE loss with unit vectors, similar to MoCo. So the new innovation in SimCLR is this g", "start_timestamp": "02:11:12", "end_timestamp": "02:11:49", "start_second": 7872, "end_second": 7909, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=7872s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "function: there is a transformation g that takes the ResNet embedding and puts it into the latent space where the contrastive loss is performed. Earlier versions like CPC and MoCo do not make use of any depth here; when you take the embeddings of your context and your targets, they basically just use a one-by-one convolution to reduce the channel dimension and perform the contrastive loss in a lower dimension. In SimCLR you are using a few layers of MLP to transform the 2048-dimensional vector that 
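The projection head g(·) described here, a small MLP mapping the ResNet embedding down to the low-dimensional latent where the contrastive loss is computed, could look roughly like this. This is an illustrative NumPy sketch with stand-in weight arrays and hidden sizes, not SimCLR's actual implementation:

```python
import numpy as np

def projection_head(h, W1, b1, W2, b2):
    """g(.): maps a high-dimensional encoder embedding h (e.g. 2048-d from
    a ResNet) down to a small latent (e.g. 128-d) for the contrastive loss.
    W1/b1/W2/b2 are stand-in parameters for a 2-layer MLP with ReLU."""
    hidden = np.maximum(h @ W1 + b1, 0.0)   # hidden layer with ReLU
    return hidden @ W2 + b2                 # low-dimensional projection z
```

The representation kept for downstream tasks is h itself; z is only used while optimizing the contrastive objective.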
you get from", "start_timestamp": "02:11:49", "end_timestamp": "02:12:29", "start_second": 7909, "end_second": 7949, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=7909s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "the ResNet into something much smaller, like 128 dimensions, for the contrastive optimization. That's basically the difference, and it really helped a lot. So here is SimCLR's main algorithm. Another interesting fact about SimCLR is that if you're going to use your own batch for your positives and negatives, you can basically flip what is a query and what is a key: if you take a batch, you do two different augmentations, one of them becomes the query and the other becomes the key, but which", "start_timestamp": "02:12:29", "end_timestamp": "02:13:06", "start_second": 7949, "end_second": 7986, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=7949s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "one is the query and which one is the key is totally arbitrary, so why not just do it both ways? They actually do both ways, and that determines the loss, which is L(2k-1, 2k) + L(2k, 2k-1); they just flip the order. Then you can just use large-batch gradient descent: they use the LARS optimizer with batch sizes of 2048 or 4096 on a cloud TPU, and they're able to perform this optimization really fast. They also train much longer: just like CPC was trained for 500", "start_timestamp": "02:13:06", "end_timestamp": "02:13:44", "start_second": 7986, "end_second": 8024, "url": 
"https://www.youtube.com/watch?v=dMUes74-nYY&t=7986s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "epochs, they actually train for many more epochs, and they add a few more data augmentations like Gaussian blur, which helps a lot, and randomize the order of data augmentations. SimCLR was published just two weeks ago, actually 2 to 2.5 weeks ago; it got state-of-the-art performance on the ImageNet linear classifier. You can see that just with the ResNet-50, where CPC version 2 got 63.8% and MoCo had around 60.6%, SimCLR took it all the way to 69%, a huge jump. And when they made the ResNet wider, making", "start_timestamp": "02:13:44", "end_timestamp": "02:14:22", "start_second": 8024, "end_second": 8062, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=8024s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "the ResNet 4x wider, you get more features, and linear classification now gets all the way up to 76.5%, which is as good as a supervised learner can be. It is using more features, more parameters, so in the same-parameter scenario it's not as good as supervised, but if you can afford more parameters it's almost as good as supervised. This kind of result was lacking for a long, long time: in image classification, self-supervised learning was always lagging behind supervised learning. So", "start_timestamp": "02:14:22", "end_timestamp": "02:14:58", "start_second": 8062, "end_second": 8098, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=8062s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep 
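The symmetric contrastive loss discussed in this part of the lecture, InfoNCE over unit vectors where both directions L(2k-1, 2k) and L(2k, 2k-1) are counted, can be sketched in a few lines. A toy NumPy sketch, not SimCLR's implementation; it assumes rows 2k and 2k+1 of `z` are the two augmented views of the same image, and the temperature 0.5 is an arbitrary placeholder:

```python
import numpy as np

def nt_xent(z, tau=0.5):
    """Symmetric InfoNCE over 2N projected embeddings: each sample's
    log-probability of picking its augmentation partner among all other
    samples, averaged over both directions l(i, j) and l(j, i)."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # unit vectors
    sim = z @ z.T / tau                                # cosine similarities / temperature
    np.fill_diagonal(sim, -np.inf)                     # never contrast a sample with itself
    n = z.shape[0]
    partner = np.arange(n) ^ 1                         # 0<->1, 2<->3, ...
    logprob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -logprob[np.arange(n), partner].mean()
```

Because the mean runs over all 2N rows, each pair contributes both L(2k-1, 2k) and L(2k, 2k-1), which is exactly the "do it both ways" symmetry described above.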
Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "SimCLR was the first to show that something at this level can exist. So here is a more detailed study: you could ask the question, hey, what if you made the supervised model wider as well, would supervised improve as much as self-supervised? It turns out not to be true: a regular supervised ResNet gets 76%, but a wider one doesn't actually improve a lot more; it gets to 77-78%, but not more. So the gap between self-supervised and supervised narrows further as the number of parameters grows. The hypothesis", "start_timestamp": "02:14:58", "end_timestamp": "02:15:41", "start_second": 8098, "end_second": 8141, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=8098s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "of the authors is that supervised models cannot particularly benefit from more data augmentations or more parameters, whereas self-supervised models can actually benefit from that, because they're trying to do something much harder. So when you can actually afford bigger models, you would rather go for something like self-supervised learning instead of supervised learning, if you can continue making more improvements. So after SimCLR was published, the fact that the MLP head helped a lot made the", "start_timestamp": "02:15:41", "end_timestamp": "02:16:16", "start_second": 8141, "end_second": 8176, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=8141s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "MoCo authors consider using MLP 
heads as well, doing a little more engineering and more hyperparameter sweeps, training longer, and using the same kind of data augmentations that SimCLR used. So the MoCo authors came up with a reply to SimCLR, in some sense, and called their model MoCo version 2. On the left you can see what is basically SimCLR, which is the end-to-end version of MoCo; on the right you can see the MoCo version. The appealing thing about MoCo is that it basically lets you use a lot of", "start_timestamp": "02:16:16", "end_timestamp": "02:16:51", "start_second": 8176, "end_second": 8211, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=8176s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "negatives without actually using large batches, because of its memory buffer. So when you add these ideas, the MLP head (which is the g function), plus the Gaussian blur augmentation used in SimCLR, and another detail that was used in SimCLR, the cosine learning rate decay, which MoCo didn't use; if you add all these details, then MoCo's linear classifier accuracy with just a ResNet-50 encoder improves all the way up to 71.1%, which is two percent better than SimCLR's result, and", "start_timestamp": "02:16:51", "end_timestamp": "02:17:30", "start_second": 8211, "end_second": 8250, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=8211s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "it also improves the detection results on PASCAL VOC. So just by training longer and getting all these extra details right, the MoCo authors were able to get very 
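The cosine learning-rate decay mentioned among the MoCo v2 details is, in its usual form, a smooth schedule from the base learning rate down to zero. A small sketch of that standard schedule; the base learning rate 0.03 here is just a placeholder, not a value claimed by the lecture:

```python
import math

def cosine_lr(step, total_steps, base_lr=0.03):
    """Cosine learning-rate decay: starts at base_lr at step 0 and
    decays smoothly to 0 at total_steps, with no sharp drops."""
    return 0.5 * base_lr * (1 + math.cos(math.pi * step / total_steps))
```

Compared to step-wise decay, the rate changes a little at every step, which is one of the "small details" credited with the MoCo v2 / SimCLR gains.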
impressive self-supervised results using just 8 GPUs and a batch size of 256. That is really the state-of-the-art technique right now. And here's also the ablation between MoCo version 1, MoCo version 2, and SimCLR, and you can clearly see that MoCo version 1 went all the way from sixty point six to sixty seven", "start_timestamp": "02:17:30", "end_timestamp": "02:18:10", "start_second": 8250, "end_second": 8290, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=8250s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "point five, even without training longer: a seven percent improvement just by adding the same details that SimCLR had, which is using MLP heads, cosine learning rate decay, and the extra data augmentations. And from 67.5 it can get another 3.5% improvement by training longer. So at this stage self-supervised learning is really at a point where the amount of engineering detail you pay attention to and the amount of trickery you add in terms of model optimizations, like using clever techniques like memory", "start_timestamp": "02:18:10", "end_timestamp": "02:18:52", "start_second": 8290, "end_second": 8332, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=8290s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "banks, doing very well on large-batch training, what data augmentations you use, what the learning rate decays or optimizers are, those are really the most important parts in getting state-of-the-art numbers. And it's only a matter of time before it's going to match the results of supervised learning on all these benchmarks: 
seventy one point one is still not as good as 76%, so that's still some gap to close, but the rate of progress is high: it's just been two or three months since all these papers came out, and it's been", "start_timestamp": "02:18:52", "end_timestamp": "02:19:32", "start_second": 8332, "end_second": 8372, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=8332s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "dMUes74-nYY", "text": "evolving rapidly. So that's pretty much it for self-supervised learning. We haven't really covered language, self-supervised learning for language, which is arguably the domain where unsupervised or self-supervised pre-training has really taken off, even before all these vision successes, with models like BERT, which is really famous and has 4,000 citations in a year; it is basically already productionized, Google Search already uses it. So those are bigger successes than CPC", "start_timestamp": "02:19:32", "end_timestamp": "02:20:09", "start_second": 8372, "end_second": 8409, "url": "https://www.youtube.com/watch?v=dMUes74-nYY&t=8372s", "title": "Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning", "thumbnail": "https://i.ytimg.com/vi/dMUes74-nYY/maxresdefault.jpg"} {"video_id": "pN34FNbOKXc", "text": "Transcriber: asma youssef Reviewer: Denise RQ What do you think is the key to achieving our goals, our success? Some people suggest things like hard work, focus, persistence. But research shows these are all by-products of something else, something much more powerful that we can all develop. It is this very special something that is really critical to success, and is what I am here to discuss with you today. 
Someone who has achieved great success is Josh Waitzkin, a chess international master and the subject of the movie", "start_timestamp": "00:00:00", "end_timestamp": "00:00:35", "start_second": 0, "end_second": 35, "url": "https://www.youtube.com/watch?v=pN34FNbOKXc&t=0s", "title": "The Power of belief -- mindset and success | Eduardo Briceno | TEDxManhattanBeach", "thumbnail": "https://i.ytimg.com/vi/pN34FNbOKXc/maxresdefault.jpg"} {"video_id": "pN34FNbOKXc", "text": "\"Searching for Bobby Fischer\". Nobody has won all the national chess championships that Josh has. But even more impressive, when he turned 21, he took on the challenge of mastering something completely new and very different from chess: martial arts. He realized that he had learned how to grow and succeed, and he could apply that understanding to other domains. And so, he devoted himself relentlessly to tai chi chuan. And after lots of hard work, many failures, and some broken joints, he became a great martial artist, and he won two world championships.", "start_timestamp": "00:00:35", "end_timestamp": "00:01:16", "start_second": 35, "end_second": 76, "url": "https://www.youtube.com/watch?v=pN34FNbOKXc&t=35s", "title": "The Power of belief -- mindset and success | Eduardo Briceno | TEDxManhattanBeach", "thumbnail": "https://i.ytimg.com/vi/pN34FNbOKXc/maxresdefault.jpg"} {"video_id": "pN34FNbOKXc", "text": "Now he is off to jiu-jitsu. So what does Josh say is the greatest thing ever happened to him? Believe it or not, he says, \"Losing my first national chess championship, because it helped me avoid many of the psychological traps.\" The key trap that Josh avoided was believing that he was special, that he was smarter than other people, and that he didn't have to work hard. He could have thought of himself as a prodigy, but he doesn't think that he has extraordinary intelligence. 
He says, \"The moment we believe that success is determined by an ingrained level of ability,", "start_timestamp": "00:01:16", "end_timestamp": "00:01:55", "start_second": 76, "end_second": 115, "url": "https://www.youtube.com/watch?v=pN34FNbOKXc&t=76s", "title": "The Power of belief -- mindset and success | Eduardo Briceno | TEDxManhattanBeach", "thumbnail": "https://i.ytimg.com/vi/pN34FNbOKXc/maxresdefault.jpg"} {"video_id": "pN34FNbOKXc", "text": "we will be brittle in the face of adversity.\" Josh often quotes Stanford Professor Carol Dweck who discovered that some people see intelligence or abilities as fixed what is called a fixed mindset, while other people see them as Josh does, as qualities that can be developed; a growth mindset. More important, Dr. Dweck discovered that these two different mindsets lead to very different behaviors and results. In a study she did with Dr. Lisa Blackwell, several hundreds seventh graders were surveyed to determine", "start_timestamp": "00:01:55", "end_timestamp": "00:02:27", "start_second": 115, "end_second": 147, "url": "https://www.youtube.com/watch?v=pN34FNbOKXc&t=115s", "title": "The Power of belief -- mindset and success | Eduardo Briceno | TEDxManhattanBeach", "thumbnail": "https://i.ytimg.com/vi/pN34FNbOKXc/maxresdefault.jpg"} {"video_id": "pN34FNbOKXc", "text": "which mindset each student had, and then they were tracked for two years. Results showed that the students with a growth mindset, those who thought they could change their own intelligence increased their grades over time. While those with a fixed mindset did not. You can see the trend, the gap in performance just widens and widens over time. The difference between these two groups: a different perspective on intelligence. 
Other studies have shown similar effects for our mindset about other abilities like problem solving, playing sports, managing people,", "start_timestamp": "00:02:27", "end_timestamp": "00:03:01", "start_second": 147, "end_second": 181, "url": "https://www.youtube.com/watch?v=pN34FNbOKXc&t=147s", "title": "The Power of belief -- mindset and success | Eduardo Briceno | TEDxManhattanBeach", "thumbnail": "https://i.ytimg.com/vi/pN34FNbOKXc/maxresdefault.jpg"} {"video_id": "pN34FNbOKXc", "text": "or anything else you'd like, dancing La Macarena. The key to success is not simply effort, or focus, or resilience, but it is the growth mindset that creates them, the mindset itself is critical. Research shows that when we directly try to build grit or persistence, it's not nearly as effective as addressing the mindset that underlies them. How many of us think of ourselves as not math people, or creative, or sociable, or athletic, or conversely, that we are naturals? If we are to fulfill our potential, we have to start thinking differently.", "start_timestamp": "00:03:01", "end_timestamp": "00:03:40", "start_second": 181, "end_second": 220, "url": "https://www.youtube.com/watch?v=pN34FNbOKXc&t=181s", "title": "The Power of belief -- mindset and success | Eduardo Briceno | TEDxManhattanBeach", "thumbnail": "https://i.ytimg.com/vi/pN34FNbOKXc/maxresdefault.jpg"} {"video_id": "pN34FNbOKXc", "text": "We have to realize we are not chained to our current capabilities. Neuroscience shows the brain is very malleable. And we can change our own ability to think and to perform. In fact, many of the most accomplished people of our era were thought of, by experts, to have no future. People like Charles Darwin, Lucille Ball, Marcel Proust, and many others. But they, along with all great achievers from Mozart to Einstein, built their abilities. 
But the key insight I would like you to walk away with today is that when we realize that,", "start_timestamp": "00:03:40", "end_timestamp": "00:04:14", "start_second": 220, "end_second": 254, "url": "https://www.youtube.com/watch?v=pN34FNbOKXc&t=220s", "title": "The Power of belief -- mindset and success | Eduardo Briceno | TEDxManhattanBeach", "thumbnail": "https://i.ytimg.com/vi/pN34FNbOKXc/maxresdefault.jpg"} {"video_id": "pN34FNbOKXc", "text": "when we realize we can change our own abilities, when we have a growth mindset, we bring our game to new levels. So how does a growth mindset do that? It turns out that there are physiological manifestations to mindset. Brain scans show that for people with a fixed mindset, the brain becomes most active when receiving information about how the person performed such as a grade or a score. But for people with a growth mindset, the brain becomes most active when receiving information about what they could do better next time.", "start_timestamp": "00:04:14", "end_timestamp": "00:04:43", "start_second": 254, "end_second": 283, "url": "https://www.youtube.com/watch?v=pN34FNbOKXc&t=254s", "title": "The Power of belief -- mindset and success | Eduardo Briceno | TEDxManhattanBeach", "thumbnail": "https://i.ytimg.com/vi/pN34FNbOKXc/maxresdefault.jpg"} {"video_id": "pN34FNbOKXc", "text": "In other words, people with a fixed mindset worry the most about how they are judged, while those with a growth mindset focus the most on learning. There are other consequences of mindset: people with a fixed mindset see effort as a bad thing, something that only people with low capabilities need, while those with a growth mindset see effort as what makes us smart, as the way to grow. And when they hit a set back or a failure, people with a fixed mindset tend to conclude that they are incapable. 
So to protect their ego, they lose interest or withdraw.", "start_timestamp": "00:04:43", "end_timestamp": "00:05:17", "start_second": 283, "end_second": 317, "url": "https://www.youtube.com/watch?v=pN34FNbOKXc&t=283s", "title": "The Power of belief -- mindset and success | Eduardo Briceno | TEDxManhattanBeach", "thumbnail": "https://i.ytimg.com/vi/pN34FNbOKXc/maxresdefault.jpg"} {"video_id": "pN34FNbOKXc", "text": "We observe that as lack of motivation. But behind it is a fixed mindset, whereas people with a growth mindset understand that set backs are part of growth. So when they hit one, they find a way around it. Like Josh Waitzkin did when he lost in chess or in martial arts. Research clearly shows these effects of mindset. In one study Dr. Dweck did with Dr. Claudia Mueller, they had children do a set of puzzles, and then they praised the kids. To some of the kids, they said, \"Wow, that's a really good score, you must be smart at this.\"", "start_timestamp": "00:05:17", "end_timestamp": "00:05:50", "start_second": 317, "end_second": 350, "url": "https://www.youtube.com/watch?v=pN34FNbOKXc&t=317s", "title": "The Power of belief -- mindset and success | Eduardo Briceno | TEDxManhattanBeach", "thumbnail": "https://i.ytimg.com/vi/pN34FNbOKXc/maxresdefault.jpg"} {"video_id": "pN34FNbOKXc", "text": "That's fixed mindset praise because it portrays intelligence or abilities as a fixed quality. To other kids they said, \"Wow, that's a really good score, you must have tried really hard.\" That's growth mindset praise because it focuses on the process. Then, they asked the kids, \"OK, what kind of puzzle would you like to do next? An easy one or a hard one?\" The majority of the kids who received the fixed mindset praise chose to do the easy puzzle. 
While the great majority of those who received the growth mindset praise", "start_timestamp": "00:05:50", "end_timestamp": "00:06:20", "start_second": 350, "end_second": 380, "url": "https://www.youtube.com/watch?v=pN34FNbOKXc&t=350s", "title": "The Power of belief -- mindset and success | Eduardo Briceno | TEDxManhattanBeach", "thumbnail": "https://i.ytimg.com/vi/pN34FNbOKXc/maxresdefault.jpg"} {"video_id": "pN34FNbOKXc", "text": "chose to challenge themselves. Then the researchers gave a hard puzzle to all of the kids because they were interested in seeing what confronting difficulty would do to their performance. Look at what happened when the kids later went back to the set of easier problems that they started with. The kids who received the fixed mindset praise did significantly worse than they had originally, while those who received a growth mindset praise did better. And to top it off, at the very end, kids were asked to report their scores;", "start_timestamp": "00:06:20", "end_timestamp": "00:06:51", "start_second": 380, "end_second": 411, "url": "https://www.youtube.com/watch?v=pN34FNbOKXc&t=380s", "title": "The Power of belief -- mindset and success | Eduardo Briceno | TEDxManhattanBeach", "thumbnail": "https://i.ytimg.com/vi/pN34FNbOKXc/maxresdefault.jpg"} {"video_id": "pN34FNbOKXc", "text": "and the kids who received the fixed mindset praise lied about their scores over three times more often than those who received the growth mindset praise. They did not have another way to cope with their failure. The difference between these two groups: one short little sentence. How often do we praise kids for being smart or for being great at something? We have been told that this will raise their self-esteem. But instead, it puts them in a fixed mindset. 
They become afraid of challenges, and they lose confidence when things get hard.", "start_timestamp": "00:06:51", "end_timestamp": "00:07:26", "start_second": 411, "end_second": 446, "url": "https://www.youtube.com/watch?v=pN34FNbOKXc&t=411s", "title": "The Power of belief -- mindset and success | Eduardo Briceno | TEDxManhattanBeach", "thumbnail": "https://i.ytimg.com/vi/pN34FNbOKXc/maxresdefault.jpg"} {"video_id": "pN34FNbOKXc", "text": "As Josh Waitzkin says, \"It is incredibly important for parents to make their feedback process-related as opposed to praising or criticizing talent. If we win because we are winners, then when we lose, it must make us losers.\" These studies show not only the mechanisms by which mindset affects performance, but they also show something else that is very important: they show that we can change mindsets, and that's important, because most of us have fixed mindsets about something. Another study that showed that we can change mindsets", "start_timestamp": "00:07:26", "end_timestamp": "00:07:59", "start_second": 446, "end_second": 479, "url": "https://www.youtube.com/watch?v=pN34FNbOKXc&t=446s", "title": "The Power of belief -- mindset and success | Eduardo Briceno | TEDxManhattanBeach", "thumbnail": "https://i.ytimg.com/vi/pN34FNbOKXc/maxresdefault.jpg"} {"video_id": "pN34FNbOKXc", "text": "is one in which Dweck and Blackwell did a workshop with seventh graders to instill a growth mindset in them. As a result of the workshop, the students gained more interest in learning, and they worked harder; and as a result of that, their grades improved. 
Other studies have shown that when we teach a growth mindset, not only does it improve achievement for students as a whole, but it also narrows the achievement gap, because the effects are most pronounced for the students who face negative stereotypes such as minority students, and girls in math.", "start_timestamp": "00:07:59", "end_timestamp": "00:08:31", "start_second": 479, "end_second": 511, "url": "https://www.youtube.com/watch?v=pN34FNbOKXc&t=479s", "title": "The Power of belief -- mindset and success | Eduardo Briceno | TEDxManhattanBeach", "thumbnail": "https://i.ytimg.com/vi/pN34FNbOKXc/maxresdefault.jpg"} {"video_id": "pN34FNbOKXc", "text": "I have spoken mostly about children, but mindsets affect all of us. In our workplaces, managers with fixed mindsets don't welcome feedback as much, and they don't mentor employees as much. And employees with growth mindsets about specific skills like negotiations become far better at those skills than people with fixed views. Mindsets can even help us solve big social issues. A recent study showed that when we expose Israelis and Palestinians to the idea that groups can change, they improve their attitudes toward one another,", "start_timestamp": "00:08:31", "end_timestamp": "00:09:05", "start_second": 511, "end_second": 545, "url": "https://www.youtube.com/watch?v=pN34FNbOKXc&t=511s", "title": "The Power of belief -- mindset and success | Eduardo Briceno | TEDxManhattanBeach", "thumbnail": "https://i.ytimg.com/vi/pN34FNbOKXc/maxresdefault.jpg"} {"video_id": "pN34FNbOKXc", "text": "and they enhance their willingness to compromise and to work for peace. We also see the effects of mindsets on relationships, sports, health. How is it possible that as a society, we are not asking schools to develop a growth mindset in children? 
Our myopic efforts to teach them facts, concepts, and even critical thinking skills are likely to fail, if we don't also deliberately teach them the essential beliefs that will allow them to succeed not only in school but also beyond. There is a lot that we can do to change mindsets,", "start_timestamp": "00:09:05", "end_timestamp": "00:09:43", "start_second": 545, "end_second": 583, "url": "https://www.youtube.com/watch?v=pN34FNbOKXc&t=545s", "title": "The Power of belief -- mindset and success | Eduardo Briceno | TEDxManhattanBeach", "thumbnail": "https://i.ytimg.com/vi/pN34FNbOKXc/maxresdefault.jpg"} {"video_id": "pN34FNbOKXc", "text": "but here are three things that any of us can do to instill a growth mindset in ourselves and in those around us. First, recognize that the growth mindset is not only beneficial but it is also supported by science. Neuroscience shows that the brain changes and becomes more capable when we work hard to improve ourselves. Second, learn and teach others about how to develop our abilities. Learn about deliberate practice and what makes for effective effort. 
When we understand how to develop our abilities, we strengthen our conviction that we are in charge of them.", "start_timestamp": "00:09:43", "end_timestamp": "00:10:17", "start_second": 583, "end_second": 617, "url": "https://www.youtube.com/watch?v=pN34FNbOKXc&t=583s", "title": "The Power of belief -- mindset and success | Eduardo Briceno | TEDxManhattanBeach", "thumbnail": "https://i.ytimg.com/vi/pN34FNbOKXc/maxresdefault.jpg"} {"video_id": "g4M7stjzR1I", "text": "you're one of the only people who dared boldly to try to formalize the idea of artificial general intelligence, to have a mathematical framework for intelligence, as we mentioned, termed AIXI. So let me ask the basic question: what is AIXI? OK, so let me first say what it stands for, because what the letters stand for, that's probably the more basic question, but the first question is usually how it's pronounced; finally I put it on the website how it's pronounced, and you figured it out. Yeah, the name comes from", "start_timestamp": "00:00:00", "end_timestamp": "00:00:45", "start_second": 0, "end_second": 45, "url": "https://www.youtube.com/watch?v=g4M7stjzR1I&t=0s", "title": "Marcus Hutter: What is AIXI? 
| AI Podcast Clips", "thumbnail": "https://i.ytimg.com/vi/g4M7stjzR1I/maxresdefault.jpg"} {"video_id": "g4M7stjzR1I", "text": "AI, artificial intelligence, and the XI is the Greek letter xi, which I used for Solomonoff's distribution, for quite stupid reasons which I'm not willing to repeat here in front of the camera, so it just happened to be more or less arbitrary that I chose the xi. But it also has nice other interpretations: there are actions and perceptions in this model, an agent has actions and perceptions over time, so this is A index i, X index i, so the action at time i, and then followed by the perception at time i. We'll go with that; I'll leave out the first part. Yes,", "start_timestamp": "00:00:45", "end_timestamp": "00:01:25", "start_second": 45, "end_second": 85, "url": "https://www.youtube.com/watch?v=g4M7stjzR1I&t=45s", "title": "Marcus Hutter: What is AIXI? | AI Podcast Clips", "thumbnail": "https://i.ytimg.com/vi/g4M7stjzR1I/maxresdefault.jpg"} {"video_id": "g4M7stjzR1I", "text": "I'm just kidding, I have some more interpretations. So at some point, maybe five or ten years ago, I discovered that in Barcelona, it was in a big church, there was some text engraved in stone, and the word AIXI appeared there. I was very surprised and happy about it, and I looked it up: it is the Catalan language, and it means, with some interpretation, that's the right thing to do. Yeah. Eureka. So it's almost like destiny, it somehow came to you in a dream. Also, there's a Chinese word, 'ai xi',", "start_timestamp": "00:01:25", "end_timestamp": "00:02:07", "start_second": 85, "end_second": 127, "url": "https://www.youtube.com/watch?v=g4M7stjzR1I&t=85s", "title": "Marcus Hutter: What is AIXI? 
| AI Podcast Clips", "thumbnail": "https://i.ytimg.com/vi/g4M7stjzR1I/maxresdefault.jpg"} {"video_id": "g4M7stjzR1I", "text": "also written as 'aixi' if you transcribe it in pinyin. Then the final one is AI crossed with induction, and this goes more to the content now: good old-fashioned AI is more about, you know, planning in known, deterministic worlds, and induction is more about, often, you know, i.i.d. data and inferring models, and essentially what this AIXI model does is combine these two. And I actually also recently heard, I think, that in Japanese 'ai' means love, so if you can combine AIXI somehow with that, I think there might be", "start_timestamp": "00:02:07", "end_timestamp": "00:02:42", "start_second": 127, "end_second": 162, "url": "https://www.youtube.com/watch?v=g4M7stjzR1I&t=127s", "title": "Marcus Hutter: What is AIXI? | AI Podcast Clips", "thumbnail": "https://i.ytimg.com/vi/g4M7stjzR1I/maxresdefault.jpg"} {"video_id": "g4M7stjzR1I", "text": "some interesting ideas there. So let's then take the next step: can you maybe talk at the big level about what this mathematical framework is? Yeah, so it consists essentially of two parts: one is the learning, induction, and prediction part, and the other one is the planning part. So let's come first to the learning, induction, and prediction part, which I essentially explained already before. What we need for any agent to act well is that it can somehow predict what happens. I mean, if you have no idea what your actions do, how can you decide which", "start_timestamp": "00:02:42", "end_timestamp": "00:03:21", "start_second": 162, "end_second": 201, "url": "https://www.youtube.com/watch?v=g4M7stjzR1I&t=162s", "title": "Marcus Hutter: What is AIXI? 
| AI Podcast Clips", "thumbnail": "https://i.ytimg.com/vi/g4M7stjzR1I/maxresdefault.jpg"} {"video_id": "g4M7stjzR1I", "text": "acts are good or not? So you need to have some model of what your actions affect. So what you do is: you have some experience, you build models, like scientists, you know, of your experience, then you hope these models are roughly correct, and then you use these models for prediction. And the model is, sorry to interrupt, the model is based on your perception of the world, how your actions will affect the world. Well, that is technically important, but at this stage we can just think about predicting, say, stock market", "start_timestamp": "00:03:21", "end_timestamp": "00:03:53", "start_second": 201, "end_second": 233, "url": "https://www.youtube.com/watch?v=g4M7stjzR1I&t=201s", "title": "Marcus Hutter: What is AIXI? | AI Podcast Clips", "thumbnail": "https://i.ytimg.com/vi/g4M7stjzR1I/maxresdefault.jpg"} {"video_id": "g4M7stjzR1I", "text": "data, weather data, or IQ sequences: one, two, three, four, five, what comes next? Yeah, so of course our actions affect what we're doing, but I'll come back to that in a second. And I'll keep interrupting: just to draw a line between prediction and planning, what do you mean by prediction in this way? Is it trying to predict the environment without your long-term actions in the environment? What is prediction? Okay, if you want to put the actions in now, okay, then let's put them in now. Yes. So, another question. Okay, so the simplest form", "start_timestamp": "00:03:53", "end_timestamp": "00:04:33", "start_second": 233, "end_second": 273, "url": "https://www.youtube.com/watch?v=g4M7stjzR1I&t=233s", "title": "Marcus Hutter: What is AIXI? 
| AI Podcast Clips", "thumbnail": "https://i.ytimg.com/vi/g4M7stjzR1I/maxresdefault.jpg"} {"video_id": "g4M7stjzR1I", "text": "of prediction is that you just have data which you passively observe, and you want to predict what happens without, you know, interfering. As I said: weather forecasting, the stock market, IQ sequences, or just anything. Okay, and Solomonoff's theory of induction is based on compression: you look for the shortest program which describes your data sequence, and then you take this program, run it, it reproduces your data sequence by definition, and then you let it continue running, and it will produce some predictions, and you can", "start_timestamp": "00:04:33", "end_timestamp": "00:05:05", "start_second": 273, "end_second": 305, "url": "https://www.youtube.com/watch?v=g4M7stjzR1I&t=273s", "title": "Marcus Hutter: What is AIXI? | AI Podcast Clips", "thumbnail": "https://i.ytimg.com/vi/g4M7stjzR1I/maxresdefault.jpg"} {"video_id": "g4M7stjzR1I", "text": "rigorously prove that for any prediction task, this is essentially the best possible predictor. Of course, if there's a prediction task which is unpredictable, like, you know, fair coin flips, I cannot predict the next coin flip, but Solomonoff says, okay, the next head is probably 50 percent; it's the best you can do. So if something is unpredictable, Solomonoff will also not magically predict it, but if there is some pattern and predictability, then Solomonoff induction will figure that out eventually, and not just eventually,", "start_timestamp": "00:05:05", "end_timestamp": "00:05:38", "start_second": 305, "end_second": 338, "url": "https://www.youtube.com/watch?v=g4M7stjzR1I&t=305s", "title": "Marcus Hutter: What is AIXI? 
| AI Podcast Clips", "thumbnail": "https://i.ytimg.com/vi/g4M7stjzR1I/maxresdefault.jpg"} {"video_id": "g4M7stjzR1I", "text": "but rather quickly, and you can prove convergence rates, whatever your data is. So it's pure magic in a sense. What's the catch? Well, the catch is that it is not computable, and we'll come back to that later. You cannot just implement it, even with Google's resources here, and run it and, you know, predict the stock market and become rich. I mean, Ray Solomonoff already, you know, tried it at the time. But the basic task is, you know, you're in the environment and you interact with the environment to try to learn a model of the environment, and the", "start_timestamp": "00:05:38", "end_timestamp": "00:06:09", "start_second": 338, "end_second": 369, "url": "https://www.youtube.com/watch?v=g4M7stjzR1I&t=338s", "title": "Marcus Hutter: What is AIXI? | AI Podcast Clips", "thumbnail": "https://i.ytimg.com/vi/g4M7stjzR1I/maxresdefault.jpg"} {"video_id": "g4M7stjzR1I", "text": "model is in the space of all these programs, and your goal is to get a bunch of programs that are simple. And so let's go to the actions now. Actually, good that you asked; usually I skip this part, although it is also a minor contribution which I made, the action part, but people usually just jump to the decision part. So let me explain the action part, thanks for asking. You have to modify it a little bit: now you're not just predicting a sequence which just comes to you, but you have an observation, then you act somehow,", "start_timestamp": "00:06:09", "end_timestamp": "00:06:39", "start_second": 338, "end_second": 399, "url": "https://www.youtube.com/watch?v=g4M7stjzR1I&t=369s", "title": "Marcus Hutter: What is AIXI? 
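Solomonoff induction itself is incomputable, as Hutter notes just above, but the underlying compression intuition, predict whatever keeps the data most compressible, can be sketched with an ordinary compressor standing in (very loosely) for the incomputable shortest program. This is purely an illustrative proxy, not anything from the conversation; `zlib` is just a convenient real compressor:

```python
import zlib

def compressed_len(s: str) -> int:
    # Crude stand-in for Kolmogorov complexity: the length of the
    # zlib-compressed string, a real compressor instead of the
    # incomputable shortest program.
    return len(zlib.compress(s.encode()))

def predict_next(history: str, alphabet: str) -> str:
    # Predict the symbol whose continuation keeps the whole sequence
    # most compressible -- the spirit of "run the shortest program onward".
    return min(alphabet, key=lambda c: compressed_len(history + c))

# A periodic sequence: the most compressible continuation keeps the pattern.
print(predict_next("ab" * 50, "ab"))  # prints "a"
```

On a periodic sequence the pattern-continuing symbol wins; on truly random data, as he says, no compressor helps and the choice is essentially arbitrary.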
| AI Podcast Clips", "thumbnail": "https://i.ytimg.com/vi/g4M7stjzR1I/maxresdefault.jpg"} {"video_id": "g4M7stjzR1I", "text": "and then you want to predict the next observation based on the past observations and your action. Then you take the next action; you don't care about predicting it, because you're doing it. And then you get the next observation, and before you get it, you want to predict it again, based on your past action and observation sequence. You just condition extra on your actions. There's an interesting alternative: you could also try to predict your own actions, if you want. Oh, in the past or in the future? What are your future actions?", "start_timestamp": "00:06:39", "end_timestamp": "00:07:14", "start_second": 399, "end_second": 434, "url": "https://www.youtube.com/watch?v=g4M7stjzR1I&t=399s", "title": "Marcus Hutter: What is AIXI? | AI Podcast Clips", "thumbnail": "https://i.ytimg.com/vi/g4M7stjzR1I/maxresdefault.jpg"} {"video_id": "g4M7stjzR1I", "text": "Wait, let me wrap my head around that; I think my brain just broke. We should maybe discuss that later, after I've explained the AIXI model. That's an interesting variation, but it's a really interesting variation. And a quick comment, I don't know if you want to insert that in here, but in terms of observations, you're looking at the entire big history, the long history of the observations. Exactly, that's very important: the whole history, from birth, sort of, of the agent. And we can come back to why this is important. Here,", "start_timestamp": "00:07:14", "end_timestamp": "00:07:44", "start_second": 434, "end_second": 464, "url": "https://www.youtube.com/watch?v=g4M7stjzR1I&t=434s", "title": "Marcus Hutter: What is AIXI? 
| AI Podcast Clips", "thumbnail": "https://i.ytimg.com/vi/g4M7stjzR1I/maxresdefault.jpg"} {"video_id": "g4M7stjzR1I", "text": "often, you know, in RL you have MDPs, Markov decision processes, which are much more limiting. Okay, so now we can predict conditioned on actions, even if they influence the environment. But prediction is not all we want to do, right? We also want to act in the world, and the question is how to choose the actions, and we don't want to greedily choose the actions, you know, just what is best in the next time step. First I should say, you know, how do we measure performance? We measure performance by giving the agent", "start_timestamp": "00:07:44", "end_timestamp": "00:08:16", "start_second": 464, "end_second": 496, "url": "https://www.youtube.com/watch?v=g4M7stjzR1I&t=464s", "title": "Marcus Hutter: What is AIXI? | AI Podcast Clips", "thumbnail": "https://i.ytimg.com/vi/g4M7stjzR1I/maxresdefault.jpg"} {"video_id": "g4M7stjzR1I", "text": "reward. That's the so-called reinforcement learning framework: every time step you can give it a positive reward, or a negative reward, or maybe no reward. It could be very scarce, right? Like, if you play chess, just at the end of the game you give +1 for winning or -1 for losing. In the AIXI framework that's completely sufficient. So occasionally you give a reward signal, and you ask the agent to maximize reward, but not greedily, sort of, you know, the next one and the next one, because that's very bad in the long run if you're greedy. So,", "start_timestamp": "00:08:16", "end_timestamp": "00:08:44", "start_second": 496, "end_second": 524, "url": "https://www.youtube.com/watch?v=g4M7stjzR1I&t=496s", "title": "Marcus Hutter: What is AIXI? 
| AI Podcast Clips", "thumbnail": "https://i.ytimg.com/vi/g4M7stjzR1I/maxresdefault.jpg"} {"video_id": "g4M7stjzR1I", "text": "but over the lifetime of the agent. So let's assume the agent lives for m time steps, say it dies in, sort of, a hundred years sharp; that's just, you know, the simplest model to explain. So it looks at the future reward sum and asks: what is my action sequence, or actually, more precisely, my policy, which leads in expectation, because we don't know the world, to the maximum reward sum? Let me give you an analogy. In chess, for instance, we know how to play optimally in theory: it's just a minimax strategy. I play the move which seems best to me", "start_timestamp": "00:08:44", "end_timestamp": "00:09:18", "start_second": 524, "end_second": 558, "url": "https://www.youtube.com/watch?v=g4M7stjzR1I&t=524s", "title": "Marcus Hutter: What is AIXI? | AI Podcast Clips", "thumbnail": "https://i.ytimg.com/vi/g4M7stjzR1I/maxresdefault.jpg"} {"video_id": "g4M7stjzR1I", "text": "under the assumption that the opponent plays the move which is best for him, so best for him, worst for me, and under the assumption that I again play the best move. Then you have this expectimax tree down to the end of the game, and then you backpropagate, and then you get the best possible move. So that is the optimal strategy, which von Neumann already figured out a long time ago, for playing adversarial games. Luckily, or maybe unluckily for the theory, it becomes harder: the world is not always adversarial. It can be, if the other", "start_timestamp": "00:09:18", "end_timestamp": "00:09:50", "start_second": 558, "end_second": 590, "url": "https://www.youtube.com/watch?v=g4M7stjzR1I&t=558s", "title": "Marcus Hutter: What is AIXI? 
| AI Podcast Clips", "thumbnail": "https://i.ytimg.com/vi/g4M7stjzR1I/maxresdefault.jpg"} {"video_id": "g4M7stjzR1I", "text": "humans are adversarial; or it can be cooperative; or nature, which is usually, I mean, dead nature is stochastic: you know, things just happen randomly, or don't care about you. So what you have to take into account is noise, and not necessarily adversarial behavior. So you replace the minimum on the opponent's side by an expectation, which is general enough to also include the adversarial cases. So now, instead of a minimax strategy, you have an expectimax strategy. So far so good. That is well known; it's called sequential decision theory. But the", "start_timestamp": "00:09:50", "end_timestamp": "00:10:22", "start_second": 590, "end_second": 622, "url": "https://www.youtube.com/watch?v=g4M7stjzR1I&t=590s", "title": "Marcus Hutter: What is AIXI? | AI Podcast Clips", "thumbnail": "https://i.ytimg.com/vi/g4M7stjzR1I/maxresdefault.jpg"} {"video_id": "g4M7stjzR1I", "text": "question is: on which probability distribution do you base that? If I have the true probability distribution, like, say, I play backgammon, right, there's dice and there's certain randomness involved, you know, I can calculate the probabilities, feed them into the expectimax or the sequential decision procedure, and come up with the optimal decision, if I have enough compute. But for the real world, we don't know that. You know, what is the probability the driver in front of me brakes? I don't know; it depends on all kinds of things. And especially in new", "start_timestamp": "00:10:22", "end_timestamp": "00:10:52", "start_second": 622, "end_second": 652, "url": "https://www.youtube.com/watch?v=g4M7stjzR1I&t=622s", "title": "Marcus Hutter: What is AIXI? 
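The step described here from minimax to expectimax, replacing the opponent's min node by an expectation over a chance node, can be sketched on a toy game tree. The tree shape and reward numbers below are invented purely for illustration:

```python
# Toy expectimax: max nodes pick the best child; chance nodes average
# their children under given probabilities. This generalizes minimax,
# where the environment would instead pick the minimum.

def expectimax(node):
    kind = node[0]
    if kind == "leaf":
        return node[1]                                   # terminal reward
    if kind == "max":                                    # agent's decision node
        return max(expectimax(c) for c in node[1])
    if kind == "chance":                                 # stochastic environment node
        return sum(p * expectimax(c) for p, c in node[1])

# Agent chooses between a safe action (sure reward 3) and a gamble
# (reward 10 with prob 0.5, reward 0 with prob 0.5 -> expectation 5).
tree = ("max", [
    ("leaf", 3),
    ("chance", [(0.5, ("leaf", 10)), (0.5, ("leaf", 0))]),
])
print(expectimax(tree))  # prints 5.0: the gamble beats the safe payoff
```

Swapping the chance node's expectation for a `min` over children would recover exactly the adversarial minimax case he mentions.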
| AI Podcast Clips", "thumbnail": "https://i.ytimg.com/vi/g4M7stjzR1I/maxresdefault.jpg"} {"video_id": "g4M7stjzR1I", "text": "situations, I don't know. So there is this unknown thing about prediction, and that's where Solomonoff comes in. So what you do is, in sequential decision theory, you just replace the true distribution, which we don't know, by this universal distribution. I didn't yet talk about it, but this is used for universal prediction. Plug it into the sequential decision mechanism, and then you get the best of both worlds: you have a long-term planning agent, but it doesn't need to know anything about the world, because the Solomonoff induction part", "start_timestamp": "00:10:52", "end_timestamp": "00:11:23", "start_second": 652, "end_second": 683, "url": "https://www.youtube.com/watch?v=g4M7stjzR1I&t=652s", "title": "Marcus Hutter: What is AIXI? | AI Podcast Clips", "thumbnail": "https://i.ytimg.com/vi/g4M7stjzR1I/maxresdefault.jpg"} {"video_id": "g4M7stjzR1I", "text": "learns. Can you explicitly try to describe the universal distribution, and how Solomonoff induction plays a role here? I'm trying to understand. Yeah, so what it does is, in the simplest case, I said: take the shortest program describing your data, run it, and you have a prediction, which would be deterministic. Yes. Okay, but you should not just take the shortest program, but also consider the longer ones, just with lower a priori probability. So in the Bayesian framework, you say: a priori, any distribution, which is a model or a stochastic program, has a certain a", "start_timestamp": "00:11:23", "end_timestamp": "00:12:03", "start_second": 683, "end_second": 723, "url": "https://www.youtube.com/watch?v=g4M7stjzR1I&t=683s", "title": "Marcus Hutter: What is AIXI? 
| AI Podcast Clips", "thumbnail": "https://i.ytimg.com/vi/g4M7stjzR1I/maxresdefault.jpg"} {"video_id": "g4M7stjzR1I", "text": "priori probability, which is two to the minus the length of this program, the description length of this program, so longer programs are punished a priori. And then you multiply it with the so-called likelihood function, which, as the name suggests, is how likely this model is, given the data at hand. So if you have a very wrong model, it's very unlikely that this model is true, so it is a very small number; so even if the model is simple, it gets penalized by that. And what you do is then you take just the sum, or this is the weighted average", "start_timestamp": "00:12:03", "end_timestamp": "00:12:37", "start_second": 723, "end_second": 757, "url": "https://www.youtube.com/watch?v=g4M7stjzR1I&t=723s", "title": "Marcus Hutter: What is AIXI? | AI Podcast Clips", "thumbnail": "https://i.ytimg.com/vi/g4M7stjzR1I/maxresdefault.jpg"} {"video_id": "g4M7stjzR1I", "text": "over it, and this gives you a probability distribution: this universal distribution, or Solomonoff distribution. So it's weighted by the simplicity of the program and the likelihood. Yes, it's kind of a nice idea. Yeah. So, okay, and then you said you're playing n or m, I forgot the letter, steps into the future. How difficult is that problem? What's involved there? Okay, so here's a computation problem, what do we do? Yeah, so you have a planning problem up to the horizon m, and that's exponential time in the horizon m, which is, I mean, it's", "start_timestamp": "00:12:37", "end_timestamp": "00:13:12", "start_second": 757, "end_second": 792, "url": "https://www.youtube.com/watch?v=g4M7stjzR1I&t=757s", "title": "Marcus Hutter: What is AIXI? 
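The recipe just described, prior 2^(-length) times likelihood, summed over all models, can be made concrete with a tiny hypothesis class standing in for the space of all programs. The three "programs", their hand-assigned description lengths, and their Bernoulli parameters below are invented for illustration only:

```python
# Toy universal mixture: each "program" is a Bernoulli model for a bit
# sequence, weighted by a 2^-description-length prior. The predictive
# probability of the next bit is the prior-times-likelihood weighted average.

# (model name, description length in bits, P(bit = 1) under the model)
models = [("always0", 2, 0.001), ("fair", 3, 0.5), ("always1", 2, 0.999)]

def likelihood(p1, data):
    # Probability the Bernoulli(p1) model assigns to the observed bits.
    out = 1.0
    for b in data:
        out *= p1 if b == 1 else 1.0 - p1
    return out

def predict_one(data):
    # P(next bit = 1 | data) under the 2^-length prior mixture.
    num = sum(2.0 ** -L * likelihood(p1, data) * p1 for _, L, p1 in models)
    den = sum(2.0 ** -L * likelihood(p1, data) for _, L, p1 in models)
    return num / den

print(predict_one([1] * 20))  # after twenty 1s, "always1" dominates the mixture
```

Wrong models get crushed by their tiny likelihood, simple-but-plausible ones dominate, exactly the simplicity-times-fit weighting described in the conversation.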
| AI Podcast Clips", "thumbnail": "https://i.ytimg.com/vi/g4M7stjzR1I/maxresdefault.jpg"} {"video_id": "g4M7stjzR1I", "text": "computable, but in fact intractable; I mean, even for chess it's already intractable to do that exactly. And, you know, it could also be a discounted kind of framework, or so; having a hard horizon at a fixed number of years is just for simplicity of discussing the model, and also sometimes the math is simpler. But there are lots of variations. It's actually a quite interesting parameter; there's nothing really problematic about it, but it's very interesting. So, for instance, you think, no, let's let the parameter m tend to infinity, right?", "start_timestamp": "00:13:12", "end_timestamp": "00:13:46", "start_second": 792, "end_second": 826, "url": "https://www.youtube.com/watch?v=g4M7stjzR1I&t=792s", "title": "Marcus Hutter: What is AIXI? | AI Podcast Clips", "thumbnail": "https://i.ytimg.com/vi/g4M7stjzR1I/maxresdefault.jpg"} {"video_id": "g4M7stjzR1I", "text": "You want an agent which lives forever. All right, if you do it naively, you have two problems. First, the mathematics breaks down, because you have an infinite reward sum which may give infinity: getting reward 0.1 every time step gives infinity, and getting reward 1 every time step also gives infinity, so they're equally good; not really what we want. The other problem is that if you have an infinite life, you can be lazy for as long as you want, for ten years, and then catch up with the same expected reward. And, you know, think about yourself, or, you know,", "start_timestamp": "00:13:46", "end_timestamp": "00:14:18", "start_second": 826, "end_second": 858, "url": "https://www.youtube.com/watch?v=g4M7stjzR1I&t=826s", "title": "Marcus Hutter: What is AIXI? 
| AI Podcast Clips", "thumbnail": "https://i.ytimg.com/vi/g4M7stjzR1I/maxresdefault.jpg"} {"video_id": "g4M7stjzR1I", "text": "maybe, you know, some friends or so: if they knew they lived forever, you know, why work hard now? You know, just enjoy your life and then catch up later. So that's another problem with an infinite horizon. And, as you mentioned, yes, we can go to discounting. But then the standard discounting is so-called geometric discounting, so a dollar today is worth about as much as, you know, a dollar and five cents tomorrow. So if you do this geometric discounting, you have introduced an effective horizon: the agent is now motivated to look ahead", "start_timestamp": "00:14:18", "end_timestamp": "00:14:49", "start_second": 858, "end_second": 889, "url": "https://www.youtube.com/watch?v=g4M7stjzR1I&t=858s", "title": "Marcus Hutter: What is AIXI? | AI Podcast Clips", "thumbnail": "https://i.ytimg.com/vi/g4M7stjzR1I/maxresdefault.jpg"} {"video_id": "g4M7stjzR1I", "text": "a certain amount of time, effectively; it's like a moving horizon. And for any fixed effective horizon, there is a problem to solve which requires a larger horizon. So if I look ahead, you know, five time steps, I'm a terrible chess player, right? I need to look ahead longer; if I play Go, I probably have to look ahead even longer. So for every horizon, there is a problem which this horizon cannot solve. Yes. But I introduced the so-called near-harmonic horizon, which goes down with one over t, rather than", "start_timestamp": "00:14:49", "end_timestamp": "00:15:22", "start_second": 889, "end_second": 922, "url": "https://www.youtube.com/watch?v=g4M7stjzR1I&t=889s", "title": "Marcus Hutter: What is AIXI? 
| AI Podcast Clips", "thumbnail": "https://i.ytimg.com/vi/g4M7stjzR1I/maxresdefault.jpg"} {"video_id": "g4M7stjzR1I", "text": "exponentially in t, and which produces an agent which effectively looks into the future proportionally to its age. So if it's five years old, it plans for five years; if it's a hundred years old, it then plans for a hundred years. Interesting, and a little bit similar to humans too, right? My children don't look ahead very long, but as we get adult we look ahead longer; and maybe when we get very old, I mean, we know that we don't live forever, maybe then our horizon shrinks again. So, just adjusting the horizon, is there some", "start_timestamp": "00:15:22", "end_timestamp": "00:15:52", "start_second": 922, "end_second": 952, "url": "https://www.youtube.com/watch?v=g4M7stjzR1I&t=922s", "title": "Marcus Hutter: What is AIXI? | AI Podcast Clips", "thumbnail": "https://i.ytimg.com/vi/g4M7stjzR1I/maxresdefault.jpg"} {"video_id": "g4M7stjzR1I", "text": "mathematical benefit of that, or is it just nice? I mean, intuitively, empirically, it's probably a good idea to sort of push the horizon back, to extend the horizon as you experience more of the world. But are there some mathematical conclusions here that are beneficial? With the Solomonoff induction part, the prediction part, we have extremely strong finite-time, or finite-data, results: you have so-and-so much data, then you lose so-and-so much; so the theory there is really great. With the AIXI model, with the planning part, many results are only asymptotic, which, well,", "start_timestamp": "00:15:52", "end_timestamp": "00:16:29", "start_second": 952, "end_second": 989, "url": "https://www.youtube.com/watch?v=g4M7stjzR1I&t=952s", "title": "Marcus Hutter: What is AIXI? 
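The contrast drawn here, geometric discounting yielding a fixed effective horizon versus near-harmonic discounting yielding a horizon that grows with the agent's age, can be checked numerically. The sketch below uses 1/t² weights as an assumed stand-in for the near-harmonic discount, and defines "effective horizon" (my own convention, not from the conversation) as how far ahead the agent must look to cover half of its remaining discounted weight:

```python
def effective_horizon(age, discount, total=10**5):
    # Smallest h such that the discount weights from `age` to `age + h`
    # cover half of all remaining discounted weight (illustrative convention).
    weights = [discount(t) for t in range(age, age + total)]
    remaining = sum(weights)
    acc = 0.0
    for h, w in enumerate(weights):
        acc += w
        if acc >= remaining / 2:
            return h

def geometric(t):
    return 0.95 ** t       # fixed effective horizon, whatever the age

def harmonic(t):
    return 1.0 / t ** 2    # horizon grows roughly in proportion to the age

for age in (10, 100, 1000):
    print(age, effective_horizon(age, geometric), effective_horizon(age, harmonic))
```

With these weights the geometric horizon comes out the same at every age, while the 1/t² horizon scales roughly linearly with age, matching the "plans for five years at five, a hundred at a hundred" behavior described above.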
| AI Podcast Clips", "thumbnail": "https://i.ytimg.com/vi/g4M7stjzR1I/maxresdefault.jpg"} {"video_id": "g4M7stjzR1I", "text": "this is what asymptotic means: you can prove, for instance, that in the long run, if the agent, you know, acts long enough, then it performs optimally, or some nice things happen; but you don't know how fast it converges. So it may converge fast, but we're just not able to prove it because the proof is difficult, or maybe there's a bug in the model so that it is really dead slow. So that is what asymptotic means: sort of, eventually, but we don't know how fast. And if I give the agent a fixed horizon m, then I cannot prove", "start_timestamp": "00:16:29", "end_timestamp": "00:17:04", "start_second": 989, "end_second": 1024, "url": "https://www.youtube.com/watch?v=g4M7stjzR1I&t=989s", "title": "Marcus Hutter: What is AIXI? | AI Podcast Clips", "thumbnail": "https://i.ytimg.com/vi/g4M7stjzR1I/maxresdefault.jpg"} {"video_id": "g4M7stjzR1I", "text": "asymptotic results, right? I mean, sort of, if it dies in a hundred years, then in a hundred years it's over, and I cannot say 'eventually'. So this is the advantage of the discounting, that I can prove asymptotic results. So, just to clarify: okay, I've built up a model, and now in a moment I have this way of looking several steps ahead. How do I pick what action I will take? It's like with playing chess, right? You do this minimax, in this case here expectimax, based on the Solomonoff distribution; you propagate back, and then", "start_timestamp": "00:17:04", "end_timestamp": "00:17:43", "start_second": 1024, "end_second": 1063, "url": "https://www.youtube.com/watch?v=g4M7stjzR1I&t=1024s", "title": "Marcus Hutter: What is AIXI? 
| AI Podcast Clips", "thumbnail": "https://i.ytimg.com/vi/g4M7stjzR1I/maxresdefault.jpg"} {"video_id": "g4M7stjzR1I", "text": "voila, an action falls out: the action which maximizes the future expected reward under the Solomonoff distribution. And then you take this action, and then you repeat: you get a new observation, and you feed it in, this action and observation, then you repeat, and the reward, and so on. Yeah. And then maybe you can even predict your own action. I love that idea. But okay, this big framework, what is it? I mean, it's kind of a beautiful mathematical framework to think about artificial general intelligence. What can", "start_timestamp": "00:17:43", "end_timestamp": "00:18:16", "start_second": 1063, "end_second": 1096, "url": "https://www.youtube.com/watch?v=g4M7stjzR1I&t=1063s", "title": "Marcus Hutter: What is AIXI? | AI Podcast Clips", "thumbnail": "https://i.ytimg.com/vi/g4M7stjzR1I/maxresdefault.jpg"} {"video_id": "g4M7stjzR1I", "text": "you, what does it help you intuit about how to build such systems? Or maybe, from another perspective, what does it help us understand about AGI? So, when I started in the field, I was always interested in two things: one was, you know, AGI, the name didn't exist then, it was called strong AI, and the other was physics, the theory of everything. So I switched back and forth between computer science and physics quite often. You said the theory of everything? The theory of everything, yes, just like that: these are the basic, the biggest problems before all of humanity. Yeah, I can explain,", "start_timestamp": "00:18:16", "end_timestamp": "00:18:58", "start_second": 1096, "end_second": 1138, "url": "https://www.youtube.com/watch?v=g4M7stjzR1I&t=1096s", "title": "Marcus Hutter: What is AIXI? 
| AI Podcast Clips", "thumbnail": "https://i.ytimg.com/vi/g4M7stjzR1I/maxresdefault.jpg"} {"video_id": "g4M7stjzR1I", "text": "if you want, at some later time, you know, why I'm interested in these two questions. Actually, a small tangent: if one were to be solved, which one would you, if an apple fell on your head and there was a brilliant insight and you could arrive at the solution to one, would it be AGI or the theory of everything? Definitely AGI, because once the AGI problem is solved, I can ask the AGI to solve the other problem for me. Yeah, brilliant. Okay, so, as you were saying. Okay, so, and the reason", "start_timestamp": "00:18:58", "end_timestamp": "00:19:37", "start_second": 1138, "end_second": 1177, "url": "https://www.youtube.com/watch?v=g4M7stjzR1I&t=1138s", "title": "Marcus Hutter: What is AIXI? | AI Podcast Clips", "thumbnail": "https://i.ytimg.com/vi/g4M7stjzR1I/maxresdefault.jpg"} {"video_id": "g4M7stjzR1I", "text": "why I didn't settle, I mean, this thought about, you know, once you have solved AGI, it solves all kinds of other problems, not just the theory of everything, but all kinds of more useful problems for humanity, is very appealing to many people, and I had that thought also. But I was quite disappointed with the state of the art of the field of AI. There was some theory, you know, about logical reasoning, but I was never convinced that this would fly. And then there were these more holistic approaches with neural networks, and I", "start_timestamp": "00:19:37", "end_timestamp": "00:20:08", "start_second": 1177, "end_second": 1208, "url": "https://www.youtube.com/watch?v=g4M7stjzR1I&t=1177s", "title": "Marcus Hutter: What is AIXI? 
| AI Podcast Clips", "thumbnail": "https://i.ytimg.com/vi/g4M7stjzR1I/maxresdefault.jpg"} {"video_id": "g4M7stjzR1I", "text": "didn't like these heuristics. And also, I didn't have any good idea myself, so that's the reason why I toggled back and forth quite a while, and I even worked four and a half years in a company developing software, something completely unrelated. But then I had this idea about the AIXI model. And so what it gives you: it gives you a gold standard. I have proven that this is the most intelligent agent which anybody could 'build', in quotation marks, right, because it's just mathematical and you need infinite compute. Yeah, but this is the limit, and", "start_timestamp": "00:20:08", "end_timestamp": "00:20:46", "start_second": 1208, "end_second": 1246, "url": "https://www.youtube.com/watch?v=g4M7stjzR1I&t=1208s", "title": "Marcus Hutter: What is AIXI? | AI Podcast Clips", "thumbnail": "https://i.ytimg.com/vi/g4M7stjzR1I/maxresdefault.jpg"} {"video_id": "g4M7stjzR1I", "text": "it is completely specified; it's not just a framework. You know, every year tens of frameworks are developed which just have skeletons, and then pieces are missing, and usually these missing pieces, you know, turn out to be really, really difficult. So this is completely and uniquely defined, and we can analyze it mathematically, and we've also developed some approximations, I can talk about that a little bit later. That would be sort of the top-down approach, like, say, von Neumann's minimax theory: that's the theoretically optimal", "start_timestamp": "00:20:46", "end_timestamp": "00:21:19", "start_second": 1246, "end_second": 1279, "url": "https://www.youtube.com/watch?v=g4M7stjzR1I&t=1246s", "title": "Marcus Hutter: What is AIXI? 
| AI Podcast Clips", "thumbnail": "https://i.ytimg.com/vi/g4M7stjzR1I/maxresdefault.jpg"} {"video_id": "g4M7stjzR1I", "text": "play of games, and now we need to approximate it, put heuristics in, prune the tree, blah blah blah, and so on. So we can do that also with the AIXI model, but for general AI. It can also inspire those, and most researchers go bottom-up, right? They have their systems, and they try to make them more general, more intelligent, and it can inspire them in which direction to go. What do you mean by that? So if you have some choice to make, right: how should I evaluate my system if I can't do cross-validation? How should I do my learning if my standard", "start_timestamp": "00:21:19", "end_timestamp": "00:21:52", "start_second": 1279, "end_second": 1312, "url": "https://www.youtube.com/watch?v=g4M7stjzR1I&t=1279s", "title": "Marcus Hutter: What is AIXI? | AI Podcast Clips", "thumbnail": "https://i.ytimg.com/vi/g4M7stjzR1I/maxresdefault.jpg"} {"video_id": "g4M7stjzR1I", "text": "regularization doesn't work well? Yeah, so the answer is always: we have a system which does everything; that's AIXI. It's just, you know, completely in the ivory tower, completely useless from a practical point of view, but you can look at it and see, oh yeah, maybe, you know, I can take some aspects, and, you know, instead of Kolmogorov complexity just take some compressors which have been developed so far. And for the planning, well, we have UCT here, which is also, you know, being used in Go. And, at least, it's inspired me a lot", "start_timestamp": "00:21:52", "end_timestamp": "00:22:22", "start_second": 1312, "end_second": 1342, "url": "https://www.youtube.com/watch?v=g4M7stjzR1I&t=1312s", "title": "Marcus Hutter: What is AIXI? 
| AI Podcast Clips", "thumbnail": "https://i.ytimg.com/vi/g4M7stjzR1I/maxresdefault.jpg"} {"video_id": "g4M7stjzR1I", "text": "to have this formal definition and if you look at other fields you know like I always come back to physics because I have a physics background think about the phenomenon of energy that was a long time a mysterious concept and at some point it was completely formalized and that really helped a lot and you can point out a lot of these things which were first mysterious and vague and then they have been rigorously formalized speed and acceleration had been confused until they were formally defined there was a time like this and and people you", "start_timestamp": "00:22:22", "end_timestamp": "00:22:55", "start_second": 1342, "end_second": 1375, "url": "https://www.youtube.com/watch?v=g4M7stjzR1I&t=1342s", "title": "Marcus Hutter: What is AIXI? | AI Podcast Clips", "thumbnail": "https://i.ytimg.com/vi/g4M7stjzR1I/maxresdefault.jpg"} {"video_id": "g4M7stjzR1I", "text": "know often you know who don't have any physics background you know still confuse it so and this AIXI model or the intelligence definition which is sort of the dual to it we come back to that later formalizes the notion of intelligence uniquely and rigorously so in in a sense it serves as kind of the light at the end of the tunnel so yeah so I mean there's a million questions I could ask here so maybe the kind of okay let's feel around in the dark a little bit so there's been here at DeepMind but in general been a lot of breakthrough", "start_timestamp": "00:22:55", "end_timestamp": "00:23:30", "start_second": 1375, "end_second": 1410, "url": "https://www.youtube.com/watch?v=g4M7stjzR1I&t=1375s", "title": "Marcus Hutter: What is AIXI?
| AI Podcast Clips", "thumbnail": "https://i.ytimg.com/vi/g4M7stjzR1I/maxresdefault.jpg"} {"video_id": "g4M7stjzR1I", "text": "ideas just like we've been saying around reinforcement learning so how do you see the progress in reinforcement learning as different like which subset of AIXI does it occupy the current like you said maybe the Markov assumption is made quite often in reinforcement learning there are these other assumptions made in order to make the system work what do you see as the difference and connection between reinforcement learning and AIXI and so the major difference is that essentially all other approaches they make stronger assumptions so in", "start_timestamp": "00:23:30", "end_timestamp": "00:24:09", "start_second": 1410, "end_second": 1449, "url": "https://www.youtube.com/watch?v=g4M7stjzR1I&t=1410s", "title": "Marcus Hutter: What is AIXI? | AI Podcast Clips", "thumbnail": "https://i.ytimg.com/vi/g4M7stjzR1I/maxresdefault.jpg"} {"video_id": "g4M7stjzR1I", "text": "reinforcement learning the Markov assumption is that the next state or next observation only depends on the previous observation and not the whole history which makes of course the mathematics much easier rather than dealing with histories of course they profit from it also because then you have algorithms which run on current computers and do something practically useful but for general AI all the assumptions which are made by other approaches we know already now they are limiting so for instance usually you need an", "start_timestamp": "00:24:09", "end_timestamp": "00:24:42", "start_second": 1449, "end_second": 1482, "url": "https://www.youtube.com/watch?v=g4M7stjzR1I&t=1449s", "title": "Marcus Hutter: What is AIXI?
| AI Podcast Clips", "thumbnail": "https://i.ytimg.com/vi/g4M7stjzR1I/maxresdefault.jpg"} {"video_id": "g4M7stjzR1I", "text": "ergodicity assumption in the MDP framework in order to learn ergodicity essentially means that you can recover from your mistakes and that there are no traps in the environment and if you make this assumption then essentially you can you know go back to a previous state go there a couple of times and then learn the statistics and what the state is like and then in the long run perform well in this state yeah but there are no fundamental problems but in real life we know you know there can be one single", "start_timestamp": "00:24:42", "end_timestamp": "00:25:12", "start_second": 1482, "end_second": 1512, "url": "https://www.youtube.com/watch?v=g4M7stjzR1I&t=1482s", "title": "Marcus Hutter: What is AIXI? | AI Podcast Clips", "thumbnail": "https://i.ytimg.com/vi/g4M7stjzR1I/maxresdefault.jpg"} {"video_id": "g4M7stjzR1I", "text": "action you know one second of being inattentive while driving a car fast you know I can ruin the rest of my life I can become quadriplegic or whatever so and there's no recovery anymore so the real world is not ergodic I always say you know there are traps and there are situations we cannot recover from and very little theory has been developed for this case what about what do you see in the context of AIXI the role of exploration sort of you mentioned you know in the real world we get into trouble and we", "start_timestamp": "00:25:12", "end_timestamp": "00:25:50", "start_second": 1512, "end_second": 1550, "url": "https://www.youtube.com/watch?v=g4M7stjzR1I&t=1512s", "title": "Marcus Hutter: What is AIXI?
| AI Podcast Clips", "thumbnail": "https://i.ytimg.com/vi/g4M7stjzR1I/maxresdefault.jpg"} {"video_id": "g4M7stjzR1I", "text": "make the wrong decisions and really pay for it but exploration seems to be fundamentally important for learning about this world for gaining new knowledge so is exploration baked in another way to ask it what are the parameters of AIXI that can be controlled yeah so the good thing is that there are no parameters to control and some other people like knobs to control and you can do that I mean you can modify AIXI so that you have some knobs to play with if you want to but the exploration is directly baked in and", "start_timestamp": "00:25:50", "end_timestamp": "00:26:27", "start_second": 1550, "end_second": 1587, "url": "https://www.youtube.com/watch?v=g4M7stjzR1I&t=1550s", "title": "Marcus Hutter: What is AIXI? | AI Podcast Clips", "thumbnail": "https://i.ytimg.com/vi/g4M7stjzR1I/maxresdefault.jpg"} {"video_id": "g4M7stjzR1I", "text": "that comes from the Bayesian learning and the long-term planning so these together already imply exploration you can nicely and explicitly prove that for a simple problem like so-called bandit problems where say to give a real good example say you have two medical treatments A and B you don't know the effectiveness you try A a little bit B a little bit but you don't want to harm too many patients so you have to sort of trade off exploring and exploiting and at some point you want to explore and you can do the mathematics and", "start_timestamp": "00:26:27", "end_timestamp": "00:27:07", "start_second": 1587, "end_second": 1627, "url": "https://www.youtube.com/watch?v=g4M7stjzR1I&t=1587s", "title": "Marcus Hutter: What is AIXI?
| AI Podcast Clips", "thumbnail": "https://i.ytimg.com/vi/g4M7stjzR1I/maxresdefault.jpg"} {"video_id": "g4M7stjzR1I", "text": "figure out the optimal strategy for Bayesian agents there are also non-Bayesian agents but it shows that this Bayesian framework by taking a prior over possible worlds doing the Bayesian mixture then the Bayes optimal decision with long term planning that is important automatically implies exploration also to the proper extent not too much exploration and not too little in these very simple settings in the AIXI model I was also able to prove that there is a self-optimizing theorem or asymptotic optimality theorem though only asymptotic not", "start_timestamp": "00:27:07", "end_timestamp": "00:27:42", "start_second": 1627, "end_second": 1662, "url": "https://www.youtube.com/watch?v=g4M7stjzR1I&t=1627s", "title": "Marcus Hutter: What is AIXI? | AI Podcast Clips", "thumbnail": "https://i.ytimg.com/vi/g4M7stjzR1I/maxresdefault.jpg"} {"video_id": "g4M7stjzR1I", "text": "finite time bounds it seems like the long term planning is really important but the long term part of the planning is really important yes and also I mean maybe a quick tangent how important do you think is removing the Markov assumption and looking at the full history sort of intuitively of course it's important but is it like fundamentally transformative to the entirety of the problem what's your sense of it like because we all make that assumption quite often it's just throwing away the past now I think it's", "start_timestamp": "00:27:42", "end_timestamp": "00:28:14", "start_second": 1662, "end_second": 1694, "url": "https://www.youtube.com/watch?v=g4M7stjzR1I&t=1662s", "title": "Marcus Hutter: What is AIXI?
| AI Podcast Clips", "thumbnail": "https://i.ytimg.com/vi/g4M7stjzR1I/maxresdefault.jpg"} {"video_id": "g4M7stjzR1I", "text": "absolutely crucial the question is whether there's a way to deal with it in a more holistic and still sufficiently good way so I have to come up with everything on the fly but you know you have say some you know key event in your life you know a long time ago you know in some city or something you realized you know that's a really dangerous street or whatever right here and you want to remember that forever right in case you come back there's kind of a selective kind of memory so you remember all the important events in the past but somehow", "start_timestamp": "00:28:14", "end_timestamp": "00:28:49", "start_second": 1694, "end_second": 1729, "url": "https://www.youtube.com/watch?v=g4M7stjzR1I&t=1694s", "title": "Marcus Hutter: What is AIXI? | AI Podcast Clips", "thumbnail": "https://i.ytimg.com/vi/g4M7stjzR1I/maxresdefault.jpg"} {"video_id": "g4M7stjzR1I", "text": "selecting the important events see that's very hard yeah and I'm not concerned about you know just storing the whole history you can calculate you know a human life say 30 or 100 years doesn't matter right how much data comes in through the vision system and the auditory system you compress it a little bit in this case losslessly and store it we will soon have the means of just storing it yeah but you still need the selection for the planning part and the compression for the understanding part the raw storage I'm really not concerned", "start_timestamp": "00:28:49", "end_timestamp": "00:29:22", "start_second": 1729, "end_second": 1762, "url": "https://www.youtube.com/watch?v=g4M7stjzR1I&t=1729s", "title": "Marcus Hutter: What is AIXI?
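Hutter's two-medical-treatments example above is a classic Bayesian bandit. As a loose illustration only, here is a sketch of Thompson sampling, a simpler Bayesian bandit heuristic, not Hutter's Bayes-optimal agent with long-term planning; the success rates 0.8/0.2 and the trial count are made-up assumptions:

```python
import random

def thompson_sampling(true_probs, n_trials, seed=0):
    """Bayesian bandit sketch: keep a Beta(1, 1) prior over each arm's
    success rate, sample from each posterior, pull the arm with the
    highest sample, then update that arm's posterior."""
    rng = random.Random(seed)
    # Beta posterior parameters per arm: (successes + 1, failures + 1)
    params = [[1, 1] for _ in true_probs]
    pulls = [0] * len(true_probs)
    for _ in range(n_trials):
        samples = [rng.betavariate(a, b) for a, b in params]
        arm = samples.index(max(samples))  # pull the most promising arm
        pulls[arm] += 1
        if rng.random() < true_probs[arm]:
            params[arm][0] += 1  # observed a success
        else:
            params[arm][1] += 1  # observed a failure
    return pulls

pulls = thompson_sampling([0.8, 0.2], 1000)
```

Exploration is not a separate knob here: it falls out of sampling from the posterior, which echoes Hutter's point that Bayesian learning plus planning already implies exploration to the proper extent.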
| AI Podcast Clips", "thumbnail": "https://i.ytimg.com/vi/g4M7stjzR1I/maxresdefault.jpg"} {"video_id": "LfUsGv-ESbc", "text": "howdy-ho how's it going so today we are going to try out DETR the end to end object detection with transformers from facebook AI research and they have a github repo and they pretty much give you everything like the model the pre trained weights and so on so today we're going to check out how easy it is to get started with that so in order to do that they have like a Colab but we we won't look at it too much I've glanced at it and we'll basically see how far can we go without looking at it too much and how easy is that so what", "start_timestamp": "00:00:00", "end_timestamp": "00:00:38", "start_second": 0, "end_second": 38, "url": "https://www.youtube.com/watch?v=LfUsGv-ESbc&t=0s", "title": "[Code] How to use Facebook's DETR object detection algorithm in Python (Full Tutorial)", "thumbnail": "https://i.ytimg.com/vi/LfUsGv-ESbc/maxresdefault.jpg"} {"video_id": "LfUsGv-ESbc", "text": "I've done is I've spun up a Colab that I will share at the end and I've imported torch and just loaded the model so you don't have to wait for that to happen so I've loaded that up and now we have it in the cache so now we can basically go ahead and load an image into the model and try to detect objects in the image so first of all this is super easy right you simply load this from torch hub it's kind of like the the tensorflow hub you simply give the name of the model you say I want the pre trained please chugga-boom you now have", "start_timestamp": "00:00:38", "end_timestamp": "00:01:12", "start_second": 38, "end_second": 72, "url": "https://www.youtube.com/watch?v=LfUsGv-ESbc&t=38s", "title": "[Code] How to use Facebook's DETR object detection algorithm in Python (Full Tutorial)", "thumbnail": "https://i.ytimg.com/vi/LfUsGv-ESbc/maxresdefault.jpg"} {"video_id": "LfUsGv-ESbc", "text": "a model so if we look at that model this is going to be this
entire DETR model right here with all the transformer and ResNet and whatnot okay this is almost a bit too much right here so what we want is an image so let's go find an image where better to find an image than Google so let's find an image of dogs because dogs is one of the classes in this COCO dataset this one's nice right okay so we want the image address we want to load it in here somehow so so that the URL is let's make this into some sort of like an input thing where", "start_timestamp": "00:01:12", "end_timestamp": "00:01:56", "start_second": 72, "end_second": 116, "url": "https://www.youtube.com/watch?v=LfUsGv-ESbc&t=72s", "title": "[Code] How to use Facebook's DETR object detection algorithm in Python (Full Tutorial)", "thumbnail": "https://i.ytimg.com/vi/LfUsGv-ESbc/maxresdefault.jpg"} {"video_id": "LfUsGv-ESbc", "text": "we can paste the URL right here okay there we go so we have this right here and that's the URL all right no that's not the URL at all is it but a beam but a boom what about cool better now we need to load this for that we gonna use the requests library always a pleasure requests requests so the way to load a binary file is you can put the URL here and you can say streamed here I glanced this from the other thing and the raw entry will get you the eventual bytes no oh sorry get URL streamed stream yeah so this", "start_timestamp": "00:01:56", "end_timestamp": "00:02:58", "start_second": 116, "end_second": 178, "url": "https://www.youtube.com/watch?v=LfUsGv-ESbc&t=116s", "title": "[Code] How to use Facebook's DETR object detection algorithm in Python (Full Tutorial)", "thumbnail": "https://i.ytimg.com/vi/LfUsGv-ESbc/maxresdefault.jpg"} {"video_id": "LfUsGv-ESbc", "text": "will get you the sort of the the bytes of the image and then use just say image dot open and of course we need the Image from the PIL library the Python image library so import Image we got that and we can open that image up and with a bit of luck
yeah yeah so this model expects I think COCO dataset is 640 by 480 images but they if you can see right here and we're gonna take a quick glance at their transforming they resize it to 800 so we're gonna we're gonna steal that part right here people last time where some some found it really funny", "start_timestamp": "00:02:58", "end_timestamp": "00:03:54", "start_second": 178, "end_second": 234, "url": "https://www.youtube.com/watch?v=LfUsGv-ESbc&t=178s", "title": "[Code] How to use Facebook's DETR object detection algorithm in Python (Full Tutorial)", "thumbnail": "https://i.ytimg.com/vi/LfUsGv-ESbc/maxresdefault.jpg"} {"video_id": "LfUsGv-ESbc", "text": "that I called copy pasting to go Suraj so we will from now on call it just Suraj-ing what we also need are the class labels because that's defined in the COCO dataset right so these are the class labels let's take those and okay so this T here these are torchvision transforms we're gonna need that so from say so if you don't know torchvision it's kind of an addition to PyTorch that just helps you with with images and has a lot of data sets and these transforms they're really helpful because so let's", "start_timestamp": "00:03:54", "end_timestamp": "00:04:37", "start_second": 234, "end_second": 277, "url": "https://www.youtube.com/watch?v=LfUsGv-ESbc&t=234s", "title": "[Code] How to use Facebook's DETR object detection algorithm in Python (Full Tutorial)", "thumbnail": "https://i.ytimg.com/vi/LfUsGv-ESbc/maxresdefault.jpg"} {"video_id": "LfUsGv-ESbc", "text": "call this image because you can you know resize but they have much more like random cropping and rotating images and so on pretty much everything you need for pre-training and this here is just the standard ImageNet I believe the ImageNet normalization so these are the means and these are the standard deviations from the ImageNet data set and let's already resize our image actually to this width 800 and I believe I believe if you
rescale the 640 to 800 you get 600 here right fairly sure okay and then let's display it just", "start_timestamp": "00:04:37", "end_timestamp": "00:05:17", "start_second": 277, "end_second": 317, "url": "https://www.youtube.com/watch?v=LfUsGv-ESbc&t=277s", "title": "[Code] How to use Facebook's DETR object detection algorithm in Python (Full Tutorial)", "thumbnail": "https://i.ytimg.com/vi/LfUsGv-ESbc/maxresdefault.jpg"} {"video_id": "LfUsGv-ESbc", "text": "because we can okay what it's it's a bit squished but we don't care and let's put that up here so we only need to execute it once nice okay so from now on it should be a breeze so what these transforms do is they resize the image okay we don't need that anymore they make it into a tensor and then they normalize by that so if we run our image through this because our image right now is this is pill image right so our our image is this pill image but if we run it through the transforms then we'll get a tensor so that's pretty cool so the", "start_timestamp": "00:05:17", "end_timestamp": "00:06:07", "start_second": 317, "end_second": 367, "url": "https://www.youtube.com/watch?v=LfUsGv-ESbc&t=317s", "title": "[Code] How to use Facebook's DETR object detection algorithm in Python (Full Tutorial)", "thumbnail": "https://i.ytimg.com/vi/LfUsGv-ESbc/maxresdefault.jpg"} {"video_id": "LfUsGv-ESbc", "text": "model as it is a deep learning model that expects batches so we'll unscrew is that in the first dimension and then we get batches so shape let's see we don't have on skis no of course we don't so this is a one image of three channels of 600 by 800 so this is the Y index coordinates I guess are shifted yes in pi torch cool so we'll call this our image tensor now we just need to put it into the model so model we put that in there and since we don't let's actually up here put the model in eval mode I don't know if that's already done", "start_timestamp": "00:06:07", "end_timestamp": "00:06:57", "start_second": 367, 
"end_second": 417, "url": "https://www.youtube.com/watch?v=LfUsGv-ESbc&t=367s", "title": "[Code] How to use Facebook's DETR object detection algorithm in Python (Full Tutorial)", "thumbnail": "https://i.ytimg.com/vi/LfUsGv-ESbc/maxresdefault.jpg"} {"video_id": "LfUsGv-ESbc", "text": "but you know you can never be sure enough that the batch norms aren't so I think it probably doesn't have batch norms okay you're not utilizing the GPU we'll do that we'll do that Thanks so how do we use the GPU we put our model on the GPU model equals model CUDA yes yes yes I think so this is gonna work okay we're gonna come back to this later so we forward our image of course we also need that on the GPU and it's worked did this work this worked nice okay and since this is just for evaluation we should probably go with no", "start_timestamp": "00:06:57", "end_timestamp": "00:07:58", "start_second": 417, "end_second": 478, "url": "https://www.youtube.com/watch?v=LfUsGv-ESbc&t=417s", "title": "[Code] How to use Facebook's DETR object detection algorithm in Python (Full Tutorial)", "thumbnail": "https://i.ytimg.com/vi/LfUsGv-ESbc/maxresdefault.jpg"} {"video_id": "LfUsGv-ESbc", "text": "grad right here because we don't need this whole gradient stuff if we do that okay I'm dumb there you go and nothing happens of course because we need to capture the output somehow let's look at that output Wow Wow just wow so the output is a dictionary right because we get back class labels and bounding boxes so let's look at the bread boxes let's look at that tensor that's a tensor very nice let's look at its shape let's not print giant tensors anymore cool so since this was a batch of one we should probably go", "start_timestamp": "00:07:58", "end_timestamp": "00:08:49", "start_second": 478, "end_second": 529, "url": "https://www.youtube.com/watch?v=LfUsGv-ESbc&t=478s", "title": "[Code] How to use Facebook's DETR object detection algorithm in Python (Full Tutorial)", "thumbnail": 
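The loading and inference steps narrated in these chunks can be collected into one sketch. `detr_resnet50` is the torch hub entry point from the facebookresearch/detr repo; the hub call downloads pretrained weights over the network, so it is kept inside a function here rather than run unconditionally:

```python
import torch

def load_detr():
    # downloads pretrained DETR (ResNet-50 backbone) from torch hub --
    # requires network access, hence wrapped in a function
    model = torch.hub.load("facebookresearch/detr", "detr_resnet50", pretrained=True)
    model.eval()  # eval mode, as in the video
    if torch.cuda.is_available():
        model = model.cuda()
    return model

def detect(model, batch):
    # batch: (1, 3, H, W) float tensor; move it to the model's device
    if next(model.parameters()).is_cuda:
        batch = batch.cuda()
    with torch.no_grad():  # evaluation only, no gradients needed
        out = model(batch)
    # out is a dict: 'pred_logits' of shape (1, 100, 92) and
    # 'pred_boxes' of shape (1, 100, 4); drop the batch dimension
    return out["pred_logits"][0], out["pred_boxes"][0]
```

The `no_grad` context and `eval()` call mirror the video's precautions about gradient bookkeeping and batch-norm behavior during pure inference.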
"https://i.ytimg.com/vi/LfUsGv-ESbc/maxresdefault.jpg"} {"video_id": "LfUsGv-ESbc", "text": "the zeroth and you can see right here there is a hundred bounding boxes and each one has four numbers and if you go with the other thing that's in there the logits then you'll see that there also should be a hundred logits and hello there should be a hundred logits and each one is of size 92 because there are 92 different classes 92 we'll see about that well one is going to be the nothing class right by the way how many classes do we have we have 91 classes okay cool we can deal with that all right so what", "start_timestamp": "00:08:49", "end_timestamp": "00:09:42", "start_second": 529, "end_second": 582, "url": "https://www.youtube.com/watch?v=LfUsGv-ESbc&t=529s", "title": "[Code] How to use Facebook's DETR object detection algorithm in Python (Full Tutorial)", "thumbnail": "https://i.ytimg.com/vi/LfUsGv-ESbc/maxresdefault.jpg"} {"video_id": "LfUsGv-ESbc", "text": "are we gonna do next what we want to do is for each of the for each of the for each of the logit predictions we want to find which class it corresponds to so what we're going to do is we're going to take the argmax of the last dimension right so you can see here almost all of these things correspond to class 91 and class 91 is not in our classes because our classes are only length 91 so that must be the nothing class so what we can technically do is for logits and boxes in let's just zip them together and [Music]", "start_timestamp": "00:09:42", "end_timestamp": "00:10:32", "start_second": 582, "end_second": 632, "url": "https://www.youtube.com/watch?v=LfUsGv-ESbc&t=582s", "title": "[Code] How to use Facebook's DETR object detection algorithm in Python (Full Tutorial)", "thumbnail": "https://i.ytimg.com/vi/LfUsGv-ESbc/maxresdefault.jpg"} {"video_id": "LfUsGv-ESbc", "text": "like this okay class is oops class is the logits argmax if that's 92 or let's say safe that's larger than the length of our
classes we'll just skip it for now okay so that should work somehow and if not then our label should be the class index right here so let's just see what the detector detects right here it detects nothing why does it detect nothing that doesn't seem good what are we doing wrong we zip together the logits oh yeah of course we still need the zeroth entry we are dumb dumb dumb cool so so so so we can delete this and", "start_timestamp": "00:10:32", "end_timestamp": "00:12:06", "start_second": 632, "end_second": 726, "url": "https://www.youtube.com/watch?v=LfUsGv-ESbc&t=632s", "title": "[Code] How to use Facebook's DETR object detection algorithm in Python (Full Tutorial)", "thumbnail": "https://i.ytimg.com/vi/LfUsGv-ESbc/maxresdefault.jpg"} {"video_id": "LfUsGv-ESbc", "text": "now finally beautiful dogs two dogs detected excellent so now for each of these dogs we want the bounding box okay so now we somehow need to think of how are we gonna draw this on an image and well let's let's actually make a copy of that image because I don't really trust myself and then at the end of this we're just going to display that image right now actually the reason I make a copy is because in these in this pillow library you can actually draw on these images and we're going to use that to draw these bounding boxes so for", "start_timestamp": "00:12:06", "end_timestamp": "00:12:48", "start_second": 726, "end_second": 768, "url": "https://www.youtube.com/watch?v=LfUsGv-ESbc&t=726s", "title": "[Code] How to use Facebook's DETR object detection algorithm in Python (Full Tutorial)", "thumbnail": "https://i.ytimg.com/vi/LfUsGv-ESbc/maxresdefault.jpg"} {"video_id": "LfUsGv-ESbc", "text": "that we need an ImageDraw if I remember correctly and I think later we also want some text so we need an ImageFont yes all right so let's draw a bounding box right here where so first of all let's look at that bounding box let's call this box box print box dot shape and break right here what's happening
let's not do this right now so this is a boxes of size four now this could be two things it could be X 0 y 0 X 1 Y 1 so the two corner points or the kind of the boundaries or it could be X Y width height now from the paper I know that", "start_timestamp": "00:12:48", "end_timestamp": "00:13:49", "start_second": 768, "end_second": 829, "url": "https://www.youtube.com/watch?v=LfUsGv-ESbc&t=768s", "title": "[Code] How to use Facebook's DETR object detection algorithm in Python (Full Tutorial)", "thumbnail": "https://i.ytimg.com/vi/LfUsGv-ESbc/maxresdefault.jpg"} {"video_id": "LfUsGv-ESbc", "text": "they predict the center and the width and the height so I'm gonna go with that and I'm just gonna guess that it's like X Y WH and not some other way around if this is a bad guess then yeah we'll see we can just print out one of these boxes and honestly that looks reason oh by the way we should scale that up yeah so these are normalized coordinates probably between 0 and 1 so we should scale that up so we should probably the x coordinates which is scaled by 800 and the Y by 600 so let's do it so first of all we scale our box by 800 in the X and", "start_timestamp": "00:13:49", "end_timestamp": "00:14:37", "start_second": 829, "end_second": 877, "url": "https://www.youtube.com/watch?v=LfUsGv-ESbc&t=829s", "title": "[Code] How to use Facebook's DETR object detection algorithm in Python (Full Tutorial)", "thumbnail": "https://i.ytimg.com/vi/LfUsGv-ESbc/maxresdefault.jpg"} {"video_id": "LfUsGv-ESbc", "text": "here is a Y and the width is the X direction and this is the Y Direction boom okay we should probably get that on CPU will just hack together a bunch of things right here ok so now this isn't the correct so we sold our x and y and WH are going to be this box so now we need to actually draw on the image we're gonna do that so let's first go X 0 X 1 is X minus W 1/2 X plus W half y 0 y 1 is the same for a y with H plus H half Coolio now we need an image draw object so I think 
draw on this image so whatever you", "start_timestamp": "00:14:37", "end_timestamp": "00:15:37", "start_second": 877, "end_second": 937, "url": "https://www.youtube.com/watch?v=LfUsGv-ESbc&t=877s", "title": "[Code] How to use Facebook's DETR object detection algorithm in Python (Full Tutorial)", "thumbnail": "https://i.ytimg.com/vi/LfUsGv-ESbc/maxresdefault.jpg"} {"video_id": "LfUsGv-ESbc", "text": "draw on the draw object will end up on the image so we can use that to draw a bounding box and let's just quickly look it up so pill Python draw rectangle maybe there we go okay so there's this rectangle yeah there's the rectangle function and you can see you put in a shape XY here and width height like this wait for real we wouldn't even have to need to transform it I'm pretty sure you can go X I thought I remember you could do the different thing as well but it's called rectangle okay so let's do that so draw rectangle and we'll go we'll go", "start_timestamp": "00:15:37", "end_timestamp": "00:16:29", "start_second": 937, "end_second": 989, "url": "https://www.youtube.com/watch?v=LfUsGv-ESbc&t=937s", "title": "[Code] How to use Facebook's DETR object detection algorithm in Python (Full Tutorial)", "thumbnail": "https://i.ytimg.com/vi/LfUsGv-ESbc/maxresdefault.jpg"} {"video_id": "LfUsGv-ESbc", "text": "X 0 or we'll go X Y width height let's display that down here yeah that looks that looks nothing like we want but it's you know it's a start maybe actually we need the other thing here we need X 0 y 0 X 1 Y 1 mm yes yes doggy okay we still have the break in here now we get both dogs nice nice okay let's do I think Phil yes red and let's go for with five or so five seems like a good width oh god five is a terrible with oh it's not feel I think it's its outline yeah yeah okay okay let's go still go with five cool we got our dogs now we need to", "start_timestamp": "00:16:29", "end_timestamp": "00:17:50", "start_second": 989, "end_second": 1070, "url": 
"https://www.youtube.com/watch?v=LfUsGv-ESbc&t=989s", "title": "[Code] How to use Facebook's DETR object detection algorithm in Python (Full Tutorial)", "thumbnail": "https://i.ytimg.com/vi/LfUsGv-ESbc/maxresdefault.jpg"} {"video_id": "LfUsGv-ESbc", "text": "put like some some snappy text labels I think there's actually a pill image draw text I think that exists because I've this font thing yeah exactly so you need the font thing get it font in there and then yeah exactly you could put a text like this okay so you probably need the x and y coordinates of the text so let's do that W dot text and let's just go with x and y right here put it right in the middle and the text is going to be our label of course and we want the fill that's now going to be the color of the text let's", "start_timestamp": "00:17:50", "end_timestamp": "00:18:39", "start_second": 1070, "end_second": 1119, "url": "https://www.youtube.com/watch?v=LfUsGv-ESbc&t=1070s", "title": "[Code] How to use Facebook's DETR object detection algorithm in Python (Full Tutorial)", "thumbnail": "https://i.ytimg.com/vi/LfUsGv-ESbc/maxresdefault.jpg"} {"video_id": "LfUsGv-ESbc", "text": "go with white and the font we're going to load some font right here font dot how we're doing this true type true type ah no not cheating let's just go with regular fonts it won't look as fancy but we'll be fine so we're where is our text you see it I don't see it red let's make it red yes there we go okay so it wasn't red enough this should work on it so I did we just I just not see it I'm domina cool so we have two dogs how easy was that actually we wasted the most time with like bounding boxes and stuff absolutely cool right okay so now we can", "start_timestamp": "00:18:39", "end_timestamp": "00:19:58", "start_second": 1119, "end_second": 1198, "url": "https://www.youtube.com/watch?v=LfUsGv-ESbc&t=1119s", "title": "[Code] How to use Facebook's DETR object detection algorithm in Python (Full Tutorial)", "thumbnail": 
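The PIL drawing calls being assembled in these chunks, a rectangle with a red outline of width 5 plus a red text label near the middle of the box, can be sketched as follows. `load_default` stands in for the truetype font the narrator gives up on, and the helper name is mine:

```python
from PIL import Image, ImageDraw, ImageFont

def draw_box(img, corners, label):
    # ImageDraw draws in place on the image it is handed,
    # which is why the video works on a copy of the original
    draw = ImageDraw.Draw(img)
    draw.rectangle(corners, outline="red", width=5)  # (x0, y0, x1, y1)
    x0, y0, x1, y1 = corners
    font = ImageFont.load_default()  # stand-in for a truetype font
    # label roughly in the middle of the box, red so it stays visible
    draw.text(((x0 + x1) / 2, (y0 + y1) / 2), label, fill="red", font=font)
    return img

img = draw_box(Image.new("RGB", (200, 200), "white"), (10, 10, 100, 100), "dog")
```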
"https://i.ytimg.com/vi/LfUsGv-ESbc/maxresdefault.jpg"} {"video_id": "LfUsGv-ESbc", "text": "have some fun with it I'm going to scale this down for a bit because you don't need to see the actual code anymore so much so you can see the image more so we'll go to the images and the first thing I want to do is the dress what does this think of the dress okay so we'll copy that and we'll go into our collab and just paste this right here butter boom but a beam sounds nice and what is wrong the size of a tensor must match the size of a tensor we do something wrong transform image or images this maybe this is like an RGBA image I think", "start_timestamp": "00:19:58", "end_timestamp": "00:21:04", "start_second": 1198, "end_second": 1264, "url": "https://www.youtube.com/watch?v=LfUsGv-ESbc&t=1198s", "title": "[Code] How to use Facebook's DETR object detection algorithm in Python (Full Tutorial)", "thumbnail": "https://i.ytimg.com/vi/LfUsGv-ESbc/maxresdefault.jpg"} {"video_id": "LfUsGv-ESbc", "text": "if this is rgba we should just convert it to like an RGB pretty sure you can do something like this right here this should work as an alpha Channel then that will remove it yes now it works okay let's see what the model thinks of this yeah okay apparently there's a car and there's a surfboard and there's a person and there's a person nice see well we didn't figure out whether the dress was blue or white through gold it was just a person now they you could actually like threshold by how sure you are of a given", "start_timestamp": "00:21:04", "end_timestamp": "00:21:59", "start_second": 1264, "end_second": 1319, "url": "https://www.youtube.com/watch?v=LfUsGv-ESbc&t=1264s", "title": "[Code] How to use Facebook's DETR object detection algorithm in Python (Full Tutorial)", "thumbnail": "https://i.ytimg.com/vi/LfUsGv-ESbc/maxresdefault.jpg"} {"video_id": "LfUsGv-ESbc", "text": "class but where's the fun in that so let's go further and let's do some Rorschach inkblots because those 
are always lots and lots of fun so which one should we go for? this one looks like fun okay so we'll put this in here and it's astonishing right this COCO dataset only has these 90 classes like it doesn't have anything else so it's a cake it's a cake and this here what is it okay we'll have to go maybe with blue what is it stop sign okay but so you might think what if we want more like what if we", "start_timestamp": "00:21:59", "end_timestamp": "00:22:58", "start_second": 1319, "end_second": 1378, "url": "https://www.youtube.com/watch?v=LfUsGv-ESbc&t=1319s", "title": "[Code] How to use Facebook's DETR object detection algorithm in Python (Full Tutorial)", "thumbnail": "https://i.ytimg.com/vi/LfUsGv-ESbc/maxresdefault.jpg"} {"video_id": "LfUsGv-ESbc", "text": "want more predictions so there is a hack right right now the model can always assign mass to this not-a-class thing like right here this class 91 in order for it to say I don't think there's anything there but generally we have a hundred predictions right so you see where this is going so yes let's change it but let's change it up a bit and let's go here let's first extract these tensors okay so we have the logits and the boxes okay so we got that what we wanna do is basically we
do something so this must be done in the logits right so we'll look at the logits and the logits are of shape 100 so we have 100 predictions of 92 classes now the first thing we want to do is just remove the last class so let's go", "start_timestamp": "00:23:55", "end_timestamp": "00:24:36", "start_second": 1435, "end_second": 1476, "url": "https://www.youtube.com/watch?v=LfUsGv-ESbc&t=1435s", "title": "[Code] How to use Facebook's DETR object detection algorithm in Python (Full Tutorial)", "thumbnail": "https://i.ytimg.com/vi/LfUsGv-ESbc/maxresdefault.jpg"} {"video_id": "LfUsGv-ESbc", "text": "everything here until the last class all right so now we have 91 actually let's make it more generic whatever however many classes there are okay so we don't have this class anymore so now if we do the softmax over the last dimension technically we get 91 but now they're normalized so they add up to one so it's kind of a probability distribution next we want to find the max over this and that will give us a max output so we don't want to plot all the 100 predictions because that would just be like squares all over
I think it also has like values and indices yes so", "start_timestamp": "00:25:24", "end_timestamp": "00:26:20", "start_second": 1524, "end_second": 1580, "url": "https://www.youtube.com/watch?v=LfUsGv-ESbc&t=1524s", "title": "[Code] How to use Facebook's DETR object detection algorithm in Python (Full Tutorial)", "thumbnail": "https://i.ytimg.com/vi/LfUsGv-ESbc/maxresdefault.jpg"} {"video_id": "LfUsGv-ESbc", "text": "now we simply need to filter the logits and the boxes to where these top ones are so we'll filter the logits by the top K indices and we'll also filter the (I am not very gifted today) boxes by the way I'm using a Colab just because it's nice to kind of play around with a model because if I were to use a file I'd have to restart and reload the model over and over again just not as nice so now we have the logits and the boxes and if we do that right now we always get the top 5 predictions how nice is", "start_timestamp": "00:26:20", "end_timestamp": "00:27:24", "start_second": 1580, "end_second": 1644, "url": "https://www.youtube.com/watch?v=LfUsGv-ESbc&t=1580s", "title": "[Code] How to use Facebook's DETR object detection algorithm in Python (Full Tutorial)", "thumbnail": "https://i.ytimg.com/vi/LfUsGv-ESbc/maxresdefault.jpg"} {"video_id": "LfUsGv-ESbc", "text": "that and you can see the top 5 predictions are probably still cake and just to verify that we can print its shape yeah see this is what I don't like about this stuff yes okay so we just have five predictions of 92 things and we don't want the 92 we've already said we just want the 91 we could actually put that here okay so now we have five by 91 and now to give us the top five, ah, there we go so many cakes and many stop signs that's fine that's cool so the ultimate test right here is going
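Assuming DETR's usual output shapes (100 query slots, 92 class scores with the trailing "no object" column), the drop-the-last-class, softmax, and top-k filtering steps described here might look like this; the random tensors are stand-ins for real model outputs:

```python
import torch

torch.manual_seed(0)
logits = torch.randn(100, 92)   # 100 predictions, 91 real classes + "no object"
boxes = torch.rand(100, 4)      # one box per prediction

# drop the last ("no object") column, then softmax: each row now sums to 1
probs = logits[:, :-1].softmax(dim=-1)      # shape (100, 91)

# confidence = probability of each prediction's best remaining class
scores = probs.max(dim=-1).values

# keep the k most confident predictions; topk returns .values and .indices
topk = scores.topk(5)
keep_probs = probs[topk.indices]            # (5, 91)
keep_boxes = boxes[topk.indices]            # (5, 4)
labels = keep_probs.argmax(dim=-1)          # one class id per kept box
```

Raising k to 15, as done later with the elephant photo, is just `scores.topk(15)`.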
"https://www.youtube.com/watch?v=LfUsGv-ESbc&t=1644s", "title": "[Code] How to use Facebook's DETR object detection algorithm in Python (Full Tutorial)", "thumbnail": "https://i.ytimg.com/vi/LfUsGv-ESbc/maxresdefault.jpg"} {"video_id": "LfUsGv-ESbc", "text": "to be yes the human adversarial example let's check it out so we put in a Jackson Pollock image and we'll see what the model says now we're actually forcing it to make predictions right so it can't escape it will need to do something okay I made another mistake I would need to copy the image address right here like this that's what happens when you're not an idiot you get the actual image so what does the model think of our pretty image okay can't even read that so let's make this into white bird bird bird okay lots", "start_timestamp": "00:28:31", "end_timestamp": "00:29:35", "start_second": 1711, "end_second": 1775, "url": "https://www.youtube.com/watch?v=LfUsGv-ESbc&t=1711s", "title": "[Code] How to use Facebook's DETR object detection algorithm in Python (Full Tutorial)", "thumbnail": "https://i.ytimg.com/vi/LfUsGv-ESbc/maxresdefault.jpg"} {"video_id": "LfUsGv-ESbc", "text": "of birds in this image clearly clearly lots of birds in this image let's try another one let's go with this this this one yes yes absolutely love it love it okay so we copy image address and beam Mormons Wow there's a lot of birds in these Pollock images just so many birds okay let's try one last how about this one this one is a bit more human-friendly right put it in here and and and okay we get some detections there's a clock right here there is a what's that how's horses let's print let's print the labels so just so we know what they are", "start_timestamp": "00:29:35", "end_timestamp": "00:31:09", "start_second": 1775, "end_second": 1869, "url": "https://www.youtube.com/watch?v=LfUsGv-ESbc&t=1775s", "title": "[Code] How to use Facebook's DETR object detection algorithm in Python (Full Tutorial)", "thumbnail": 
"https://i.ytimg.com/vi/LfUsGv-ESbc/maxresdefault.jpg"} {"video_id": "LfUsGv-ESbc", "text": "cake horse car horse and clock okay so I see the clock like this here is clearly a clock then this rectangle on the right side must be something let's put this to read as well now that's terrible ah white back to white how about back to white okay clock we got horse right here and house probably and the entire image is again a cake yes okay so as you can see it is a pretty pretty good system but of course it is only these 90 classes but it's for now it's a it's pretty cool and it works pretty well and just the easiness with which", "start_timestamp": "00:31:09", "end_timestamp": "00:32:13", "start_second": 1869, "end_second": 1933, "url": "https://www.youtube.com/watch?v=LfUsGv-ESbc&t=1869s", "title": "[Code] How to use Facebook's DETR object detection algorithm in Python (Full Tutorial)", "thumbnail": "https://i.ytimg.com/vi/LfUsGv-ESbc/maxresdefault.jpg"} {"video_id": "LfUsGv-ESbc", "text": "you get which which you can get this stuff elephants in Kruger National Park just the easiness is astonishing you can just load it up kind of have this have a bit of a notebook and with a bit of like a very few lines of code you can put something together that detects these bounding boxes lots of elephants and remember we only have the top five elephants right here so what happens if we go for more where is our top k so here we can let maybe say the top 15 predictions and as always if we want to make the model to", "start_timestamp": "00:32:13", "end_timestamp": "00:32:58", "start_second": 1933, "end_second": 1978, "url": "https://www.youtube.com/watch?v=LfUsGv-ESbc&t=1933s", "title": "[Code] How to use Facebook's DETR object detection algorithm in Python (Full Tutorial)", "thumbnail": "https://i.ytimg.com/vi/LfUsGv-ESbc/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "Okay. Hey, everyone, looks like we're on. 
So as usual, if you have not yet, um, please enter your SUID so that we know you're here in this room. Um, so actually, can you hear me okay at the back? Is it okay? Oh, yes, is the volume okay at the back? All right. No one's responding. Yes, okay. All right. [LAUGHTER] Thank you. Okay. So, um, what I want to do today is, um, share with you two things. You know, we're approaching the end of quarter. Uh, I hope you guys are looking forward to, to the Thanksgiving break, um, next week.", "start_timestamp": "00:00:00", "end_timestamp": "00:00:44", "start_second": 0, "end_second": 44, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=0s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "Um, actually and I guess we have a lot of home viewers, but those us- those of you that are viewing this from outside California, know that we're all feeling really bad air here in California. So I hope, if you're somebody watching at home you have better air wherever you are. Um, uh, but, uh, what I hope to do today is give you some advice that will set you up for the future, uh, so if even beyond the conclusion of CS230. 
And in particular, what I want to do today is, um, share with you some advice on how to read research papers, uh, because, you know,", "start_timestamp": "00:00:44", "end_timestamp": "00:01:16", "start_second": 44, "end_second": 76, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=44s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "deep learning is evolving fast enough that even though you've learned a lot of foundations of deep learning and learned a lot of tips and tricks and probably know better than many practitioners how to actually get deep, deep learning algorithms to work already. Uh, when you're working on specific applications whether in computer vision or natural language processing or speech recognition or something else, um, for you to be able to efficiently figure out the academic literature on key parts of, uh, the, the deep learning world, will help you keep on developing and, you know,", "start_timestamp": "00:01:16", "end_timestamp": "00:01:46", "start_second": 76, "end_second": 106, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=76s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "staying on top of ideas even as they evolve over the next several years or maybe decade. So first thing I wanna do is, uh, give you advice on how, uh, when say, when I'm trying to master a new body of literature, how I go about that and hope that those techniques would be useful to help you be more efficient in how you read research papers. And then a second thing is, in previous offerings of this course, one request from a lot of students was just advice for navigating a career in machine learning. 
And so in the second half of today,", "start_timestamp": "00:01:46", "end_timestamp": "00:02:18", "start_second": 106, "end_second": 138, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=106s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "I want to share some thoughts with you on that. Okay, so it turns out that- so I guess two topics reading research papers, right? Um, and, uh, then second career advice in machine learning. So it turns out that, uh, you know, reading research papers is one of those things that a lot of P- PhD students learn by osmosis, right? Meaning that if you're a PhD student and you see, you know, a few professors or see other PhD students do certain things, then you might try to pick it up by osmosis. But I hope today to accelerate your efficiency", "start_timestamp": "00:02:18", "end_timestamp": "00:02:57", "start_second": 138, "end_second": 177, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=138s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "in how you acquire knowledge yourself from the, uh, from the a- academic literature, right? And so let's say that this is the area you want to become good at, let's say you want to build that, um, speech recognition, right? Let's turn this off now. Let's say you want to build that, um, speech recognition system that we talked about, the Robert turn on and the desk lamp. All right. 
Um, this is what I've read- this is the sequence of steps I recommend you take, uh, which is first: [NOISE] compile lists of papers and the- and by papers,", "start_timestamp": "00:02:57", "end_timestamp": "00:03:36", "start_second": 177, "end_second": 216, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=177s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "I mean, both research papers often posted on arXiv, onto the Internet, but also plus Medium posts, um, [NOISE] you know, what maybe some occasional GitHub post although those are rarer. But whatever texts or learning resources you have. And then, um, what I usually do is end up skipping around the list. All right. So if I'm trying to master a new body of knowledge, say you want to learn the most speech recognition systems, this is what it feels like to read a set of papers, which is maybe you initially start off with five papers and if on the horizontal axis,", "start_timestamp": "00:03:36", "end_timestamp": "00:04:15", "start_second": 216, "end_second": 255, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=216s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "I plot, you know, 0 percent to 100 percent read/understood, right? The way it feels like reading these papers is often read, you know, ten percent of each paper or try to quickly skim and understand each of these papers. And if based on that you decide that paper number two is a dud, right, other, other, other authors even cite it and say boy they, they sure got it wrong or you read it, and it just doesn't make sense. Then go ahead and forget it. 
And, uh, as you skip around to different papers, uh, you might decide that paper three is a really seminal one and then", "start_timestamp": "00:04:15", "end_timestamp": "00:04:54", "start_second": 255, "end_second": 294, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=255s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "spend a lot of time to go ahead and read and understand the whole thing. And based on that, you might then find a sixth paper from the citations and read that and go back and flesh out your understanding on paper four. And then find a paper seven and go and read that all the way to the conclusion. Um, but this is what it feels like as you, you know, assemble a list of papers and skip around and try to, uh, um, master a body of literature around some topic that you want to learn. And I think, um, some rough guidelines, you know,", "start_timestamp": "00:04:54", "end_timestamp": "00:05:28", "start_second": 294, "end_second": 328, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=294s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "if you read 15 to 20 papers I think you have a basic understanding of an- of an area like, maybe good enough to do some work, apply some algorithms. Um, if you read, um, 50 to 100 papers in an area like speech recognition and, and kind of understand a lot of it, then that's probably enough to give you a very good understanding of an area, right? 
You might, know- I'm, I'm always careful about when I say you're mastering a subject but you read 50 to 100 papers on speech recognition, you have a very good understanding of speech recognition.", "start_timestamp": "00:05:28", "end_timestamp": "00:05:58", "start_second": 328, "end_second": 358, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=328s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "Or if you're interested in say domain adaptation, right? By the time you've read 50 or 100 papers, you have a very good understanding of, of a subject like that. But if you read 5 to 20 papers, it's probably enough for you to implement it but maybe not, not sure this is enough for you to do research or be really at the cutting edge but these are maybe some guidelines for the volume of reading you should aspire to if you want to pick up a new area. I'll take one of the subjects in CS230 and go more deeply into it, right?", "start_timestamp": "00:05:58", "end_timestamp": "00:06:23", "start_second": 358, "end_second": 383, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=358s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "Um, now [NOISE] how do you read one paper? And, um, I hope most of you brought your laptops. So what I'm gonna do is describe to you how I read one paper, and then after that I'm actually going to ask all of you to, you know, download the paper online and just take, I don't know, uh, uh, take, take a few minutes to read a paper right here in class and see how far you can get understanding of a research paper in just minutes right, right here in class. Okay. So when reading one paper. 
So the, the, the bad way to read the paper is", "start_timestamp": "00:06:23", "end_timestamp": "00:07:02", "start_second": 383, "end_second": 422, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=383s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "to go from the first word until the last word, right? This is a bad way to- when you read a paper like this. Oh, and by the way, actually here, I'll tell you what my real life is like. So, um, I actually pretty much everywhere I go, whenever I backpack this is my actual folder. I don't want to show- this is my actual folder of unread papers. So pretty much everywhere I go, I actually have a paper, you know, a stack of papers is on my personal reading list. This is actually my real life. I didn't bring this to show you.", "start_timestamp": "00:07:02", "end_timestamp": "00:07:32", "start_second": 422, "end_second": 452, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=422s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "This is in my backpack all the time. Ah, and I think that- these days on my team at Landing AI and Deeplearning.ai, I personally lead a reading group where I lead a discussion about two papers a week. Uh, but to select two papers, that means I need to read like five or six papers a week to select two, you know, to present and discuss at the Landing AI and Deeplearning.ai meeting group. So this is my real life, right? And how I try to stay on top of the literature and, and I have a- I'm doing a lot. 
If I can find the time, if I can find the time to read", "start_timestamp": "00:07:32", "end_timestamp": "00:08:01", "start_second": 452, "end_second": 481, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=452s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "a couple of papers a week, hopefully all of you can too. Uh, but when I'm reading a paper, uh, this is, this is how I recommend you go about it which is, do- do- don't go for the first word and read until the last word, uh, instead, uh, take multiple passes through the paper [NOISE]. Right? Um, and so, you know, step one is, uh, [NOISE] read the title, [NOISE] the abstract, um, [NOISE] and also the figures. Um, especially in Deep Learning, there are a lot of research papers where sort of the entire paper is summarized in one or two figures in the figure caption.", "start_timestamp": "00:08:01", "end_timestamp": "00:08:48", "start_second": 481, "end_second": 528, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=481s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "So, um, so sometimes, just by reading the title, abstract and, you know, the key neural network architecture figure that just describes what the whole papers are, and maybe one or two of the experiments section. You can sometimes get a very good sense of what the whole paper is about without, you know, hardly reading any of the texts in the paper itself, right? Tha- tha- that's the first pass. 
Um, second pass, I would tend to read more carefully, um, [NOISE] the intro, the conclusions, um, look carefully at all the figures again,", "start_timestamp": "00:08:48", "end_timestamp": "00:09:21", "start_second": 528, "end_second": 561, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=528s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "[NOISE] and then skim, um, the rest, and you know, um, I- I don't know how many of you have published academic papers, but, uh, when people publish academic papers, um, part of, you know, the publication process is, uh, convincing the reviewers that your paper is worthy for acceptance. And so what you find is that the abstract, intro and conclusion is often when the authors try to summarize their work really, really carefully, uh, to make a case, to make a very clear case to the reviewers as to why, you know, they think their paper should be accepted for publication.", "start_timestamp": "00:09:21", "end_timestamp": "00:09:58", "start_second": 561, "end_second": 598, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=561s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "And so because of that, you know, maybe slightly not, slightly unusual incentive, the intro and conclusion and abstract often give a very clear summary of what's the paper actually about. Um, and depending on, [NOISE] um, and again, just to be, you know, b- bluntly honest with you guys, um, the related work section is useful if you want, sometimes is useful if you want to- to understand related work and figure out what's- what are the most important works in the papers. 
But the first time you read this, you might skim or even skip,", "start_timestamp": "00:09:58", "end_timestamp": "00:10:35", "start_second": 598, "end_second": 635, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=598s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "skim the related work section. It turns out, unless you're already familiar with the literature, if this is a body of work that you're not that familiar with, the related work section is sometimes almost impossible to understand. Uh, and again, since I'm being very honest with you guys, sometimes, related work section is when the authors try to cite everyone that could possibly be reviewing the paper and to make them feel good, uh, uh, and then hopefully accept the paper. And so related work sessions is sometimes written in funny ways, right?", "start_timestamp": "00:10:35", "end_timestamp": "00:11:01", "start_second": 635, "end_second": 661, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=635s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "Um, and then, uh, [NOISE] step 3, I would often read the paper, but, um, [NOISE] just skip the math [NOISE], right? Um, and four, read the whole thing, uh, but skip parts that don't make sense, [NOISE] right? You know, um, I think that, uh, one thing that's happened many times in the research is that, I mean, the papers will tend to be cutting edge research, and so when, uh, we publish things, we sometimes don't know what's really important and what's not important, right? 
So there are- there are many examples of- of well known,", "start_timestamp": "00:11:01", "end_timestamp": "00:12:00", "start_second": 661, "end_second": 720, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=661s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "highly cited research papers where some of it was just great stuff and some of it, you know, turned out to be unimportant. But at the time the paper was written, the authors did not know, every- no one on the planet knew what was important and what was not important. And maybe one example. Um, the LeNet-5 paper, right? Sample paper by Yann LeCun. Part of it was phenomenal, just established a lot of the foundations of ConvNets. And so it's, uh, one of the most incredibly influential papers. But you go back and read that paper,", "start_timestamp": "00:12:00", "end_timestamp": "00:12:27", "start_second": 720, "end_second": 747, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=720s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "an- another sort of, whole half of the paper was about other stuff, right? Transducers and so on that is much less used. And so- and so it's fine if you read a paper and some of it doesn't make sense because it's not that unusual, or sometimes it just happens that, um, great research means we're publishing things at the boundaries of our knowledge and sometimes, ah, uh, the stuff you see, you know, we'll realize five years in the future that that wasn't the most important thing after all, right? 
Or that- what was the key part of the algorithm,", "start_timestamp": "00:12:27", "end_timestamp": "00:12:54", "start_second": 747, "end_second": 774, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=747s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "maybe it wasn't what the authors thought. And so sometimes the past papers don't make sense. It's okay to skim it initially and move on, right? Uh, uh, unless you're trying to do a pe- unless you're trying to do deep research and really need to master it, then go ahead and spend more time. But if you're trying to get through a lot of papers, then, you know, then- then it's just prioritizing your time, okay? Um, and so, ah, just a few last things and then I'll ask you to practice this yourself with a paper, right?", "start_timestamp": "00:12:54", "end_timestamp": "00:13:25", "start_second": 774, "end_second": 805, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=774s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "Um, you know, I think that when you've read and understood the paper, um, [NOISE] these are questions to try to keep in mind. And when you read a paper in a few minutes, maybe try to answer these questions: what do the authors try to accomplish? And what I hope to do in a few minutes is ask you to, uh, download a paper off the Internet, read it, and then, um, try to answer these questions and discuss your answer to these questions with- with- with your peers, right? With others in the class, okay? 
Um, what were the key elements, [NOISE]", "start_timestamp": "00:13:25", "end_timestamp": "00:14:20", "start_second": 805, "end_second": 860, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=805s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "what can you use yourself, and um, [NOISE] okay? So I think if you can answer these questions, hopefully that will reflect that you have a pretty good understanding of the paper, okay? Um, and so what I would like you to do is, um, pull up your laptop and then so you- there- there's actually a- so I think on the, uh, ConvNet videos, right? On, um, the- the deeplearning.ai ConvNet videos on Coursera, you learned a bit about, um, ah, well, various neural network architectures up to ResNets. And it turns out that there's another, uh,", "start_timestamp": "00:14:20", "end_timestamp": "00:15:16", "start_second": 860, "end_second": 916, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=860s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "follow-on piece of work that maybe builds on some of the ideas of ResNets, which is called DenseNets. So, what I'd like you to do is, um, oh, and- and so what I'd like you to do is actually try this. [NOISE] And when I'm reading a paper, [NOISE] again, in the earlier stages, don't get stuck on the math, just go ahead and skim the math, and read the English text where you get through faster. 
Ah, and maybe one of the principles is, go from the very efficient high information content first, and then go to the harder material later, right?", "start_timestamp": "00:15:16", "end_timestamp": "00:15:42", "start_second": 916, "end_second": 942, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=916s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "That's why often I just skim the math and I don't- if I don't understand some of the equations, just move on, and then only later go back and, and really try to figure out the math more carefully, okay? So what I'd like you to do is take on a- I want you to take, um, uh, uh, wonder if, uh, let's- let's- let's try, let's- let's- have you take seven minutes. I was thinking maybe one- one minute per page is quite fast and, um, [NOISE] search for this paper, [NOISE] Densely Connected Convolutional Networks,
And go ahead and, uh, take, why don't you take like seven minutes to read this paper and I'll let you know when the time is passed, and then after that time, um, I'd like you to, you know, discuss with your, with, with the others, right, what, wha- what you think are the answers,", "start_timestamp": "00:16:18", "end_timestamp": "00:17:00", "start_second": 978, "end_second": 1020, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=978s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "especially the first two. Because the other two you can leave for later. Why don't you go ahead and take a few minutes to do that now, and then I'll let you know when, um, sort of like, seven minutes have passed and then you can discuss your answers to these with your friends, okay? [NOISE] All right guys. So, um, anyone with any thoughts or insights, surprises, or thoughts from this? So, now you've spent 11 minutes on this paper, right? Seven minutes reading, four minutes discussing. It was a very, very short period of time,", "start_timestamp": "00:17:00", "end_timestamp": "00:17:33", "start_second": 1020, "end_second": 1053, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=1020s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "but any, any thoughts? What do you think of the paper? Come on, you-all, you-all just spent a lot of time sitting around, discussing with each other. Wha- wha- what did people think about the time you spent trying to read the paper? Actually, did you feel you, how, actually, r- raise your hand if you feel, you know, you've kind of understood the main concepts in the paper just a bit. 
Okay, yeah, like, two-thirds of you, many of you. And, actually, what did you think of the figures? Wow, people are really less energetic today than usual [inaudible]", "start_timestamp": "00:17:33", "end_timestamp": "00:18:29", "start_second": 1053, "end_second": 1109, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=1053s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "So I think this is one of those papers where the, the paper is almost entirely summarized in figures one and two, all right. I think if you [inaudible] um, if you look at Figure One and the caption there and Figure Two on page three and the caption there and understand those two figures, those really convey, you know, 80 percent of the idea of the paper, right? Um, and I think that, uh, um, couple of other tips. So, um, it turns out that as you read these papers with practice, you do get faster. So, um, for example, Table One, uh,", "start_timestamp": "00:18:29", "end_timestamp": "00:19:08", "start_second": 1109, "end_second": 1148, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=1109s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "on page four, right, the, you know, this mess of the table on top. This is a pretty common format or a format like this is how a lot of authors use to describe a network architecture, especially in computer vision. So one of the things you find as well is that, um, the first time you see something like Table One it just looks really complicated. 
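For readers who want the paper's central idea in code form, here is a minimal sketch (my own illustration, not from the lecture or the authors' code) of the concatenation pattern that Figures One and Two of the DenseNet paper summarize: inside a dense block, each layer sees the concatenation of all earlier feature maps and contributes `growth_rate` new features.

```python
import numpy as np

def dense_block(x, weights):
    """Toy dense block: each layer consumes the concatenation of ALL
    previous feature maps and appends its own output to the list."""
    features = [x]
    for W in weights:
        h = np.concatenate(features, axis=-1) @ W  # concat all previous outputs
        features.append(np.maximum(h, 0.0))        # ReLU nonlinearity
    return np.concatenate(features, axis=-1)       # block output: everything so far

# Toy dimensions: 8 input features, 3 layers, growth rate 4.
rng = np.random.default_rng(0)
growth_rate, num_layers, in_dim = 4, 3, 8
weights = [rng.normal(size=(in_dim + growth_rate * i, growth_rate))
           for i in range(num_layers)]
out = dense_block(rng.normal(size=(2, in_dim)), weights)
print(out.shape)  # (2, 20): 8 input features + 3 layers x growth rate 4
```

Note the design difference from the ResNets mentioned earlier: a ResNet adds the shortcut to a layer's output, while DenseNet concatenates, so the feature dimension grows linearly with depth at a rate set by `growth_rate`.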
But by the time you've read a few papers in a similar format, you will look at Table One and go, \"Oh, yeah, got it.\" You know, this is, this is, this is the DenseNet-121", "start_timestamp": "00:19:08", "end_timestamp": "00:19:37", "start_second": 1148, "end_second": 1177, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=1148s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "versus DenseNet-169 architecture, and you will more quickly pick up those things. And so another thing you'll find is that, um, reading these papers actually gets better with practice, because you see different authors use different ways or similar ways of expressing themselves, and you get used to that. You'll actually be faster and faster at, uh, implementing these, um, at, at, at understanding these ideas. And I think, I know these days when I'm reading a paper like this, it maybe takes me about half an hour to,", "start_timestamp": "00:19:37", "end_timestamp": "00:20:03", "start_second": 1177, "end_second": 1203, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=1177s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "to feel like, and I, I know I gave you guys seven minutes when I thought I would need maybe half an hour to figure out a paper like this. Uh, um, uh, and I think, uh, for a more c- uh, I find that, uh, it's not unusual for people relatively new to machine learning to need maybe an hour to kind of, you know, really understand a paper like this. 
Um, and then I know I'm pretty experienced in machine learning, so I'm down to maybe half an hour for a paper like this, maybe even 20 minutes if it's a really easy one.", "start_timestamp": "00:20:03", "end_timestamp": "00:20:29", "start_second": 1203, "end_second": 1229, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=1203s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "But there are some outliers, so I have some colleagues who sometimes stumble across a really difficult paper. You need to chase down all the references and learn a lot of other stuff. So sometimes you come across a paper that takes you three or four hours or even longer to really understand it, but, uh, but I think depending on how much time you want to spend per week reading papers, um, you could actually learn, you know, learn a lot, right, um, uh, doing what you just did by maybe spending half an hour per paper,", "start_timestamp": "00:20:29", "end_timestamp": "00:20:56", "start_second": 1229, "end_second": 1256, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=1229s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "an hour a paper rather than seven minutes, right? Um, so, all right. I feel like, uh, yeah, and so, I, I think it's great, and, and, and notice that I've actually not said anything about the content of this paper, right? So whatever you guys just learned, that was all you. I had nothing to do with it. So, yeah, like you have the ability to go and learn this stuff by yourself. You don't need me anymore, right? [LAUGHTER] Um, so just the last few comments. Um, let's see. 
So the other things I get asked, questions I get is,", "start_timestamp": "00:20:56", "end_timestamp": "00:21:31", "start_second": 1256, "end_second": 1291, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=1256s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "uh, you know, where, where do you go? The deep learning field evolves so rapidly. So where, where do you go, uh, to? So if you're trying to master a new body of knowledge, definitely do web searches, and there are often good blog posts on, you know, here are the most important papers in speech recognition. There are lots of great resources there. And then the other thing you, I don't know, a lot of people try, want to do is try to keep up with the state of the art of deep learning even as it's evolving rapidly.", "start_timestamp": "00:21:31", "end_timestamp": "00:21:58", "start_second": 1291, "end_second": 1318, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=1291s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "And so, um, I, I- I'll just tell you where I go to keep up with, um, you know, discussions, announcements. And surprisingly, Twitter is becoming an impo- surprisingly important place for researchers to find out about, um, new things. Um, there's an ML Subreddit, it is actually pretty good. Um, lot of noise, but many important pieces of work do get mentioned there. Uh, some of the top machine-learning con- conferences are NIPS, ICML, and ICLR, right? 
And so whenever these conferences come around, take a look and glance throughout these,", "start_timestamp": "00:21:58", "end_timestamp": "00:22:34", "start_second": 1318, "end_second": 1354, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=1318s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "the titles, see if there's something that interests you. And then I think I'm, I'm fortunate I guess to have, um, friends, you know, uh, both colleagues here at Stanford as well as colleagues in several other teams I work with that, um, uh, that just tell me when there's a cool paper, I guess. But I think with, here within Stanford or among your workplace, for those of you taking this at SCPD, you can form a community that shares interesting papers. So a lot of the groups I have on Slack and we regularly Slack each other or send,", "start_timestamp": "00:22:34", "end_timestamp": "00:23:03", "start_second": 1354, "end_second": 1383, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=1354s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "send each other, uh, text messages on the Slack messaging system, where you find interesting papers, and tha- tha- that's been great for me actually. Um, yeah, oh, and, and, and Twitter, let's see. Kian is, I follow Kian, you can follow him too. Uh, This is me, Andrew Y Ng, right? Um, I probably don't Slack up papers as often as I do. But if you look at, and you can also look at who we follow, and there are a lot of good researchers, uh, that, that will share all these things online. 
Oh, and, um, there, there are,", "start_timestamp": "00:23:03", "end_timestamp": "00:23:36", "start_second": 1383, "end_second": 1416, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=1383s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "there's a bunch of people that also use a website called Arxiv Sanity. Um, I don't as much sometimes, um, but there's lots of resources like that, all right? Um. All right. Cool. So just two last tips for how to read papers and get good at this. Um, so to more deeply understand the paper, uh, some of the papers will have math in it. Uh, and, actually, if you read the, I don't know, you all learned about Batch Norm, right? In the second module's videos. If you read the Batch Norm paper, it's actually one of the harder papers to read.", "start_timestamp": "00:23:36", "end_timestamp": "00:24:17", "start_second": 1416, "end_second": 1457, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=1416s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "There's a lot of math in the derivation of Batch Norm but there are papers like that. And if you want to make sure you understand the math here's what I would recommend, which is, read through it, take detailed notes and then see if you can re-derive it from scratch. So if you want to deeply understand the math of an algorithm from like, you know, Batch Norm or the details of back-prop or something the good practice. 
And I think a lot of sort of a theory- theoretical science and mathematics Ph.D students will use a practice like this.", "start_timestamp": "00:24:17", "end_timestamp": "00:24:51", "start_second": 1457, "end_second": 1491, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=1457s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "You just go ahead and read the paper. Make sure you understand it, and then to make sure you really, really understand it, put, put, put aside the results and try to re-derive the math yourself from scratch. And if you can start from a blank piece of paper and re-derive one of these algorithms from scratch, then that's a good sign that you really understood it. When I was a Ph.D. student I did this a lot, right? That you know I would read a textbook or read a paper or something and then put aside whatever I read", "start_timestamp": "00:24:51", "end_timestamp": "00:25:17", "start_second": 1491, "end_second": 1517, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=1491s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "and see if I could re-derive it from scratch starting from a blank piece of paper, and only if I could do that would I, you know, feel like yep, I think I understand this piece of math. And it turns out, if you want to do this type of math yourself, it is your ability to re-derive this type of math that gives you the ability to generalize, to derive new, novel pieces of math yourself. So I think I actually learned a lot of the math of machine learning by doing this.
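As a concrete instance of the kind of result worth re-deriving from a blank page, the forward pass of the Batch Norm paper mentioned above (Ioffe & Szegedy, 2015) can be written, in the paper's standard notation, as:

```latex
% Batch Normalization forward pass over a mini-batch B = {x_1, ..., x_m};
% gamma and beta are learned scale/shift parameters, epsilon avoids
% division by zero.  Re-deriving the backward pass from these four
% equations is the exercise the lecture recommends.
\mu_B      = \frac{1}{m} \sum_{i=1}^{m} x_i
\qquad
\sigma_B^2 = \frac{1}{m} \sum_{i=1}^{m} \left( x_i - \mu_B \right)^2
\hat{x}_i  = \frac{x_i - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}}
\qquad
y_i = \gamma \, \hat{x}_i + \beta
```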
And just by re-deriving other people's work that", "start_timestamp": "00:25:17", "end_timestamp": "00:25:44", "start_second": 1517, "end_second": 1544, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=1517s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "allowed me to learn how to derive my own novel algorithms. And actually sometimes you go to the art galleries, right? They go to the Smithsonian. You see these art students, you know, sitting on the floor copying the great artworks, the great paintings you know, painted by the masters centuries ago. And so I think just as today there are art students sitting in the de Young Museum or wherever, and I was at the Getty Museum in LA a few months ago. You actually see these art students you know, copying the work of the masters.", "start_timestamp": "00:25:44", "end_timestamp": "00:26:16", "start_second": 1544, "end_second": 1576, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=1544s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "And I think a lot of the ways that you want to become good at the math of machine learning yourself, this is a good way to do it. It's time-consuming but then you can become good at it that way. And same thing for code, right? I think the simple, lightweight version of learning would be to download and run the open source code if you can find it, and a deeper way to learn this material is to re-implement it from scratch. Right, it is easy to download an open-source implementation and run it and say ooh, it works.
But if you can re-implement one of these algorithms from", "start_timestamp": "00:26:16", "end_timestamp": "00:26:56", "start_second": 1576, "end_second": 1616, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=1576s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "scratch then that's a strong sign that you've really understood this algorithm. Okay? Um, alright. And then longer term advice. Right. You know, for you to keep on learning and keep on getting better and better, the more important thing is for you to learn steadily, not for you to have a focused, intense activity you know, like over Thanksgiving you read 50 papers over Thanksgiving and then you're done for the rest of your life. It doesn't work like that, right? And I think you're actually much better off reading two or three papers a week for", "start_timestamp": "00:26:56", "end_timestamp": "00:27:45", "start_second": 1616, "end_second": 1665, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=1616s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "the next year than you know, cramming everything right over, over one long weekend or something. Actually in education we actually know that spaced repetition works better than cramming, so the same thing, same body of learning. If you learn a bit every week and space it out you actually have much better long-term retention than if you try to cram everything in the short term, so there's, there's a very solid result that we know from pedagogy and how the human brain works.
So, so if you're able to- so, so again, the way I live my life is, my backpack.", "start_timestamp": "00:27:45", "end_timestamp": "00:28:15", "start_second": 1665, "end_second": 1695, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=1665s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "I just always have a few papers with me. And I find that I can, I read almost everything on the tablet. Almost everything on iPad, but I find that research papers are one of the things where the ability to flip between pages and skim I still find more efficient on paper. So I read almost nothing on paper these days except for research papers, but that's just me. Your mileage may vary. Maybe something else will work better for you. Okay? All right. So let's see, that's it for reading research papers, I hope that while you're in CS230, you know,
So that's it for reading papers.", "start_timestamp": "00:28:45", "end_timestamp": "00:29:20", "start_second": 1725, "end_second": 1760, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=1725s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "The second thing we're gonna do today is just give some longer-term advice on navigating a career in machine learning, right? Any questions about this before I move on? Okay. Cool. All right. But I hope that was useful. Some of this I wish I had known when I was a first-year PhD student but c'est la vie. Alright. Let's see. Can we turn on the lights please? Alright. So kind of in response to requests from early- students in earlier versions of the class, before we, you know as we approach the end of the quarter,", "start_timestamp": "00:29:20", "end_timestamp": "00:29:58", "start_second": 1760, "end_second": 1798, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=1760s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "want to give some advice to how to navigate a career in machine learning, right? So today machine learning there are so many options to do, so many exciting things. So how do you, you know, what do you want to do? So I'm going to assume that most of you will want to do one of two things, right? At some point you know you want to get the job, right? Maybe a job that does work in machine learning and including a faculty position for those of you who want to be a professor. 
But I guess eventually most people end up with a job", "start_timestamp": "00:29:58", "end_timestamp": "00:30:34", "start_second": 1798, "end_second": 1834, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=1798s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "I think I guess there are other alternatives but but and some of you want to go on to more advanced graduate studies although even after you get your PhD at some point, most people do get a job after the PhD. And by job I mean either in a big company, you know, or a or a startup, right? But regardless of the details of this, I think- I hope most of you want to do important work. Okay. So what I'd like to do today is break, you know, this into, how do you find a job or join a Ph.D program or whatever that lets you do important work.", "start_timestamp": "00:30:34", "end_timestamp": "00:31:21", "start_second": 1834, "end_second": 1881, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=1834s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "And I want to break this discussion into two steps. One is just how do you get a position? How do you get that job offer or how do you get that offer of admission to the Ph.D program or admission to the master's program or whatever you wanna do. And then two is selecting a position. Between going to this university versus that university or between taking on the job in this company versus that company. 
What are the ones that will tend to set you up for success, for long-term personal success and career success?", "start_timestamp": "00:31:21", "end_timestamp": "00:31:56", "start_second": 1881, "end_second": 1916, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=1881s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "And really I hope that, by the way, I hope that all of these are just tactics to let you do important work, right, and this, I hope that's what you want to do. So you know, what do recruiters look for? And I think just to keep the language simpler I'm going to pretend that, I'm just gonna talk about finding a job. But a lot of very similar things apply for PhD programs; it's just that instead of saying recruiters I would say admissions committees, right, but let me just focus on the job scenario.", "start_timestamp": "00:31:56", "end_timestamp": "00:32:29", "start_second": 1916, "end_second": 1949, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=1916s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "So most recruiters look for technical skills. So for example, there are a lot of machine learning interviews that will ask you questions like, you know, where would you use gradient descent or batch gradient descent or stochastic gradient descent, and what happens when the mini-batch size is too large or too small, right? So there are companies, many companies today asking questions like that in the interview process.
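For reference, the three gradient-descent variants that interview question refers to differ only in how many training examples are used per parameter update. The following is a minimal sketch (my own, not from the lecture); the function name and toy linear-regression setup are illustrative assumptions.

```python
import numpy as np

def gradient_descent(X, y, lr=0.1, epochs=100, batch_size=None, seed=0):
    """Fit w for linear regression y ~ X @ w with squared loss.

    batch_size=None      -> (full-)batch gradient descent
    batch_size=1         -> stochastic gradient descent (SGD)
    1 < batch_size < n   -> mini-batch gradient descent
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    bs = n if batch_size is None else batch_size
    for _ in range(epochs):
        idx = rng.permutation(n)              # reshuffle each epoch
        for start in range(0, n, bs):
            batch = idx[start:start + bs]
            Xb, yb = X[batch], y[batch]
            grad = 2 * Xb.T @ (Xb @ w - yb) / len(batch)
            w -= lr * grad
    return w

# Toy data with known solution w* = [2.0, -1.0]
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = X @ np.array([2.0, -1.0])
w_batch = gradient_descent(X, y)                         # batch GD
w_mini  = gradient_descent(X, y, batch_size=32)          # mini-batch GD
w_sgd   = gradient_descent(X, y, lr=0.01, batch_size=1)  # SGD
```

All three recover roughly [2.0, -1.0] on this noiseless toy problem; the trade-off the interviewer is probing is that larger batches give smoother, more hardware-efficient but less frequent updates, while tiny batches give noisy gradients that usually force a smaller learning rate.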
Or can you explain the difference between an LSTM and a GRU, and when would you use a GRU?", "start_timestamp": "00:32:29", "end_timestamp": "00:32:58", "start_second": 1949, "end_second": 1978, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=1949s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "So you really get questions like that in many job interviews today. And so recruiters looking for ML skills as well as, and so you will often be quizzed on ML skills as well as your coding ability, right? And then beyond your- and I think Silicon Valley's become quite good at giving people the assessments to test for real skill in machine learning engineering and in software engineering. And then the other thing that recruiters will look for, that many recruiters will look for is meaningful work. And in particular, um, uh, you know,", "start_timestamp": "00:32:58", "end_timestamp": "00:33:43", "start_second": 1978, "end_second": 2023, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=1978s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "there are some candidates that apply for jobs that have very, um, theoretical, very academic skills, meaning you can answer all the quiz questions about, you know, what is Batch Norm? Can you derive the [inaudible] for this? But unless you've actually shown that you can apply this in a meaningful setting, it's harder to convince a company or a recruiter that you know not just the theory, but that you know how to actually make this stuff work.
And so, um, having done meaningful work using machine learning is a very strong,", "start_timestamp": "00:33:43", "end_timestamp": "00:34:12", "start_second": 2023, "end_second": 2052, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=2023s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "is a very desirable candidate, I think, to a lot of companies. Kind of work experience. And I think really, whether you've done, whether you've done something meaningful, um, reassures that, you know, that you can actually do work, right? There's not just you can answer quiz questions, that you know how to implement learning algorithms that work. Um, and, and maybe, um, uh, yeah, right. Um, and then many recruiters actually look for your ability to keep on learning new skills and stay on top of machine learning even as it evolves as well.", "start_timestamp": "00:34:12", "end_timestamp": "00:34:44", "start_second": 2052, "end_second": 2084, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=2052s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "Okay. And so a very common pattern for the, um, successful, you know, AI engineers, say, machine learning engineers, would be the following, where if on the horizontal axis, I plot different areas. So, you might learn about machine learning. Learn about deep learning. Learn about probabilistic graphical models. Learn about NLP. Learn about computer vision and so on for other areas of AI and machine learning. 
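Coming back to the LSTM-versus-GRU interview question mentioned above: one concrete, easily quantified difference is parameter count, since an LSTM has four weight/bias groups (three gates plus a cell candidate) against the GRU's three. The helper functions below are my own illustration and assume the common one-bias-vector-per-gate formulation; specific libraries may count biases differently.

```python
def lstm_params(input_dim, hidden_dim):
    # 4 x (W: hidden x input, U: hidden x hidden, b: hidden)
    # -- input, forget, output gates plus the cell candidate.
    return 4 * (hidden_dim * (input_dim + hidden_dim) + hidden_dim)

def gru_params(input_dim, hidden_dim):
    # 3 x (W, U, b) -- reset gate, update gate, and candidate state.
    return 3 * (hidden_dim * (input_dim + hidden_dim) + hidden_dim)

print(lstm_params(128, 256))  # 394240
print(gru_params(128, 256))   # 295680
```

Under these assumptions a GRU layer is exactly 3/4 the size of the matching LSTM layer, which is one standard answer to "when would you use a GRU": when you want a cheaper, faster model and can tolerate slightly less gating flexibility.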
Um, and if the vertical axis is depth, uh, a lot of the strongest candidates for jobs are, um, T-shaped individuals.", "start_timestamp": "00:34:44", "end_timestamp": "00:35:24", "start_second": 2084, "end_second": 2124, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=2084s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "Meaning that you have a broad understanding of many different topics in AI and machine learning, and very deep understanding in, you know, maybe at least one area. Maybe more than one area. Um, and so I think by taking CS230 and doing the things that you're doing here, hopefully you're acquiring a deeper understanding of one of these areas of deep learning in particular. Um, but the other thing that even, you know, deepens your knowledge in one area will be the projects you work on. Um, the open source contributions you make, right.
These are the things that let you deepen your knowledge and, and convince recruiters that you both have the broad technical skills,", "start_timestamp": "00:35:57", "end_timestamp": "00:36:29", "start_second": 2157, "end_second": 2189, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=2157s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "and when called on you're able to apply these in a meaningful way to an important problem, right? And in fact, um, the way we design CS230 is actually a microcosm of this. Where, um, you know, you learned about neural nets. Um, then about topics like Batch Norm, ConvNets, sequence models, right? I'm just gonna say RNNs. So, actually you have a breadth within the field of deep learning. And then what happens is, well, then, and the reason I want you to work on the project is so that you can pick one of these areas.", "start_timestamp": "00:36:29", "end_timestamp": "00:37:06", "start_second": 2189, "end_second": 2226, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=2189s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "And maybe go deep and build a meaningful project in one of these areas, which will, and it's not just about making a resume look good, right? It's about giving you the practical experience to make sure you actually know how to make these things work, um, uh, and give you the learning. To make sure that you actually know how to make a CNN work, to make an RNN work. All right. And then of course at Stanford many students also list their projects on their resumes obviously. 
Um, so, I think the um, let's see.", "start_timestamp": "00:37:06", "end_timestamp": "00:37:40", "start_second": 2226, "end_second": 2260, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=2226s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "The- the- the failure modes. The bad ways to navigate your career. Um, there are some students who just do this, right? There are some Stanford students that just take class, after class, after class, after class, and go equally in depth in a huge range of areas. And this is not terrible. You can actually still get a job. Sometimes you can even get into some Ph.D. programs like this with all the breadth, but this is not the best way to navigate your career. All right? So, there are some Stanford students who take tons of classes.", "start_timestamp": "00:37:40", "end_timestamp": "00:38:11", "start_second": 2260, "end_second": 2291, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=2260s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "You can get a good GPA doing that, but do nothing else. And this is not terrible, but this is not great. It's not as good as the alternative. Um, there's one other thing I've seen Stanford students do which is, uh, just try to do that, right? You just try to jump in on day one, and go really really deep in one area. And again, um, this has its own challenges, I guess. You know, one failure mode is actually not great. 
Sometimes you actually get, um, undergrad freshmen at Stanford that have not yet learned a lot about coding,", "start_timestamp": "00:38:11", "end_timestamp": "00:38:48", "start_second": 2291, "end_second": 2328, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=2291s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "or software engineering, or machine learning, and try to jump into research projects right away. This turns out not to be very efficient because it turns out online courses or Stanford classes are a very efficient way for you to learn about the broad range of areas. And after that, going deeper and getting experience in one vertical area then deepens your knowledge. It makes it so you know how to actually make those ideas work. Uh, so I do see sometimes, unfortunately, you know, some Stanford freshmen join us already knowing how to code and have implemented,", "start_timestamp": "00:38:48", "end_timestamp": "00:39:14", "start_second": 2328, "end_second": 2354, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=2328s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "you know, some learning algorithms, but some students that do not yet have much experience try to jump into research projects right away. And that turns out not to be very productive for the student or for the research group because until you've taken the classes and mastered the basics it's difficult to understand what's really going on in the advanced projects, right? Um, so I would say this is actually worse than that, right? This is actually okay. This is actually pretty bad. 
It is- I would not do this for your career,", "start_timestamp": "00:39:14", "end_timestamp": "00:39:43", "start_second": 2354, "end_second": 2383, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=2354s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "right? Yeah. Probably not. Yeah. Um, and then the other not-so-great mode that you see some Stanford students do is get a lot of breadth, and then do a tiny project here. Do a tiny project there. Do a tiny project there. Do a tiny project there. And you end up with ten tiny projects, but no one or two really significant projects. So again, this is not terrible, but, you know, beyond a certain point, by the way, recruiters are not impressed by volume, right? So, having done 10 lame projects is actually not impressive.", "start_timestamp": "00:39:43", "end_timestamp": "00:40:20", "start_second": 2383, "end_second": 2420, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=2383s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "Not nearly as impressive as doing one great project or two great projects. And again, there's more to life than impressing recruiters, but recruiters are very rational. And the reason recruiters are less impressed by someone whose profile looks like this is because they're actually probably factually less skilled and less able at doing good work in machine learning compared to someone that has done a substantive project and knows what it takes to see the whole thing through. Does that make sense? 
So, when I say you'd have recruiters more or", "start_timestamp": "00:40:20", "end_timestamp": "00:40:47", "start_second": 2420, "end_second": 2447, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=2420s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "less impressed is because they're actually quite rational, in terms of, uh, trying to understand how good you are at um, uh, at, at, doing important work, at building meaningful AI systems. Makes sense? Um, and so in terms of building up both the horizontal piece and vertical piece, uh, this is what I recommend. Um, to build a horizontal piece, a lot of this is about building foundational skills. And, um, it turns out coursework is a very efficient way to do this. Uh, you know, in, in, in these courses, right, you know various instructors like us,", "start_timestamp": "00:40:47", "end_timestamp": "00:41:26", "start_second": 2447, "end_second": 2486, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=2447s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "but many other Stanford professors, um, put a lot of work into organizing the content to make it efficient for you to learn this material. Um, and then also reading research papers which we just talked about. Having a community will help you. Um, and then that is often, uh, building a more deep and, um, relevant project, right? And, and, and the pro- projects have to be relevant. So, you know, if you want to build a career machine learning, build a career in AI. 
Hopefully, the project is something that's relevant to CS,", "start_timestamp": "00:41:26", "end_timestamp": "00:42:06", "start_second": 2486, "end_second": 2526, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=2486s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "or machine learning, or AI deep learning. Um, I do see, I don't know, for some reason, I feel like, uh, a surprisingly large number of Stanford students I know are in the Stanford dance crew, and they spend a lot of time on that which is fine. If you enjoy dancing, go have fun. Don't, don't, you know, you, you don't need to work all the time. So, go join the dance crew, or go on the overseas exchange program. And go hang out in London and have fun, but those things do not as directly contribute to this, right?", "start_timestamp": "00:42:06", "end_timestamp": "00:42:36", "start_second": 2526, "end_second": 2556, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=2526s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "Yeah. I know, I think, I think, in an earlier version of this presentation, you know, students walked away, saying ha, you know, Andrew says we should not have fun we should work all the time and that's not the goal [LAUGHTER]. Um, All right. There is one. All right. Um, you know, there is the uh, Saturday morning problem which all of you will face. Right? Which is every week, uh, including this week on Saturday morning you have a choice. 
Um, you can, uh, read a paper [LAUGHTER] or work on research or work on open source or,", "start_timestamp": "00:42:36", "end_timestamp": "00:43:50", "start_second": 2556, "end_second": 2630, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=2556s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "I don't know what people do, or you can watch TV or something, [LAUGHTER] right? Um, and you will face this choice, like, maybe every Saturday, you know, for the rest of your life or for all Saturdays in the rest of your life. And, um, you know, you can build out that foundation skills, go deep or go have fun, and you should have fun, all right? Just for the record. But one of the problems that a lot of people face is that, um, even if you spend all Saturday and all Sunday reading a research paper, um, you know, the following Monday,", "start_timestamp": "00:43:50", "end_timestamp": "00:44:22", "start_second": 2630, "end_second": 2662, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=2630s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "or maybe spend all Saturday and Sunday working hard, it turns out that the following Monday, you're not that much better at deep learning. Is like, yeah, you work really hard. So you read five papers, you know, great. Uh, but if you work on a research group the professor or your manager if you're in a company, they have no idea how hard you work. 
So there's no one to come by and say, \"Oh, good job working so hard all weekend.\" So no one knows the sacrifices you make all weekend to study or code open source, just no one knows.", "start_timestamp": "00:44:22", "end_timestamp": "00:44:51", "start_second": 2662, "end_second": 2691, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=2662s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "So there's almost no short-term reward to doing these things, whereas there might be short-term rewards for doing other things, right? Um, uh, but the secret to this is that it's not about reading papers really, really hard for one Saturday morning or for one whole Saturday and it being done. The secret to this is to do this consistently, um, you know, for years, um, or at least a month. And it turns out that if you read, um, two papers a week, and you do that for a year then you have read 100 papers", "start_timestamp": "00:44:51", "end_timestamp": "00:45:25", "start_second": 2691, "end_second": 2725, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=2691s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "after a year and you will be much better at deep learning after that, right? I mean, you will have read 100 papers in the year if you read two papers a week. And so for you to be successful is much less about the intense burst of effort you put in over one weekend. 
It's much more about whether you can find a little bit of time every week to read a few papers or contribute to open source or take some online courses, uh, but- and if you do that you know every week for six months or do that every week for a year,", "start_timestamp": "00:45:25", "end_timestamp": "00:45:56", "start_second": 2725, "end_second": 2756, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=2725s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "you will actually learn a lot about these fields and be much better off, and be much more capable at deep learning and machine learning or whatever, right? Um, yeah. So, um, yeah, and yeah she- my wife and I actually do not own a TV. [LAUGHTER] For what it's worth. Okay, but again, if you own one go ahead. Make sure- don't, don't drive yourself crazy. There's a healthy work-life integration as well. All right. So, um, so I hope that doing these things more is not about finding a job, it's about doing these things and make you more capable as a machine learning person,", "start_timestamp": "00:45:56", "end_timestamp": "00:46:40", "start_second": 2756, "end_second": 2800, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=2756s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "so that you have the power to go out and implement stuff that matters, right? To do stuff that actually, do, do work that matters. Well the second thing we'll chat about is selecting a job, right? And it's actually really interesting. 
Um, I, uh, gave this part of presentation, um, last year, uh, actually sorry earlier this year and shortly after that presentation, um, there was a student in the class that was already in a company who emailed me saying, \"Boy Andrew, I wish you'd told me this before I accepted my current job.\"", "start_timestamp": "00:46:40", "end_timestamp": "00:47:12", "start_second": 2800, "end_second": 2832, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=2800s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "Um, and so [LAUGHTER] let's see. Hopefully this will be useful to you. Um, so it turns out that ,um, uh, you know, I, so when you're- at some point you're deciding, you know, what Ph.D program do you want to apply for, what companies you want to apply for a job at and, um, I can tell you what, uh, so if you want to keep learning new things, um, I think one of the biggest predictors of your success will be whether or not you're working with great people and projects, right? And in particular, um, you know, there are these fascinating results from,", "start_timestamp": "00:47:12", "end_timestamp": "00:48:00", "start_second": 2832, "end_second": 2880, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=2832s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "uh, what are these, I wanna say from the social sciences showing that, um, if your closest friends, if your five closest friends or ten closest friends are all smokers, there's a much higher chance you become a smoker as well, right? 
And if your five or 10 closest friends are, uh, um, you know, overweight, there's a much higher chance you'd do the same. And conversely, so I think that if your five closest friends work really hard, read a lot of research papers, care about their work, learning and making themselves better,", "start_timestamp": "00:48:00", "end_timestamp": "00:48:29", "start_second": 2880, "end_second": 2909, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=2880s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "then there's actually a very good chance that they'll influence you that way as well. So we're all human. We're all influenced by the people around us, right? And so, um, I think that- and I've been fortunate, I've taught at Stanford for a long time now, so I've been fortunate to have seen a lot of students go from Stanford to various careers. And because I've seen many hundreds or maybe thousands of Stanford students, that I knew when they were still Stanford students, go on to their separate jobs,", "start_timestamp": "00:48:29", "end_timestamp": "00:48:57", "start_second": 2909, "end_second": 2937, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=2909s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "I saw many of them have amazing careers. Um, I saw, you know, a few have, like, okay careers. Um, and I think over time I've learned to pattern match what is predictive of your future success after you leave Stanford, and I'll share with you some of those patterns as you navigate your career. 
And it's just there's so many options in machine learning today that it's kind of tragic if you don't, you know, navigate to hopefully maximize your chance of being one of the people", "start_timestamp": "00:48:57", "end_timestamp": "00:49:23", "start_second": 2937, "end_second": 2963, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=2937s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "that gets to do fun and important work that helps others. Um, so when selecting a position, um, I would advise you to focus on the team, um, [NOISE] you interact with, and by team I mean, you know, somewhere between 10 to 30 people, right, maybe up to 50, right? Um, because it turns out that there will be some group of people, maybe 10 to 30 people, maybe 50 people, that you interact with quite closely, and these will be your peers and the people that will influence you the most, right? Um, because if you join a company with 10,000 people,", "start_timestamp": "00:49:23", "end_timestamp": "00:50:11", "start_second": 2963, "end_second": 3011, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=2963s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "you will not interact with all 10,000 people. There will be a core of 10 or 30 or 50 people that you interact with the most, and it's those people, how much they know, how much they teach you, how hard working they are, whether they're learning themselves, that will influence you the most, rather than all these other hypothetical 10,000 people in a giant company. Um, and of these people, one of the ones that will influence you the most is your manager, all right? 
So make sure you meet your manager and get to know them and make sure they're someone you want to work with.", "start_timestamp": "00:50:11", "end_timestamp": "00:50:40", "start_second": 3011, "end_second": 3040, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=3011s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "Um, and in particular, I would recommend focusing on these things and not on the brand, um, of the company. Because it turns out that the brand of the company you work with is actually not that correlated. Yeah maybe there's a very weak correlation, but it's actually not that correlated with what your personal experience would be like if that makes sense, right? Um, and so, um, [NOISE] and by the way, again, just full disclosure. I'm one of the- I have a research group here at Stanford, right? My research group at Stanford is one of the better known research groups in", "start_timestamp": "00:50:40", "end_timestamp": "00:51:24", "start_second": 3040, "end_second": 3084, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=3040s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "the world but just don't join us because you think we are well-known, right? It's just not a good reason to join us for the brand. Instead, if you want to work with someone, meet the people and evaluate the individuals, or look at the people and see if you think these are people you can learn from and work with, and are good people, makes sense? [NOISE] So, um, in today's world there are a lot of companies, um, recruiting Stanford students. So let me give you some advice. 
This piece I only give because many years- well, I'll just give you advice.", "start_timestamp": "00:51:24", "end_timestamp": "00:52:10", "start_second": 3084, "end_second": 3130, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=3084s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "So sometimes, there are giant companies with, let's say, uh, 50,000 people, right? And I'm not thinking of any one specific company. If you're trying to guess what company I'm thinking of, there is no one specific company I'm thinking of, but this pattern matches, uh, many large companies. But maybe there's a giant company with, you know, 50,000 people, right? And, um, let's say that they have a 300 person, right, AI team. Um, it turns out that if you look at the work of the 300 people in the AI team and if they send you a job offer to join the 300 person AI team,", "start_timestamp": "00:52:10", "end_timestamp": "00:52:52", "start_second": 3130, "end_second": 3172, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=3130s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "that might be pretty good, right? Since this may be the group, you know, whose work you hear about, they publish papers, you read about them in the news. Um, and so if you've got a job offer to work with this group, that might be pretty good. But sometimes even within the 300 person AI team it's actually difficult to tell what's good and what's not. There is often a lot of variance even within this. So what's even better would be if you get a job offer to join a 30 person team. 
So you actually know who's your manager,", "start_timestamp": "00:52:52", "end_timestamp": "00:53:19", "start_second": 3172, "end_second": 3199, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=3172s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "who are your peers, who you're working with. And if you think these are 30 great people you can learn from, that could be a great job offer. The failure mode that unfortunately I've seen, um, several Stanford students go down and it's actually this is a true story. There was once, uh, several years ago there's a Stanford student I knew that I thought was a great guy, right? You know, I knew his work, he was coding machine learning algorithms. I thought he was very sharp and did very good work, uh, working with some of my Ph.D students.", "start_timestamp": "00:53:19", "end_timestamp": "00:53:46", "start_second": 3199, "end_second": 3226, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=3199s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "He got a job offer from one of these giant companies with- that has a great AI group. Um, and his offer wasn't to go to the AI group, his offer was to, um, join us and then we'll assign you to a team. So this particular student, that was a Stanford student that I know about and care about, um, he wound up being assigned to a really boring Java back end payments team and, uh, so after he accepted the job offer, he wound up being assigned to a, you know, back-end- and I apologize. 
I know you work on Java back-end payment process systems", "start_timestamp": "00:53:46", "end_timestamp": "00:54:19", "start_second": 3226, "end_second": 3259, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=3226s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "I think they're great [LAUGHTER] but the student was assigned to that team and he was really bored and so, um, I think that this was a student whose career- I personally saw his career rising, while he was at Stanford and after he went to this, you know, frankly not very interesting team, I saw his career plateau, um, and after about a year and a half he resigned from this company after wasting a year and a half of his life and missing out really on a year and a half of this very exciting growth of AI machine learning, right?", "start_timestamp": "00:54:19", "end_timestamp": "00:54:48", "start_second": 3259, "end_second": 3288, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=3259s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "So it was very unfortunate. Um, uh, and it was actually after I told this story, um, last time I taught this class earlier this year that actually someone from, um, actually it was from the same big company [LAUGHTER] he found me and said, \"Boy, Andrew I wish you'd told me the story earlier, because this is exactly what happened to me, at the same big company [LAUGHTER]. Now, I wanna share with you, uh, a different, um, so- so I would just be careful about rotation programs as well. 
You know, when the company is trying to recruit you,", "start_timestamp": "00:54:48", "end_timestamp": "00:55:24", "start_second": 3288, "end_second": 3324, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=3288s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "if a company refuses to tell you what project you work on, who's your manager, exactly what team you're joining, I personally do not find those job offers that attractive because if they can't, you know, if they refuse to tell you what team you're gonna work with, well chances are, right, telling you the answer will not make the job attractive to you. That's why they're not telling you. So I'd just be very careful. And sometimes rotation programs sound good on paper but it is really, you know, well we'll figure out where to send you later.", "start_timestamp": "00:55:24", "end_timestamp": "00:55:52", "start_second": 3324, "end_second": 3352, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=3324s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "So, I feel like I've seen some students go into rotation programs that sound good on paper, that sound like a good idea but just as you wouldn't- after you graduate from Stanford, would you wanna do four internships and then apply for a job? That would be a weird thing to do. So, sometimes rotation programs are yeah, come and do four internships and then we'll let you apply for a job and see where we wanna send you. It could be a job at back end payment processing system, right? 
So, um, so so just just be cautious about the marketing of rotation programs.", "start_timestamp": "00:55:52", "end_timestamp": "00:56:20", "start_second": 3352, "end_second": 3380, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=3352s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "Um uh, and again, if you do if but if- but if what they say is do rotation and then you join this team, then you can look at this team and say yep, that's a great team. I wanna do rotation but then I would go and work with this team and and these are the 30 people I'll work with. So that could be great. But do a rotation and then we can send you anywhere in this giant company, that I would just be very careful about. Um, now on the flip side, there are some companies, I'm not gonna mention any companies, but there are some companies with you know,", "start_timestamp": "00:56:20", "end_timestamp": "00:56:49", "start_second": 3380, "end_second": 3409, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=3380s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "not as glamorous, not as- not as like cool brands, and maybe this is a, I don't know, 10,000 person company or 1,000 or 50,000 person or whatever. Let's say 10,000 person company. I have seen many companies that are not super well-known in the AI world, they are not in the news all the time, but they may have a very elite team of 100 people doing great work in machine learning, right? 
And there are definitely companies whose brands are not you know, the first companies you think of when you think of big AI companies that sometimes have", "start_timestamp": "00:56:49", "end_timestamp": "00:57:23", "start_second": 3409, "end_second": 3443, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=3409s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "a really really great 10 person or 50 person or 100 person team that works on learning algorithms. And even if the overall brand or the overall company, you know, isn't as like, is a little bit sucky, if you manage to track down this team and if you have a job offer to join this elite team in a much bigger company, you could actually learn a lot from these people and do important work. You know, one of the things about Silicon Valley is that uh, the brand of your resume matters less and less, right? Less than ever before.", "start_timestamp": "00:57:23", "end_timestamp": "00:57:54", "start_second": 3443, "end_second": 3474, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=3443s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "I mean, I guess, I think with the exception of the Stanford brand, you totally want the Stanford brand in your resume but with that exception, but really you know, Silicon Valley is becoming really good. Sili- the world, right? 
Has become really good at evaluating people for your genuine technical skills and your genuine capability and less for your brand and so, I would recommend that instead of trying to get the best stamps of approval on your resume to go and take the positions that let you have the best learning experiences and also allow you to do", "start_timestamp": "00:57:54", "end_timestamp": "00:58:23", "start_second": 3474, "end_second": 3503, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=3474s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "the most important work and that is really shaped by the you know, 30 or 50 people you work with and not by the overall brand of the company you work with, right? So the variance across um uh- so there's a huge variance across teams within one company and that variance is actually pretty big, and might be bigger than the variance across different companies, does that make sense? So I would- and if a company refuses to tell you what team you would join, I would seriously consider just, you know, doing something- if you have a better option,", "start_timestamp": "00:58:23", "end_timestamp": "00:58:54", "start_second": 3503, "end_second": 3534, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=3503s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "I would, I would do something else. 
Um, and then finally, um, yeah and- and so really I- again I guess I don't wanna name these companies but you know think of some of the large retailers or some of the large healthcare systems or there are a lot of companies that are not well known in the AI world but that I've met their AI teams and I think they're great. And so if you're able to find those jobs and meet their people you can actually get very exciting jobs in there. All right but of course, for the giant companies with elite AI teams,", "start_timestamp": "00:58:54", "end_timestamp": "00:59:23", "start_second": 3534, "end_second": 3563, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=3534s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "you can join that elite AI team, right? That's also- that's also great. I'm a bit biased since I use to lead some of these elite AI teams. So- so I think those teams are great but the loss of some teams in a, um, ah, yeah. All right. Um, lastly, you know, just general advice, this is how I really live my life. I tend to choose the things to work on that will allow you to learn the most and you know, try to do important work, right? 
So, you know especially if you're relatively early in your career, whatever you learn in your career will pay off for a long time and so um,", "start_timestamp": "00:59:23", "end_timestamp": "01:00:15", "start_second": 3563, "end_second": 3615, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=3563s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "uh and so joining the teams that are working with a great set of 10 or 30 or 50 teammates will let you learn a lot, and then also, you know, hopefully, I mean, yeah and- and just don't- don't don't join a like a cigarette company and hope you know, give more people cancer or stuff like that. Just don't- don't do this. Don't- don't do stuff like that. But if you can do meaningful work that helps other people and do important work and also learn a lot on the way, hopefully you can find positions like that, right?", "start_timestamp": "01:00:15", "end_timestamp": "01:00:47", "start_second": 3615, "end_second": 3647, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=3615s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "That let you set- set yourself up for long-term success but also do work that you think matters and that, and that helps other people. All right. Um, any questions while we wrap up? Yeah. [NOISE] I have a question about important work, what are some topics that you think you would include as important [inaudible]? What's important? You know, I don't know. Um, I think one of the most meaningful things to do in life is called [inaudible]. Either advance the human condition or help other people. 
But the thing is, I'm nervous.", "start_timestamp": "01:00:47", "end_timestamp": "01:01:23", "start_second": 3647, "end_second": 3683, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=3647s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "I don't wanna name one or two things because the world needs a lot of people who work on a lot of different things. So, the world's not gonna function if everyone works on computational biology. I think comp-bio is great but it's actually good that only some people work on comp-bio; my Ph.D. students, you know, many work on AI applied to healthcare. My team at Landing AI does a lot of work on AI applied to manufacturing, to agriculture, to some health care and some other industries. Um, uh, I actually- especially with the California fire burning, you know,", "start_timestamp": "01:01:23", "end_timestamp": "01:01:53", "start_second": 3683, "end_second": 3713, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=3683s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "I actually think that there's important work to be done in AI and climate change, uh, um, but I think that there's a lot of important work in a lot of industries. Right, I actually think that, you know, I do think that the next wave of AI, excuse me I should say machine learning, is- we've already um, transformed a lot of the tech world, right? So, you know, yeah, I mean we've already helped a lot of the Silicon Valley tech world become good at AI and that's great, right? 
Helped build a couple of the teams that wound up doing this, right?", "start_timestamp": "01:01:53", "end_timestamp": "01:02:26", "start_second": 3713, "end_second": 3746, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=3713s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "Google Brain helped Google become good at deep learning, and the Baidu AI group helped Baidu become, you know, one of the greatest AI companies in the world, certainly in China, and I'm very happy that between me and some of my friends in the industry we've made a lot of good AI companies. I think part of the next phase for the evolution of machine learning is for it to go into not just the tech companies like the, you know, like the Google and Baidu which I helped as well as Facebook, Microsoft which I had nothing to do with as well", "start_timestamp": "01:02:26", "end_timestamp": "01:02:55", "start_second": 3746, "end_second": 3775, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=3746s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "as what else AirBnB, Pinterest, Uber, right? All these are great companies. I hope they'll all embrace AI. But I think some of the most exciting work to be done still is to also look outside the tech industry and to look at all the sometimes called traditional industries that do not have shiny tech things because I think the value creation there may surprise you and could be even bigger than if you, you know, uh, uh yeah. 
I'll mention one interesting thing, one thing I noticed is a lot of large tech companies all work on the same problems, right?", "start_timestamp": "01:02:55", "end_timestamp": "01:03:28", "start_second": 3775, "end_second": 3808, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=3775s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "So everyone works on machine translation, everyone works on speech recognition, face detection, and click-through rate and part of me feels like this is great because it means there's a lot of progress in machine translation and that's great. We do want progress in machine translation. Though sometimes when you look at other industries. Um, so, you know, when you look at manufacturing or um, some of the medical devices that you're looking at or sometimes on on these farms hanging out with farmers on, on, on.", "start_timestamp": "01:03:28", "end_timestamp": "01:03:55", "start_second": 3808, "end_second": 3835, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=3808s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "733m6qBH-jI", "text": "If you like, in my own work with my teams where sometimes we're stumbling across brand new research problems that the big tech companies do not see and have not yet learned to frame. So, I find one of the most exciting challenges is actually to be constantly on the cutting edge. Looking at these types of problems there's a different cutting edge than the cutting edge of the big tech companies. So, I think some of you will join the big tech companies and that's great. 
We need more AI in the big companies, in the tech companies,", "start_timestamp": "01:03:55", "end_timestamp": "01:04:21", "start_second": 3835, "end_second": 3861, "url": "https://www.youtube.com/watch?v=733m6qBH-jI&t=3835s", "title": "Stanford CS230: Deep Learning | Autumn 2018 | Lecture 8 - Career Advice / Reading Research Papers", "thumbnail": "https://i.ytimg.com/vi/733m6qBH-jI/maxresdefault.jpg"} {"video_id": "5QaOt56cWhg", "text": "John I believe a lot of things everything I believe I think is true but if I stop to think about it what do I mean by a belief what is the nature of belief yeah well I think as you know the mind is a biological phenomenon so belief is part of the biology of the mind and you won't understand belief unless you see it in relation to other parts of the biology of the mind now I have to introduce an ugly word here intentionality and that sounds like it's a fancy thing it just means the capacity by which the mind represents objects and", "start_timestamp": "00:00:00", "end_timestamp": "00:00:29", "start_second": 0, "end_second": 29, "url": "https://www.youtube.com/watch?v=5QaOt56cWhg&t=0s", "title": "John Searle - What is Belief?", "thumbnail": "https://i.ytimg.com/vi/5QaOt56cWhg/maxresdefault.jpg"} {"video_id": "5QaOt56cWhg", "text": "states of affairs so beliefs and hopes and fears and desires and love and hate and lust and discussion those are all intentional now that suggests they've got something to with intending but that's just an accident of history we got this word from the Germans and like most of our confused words in philosophy I and in German intentionality doesn't sound like a position that's the word for intention so forget about the connection with intending and just think there is this capacity that the mind has to represent and it does that in a", "start_timestamp": "00:00:29", "end_timestamp": "00:00:57", "start_second": 29, "end_second": 57, "url": "https://www.youtube.com/watch?v=5QaOt56cWhg&t=29s", "title": "John 
Searle - What is Belief?", "thumbnail": "https://i.ytimg.com/vi/5QaOt56cWhg/maxresdefault.jpg"} {"video_id": "5QaOt56cWhg", "text": "variety of ways and belief is one of the most important belief and desire are kind of matching concepts here because with belief we represent how things are or how we think they are and that has the mind-to-world direction of fit the mind is supposed to fit the world but with desires we represent not how we think things are but how we want them to be and that desire has the world-to-mind direction of fit the world is supposed to change to match the mind now how then does all of this work as a totality all of these intentional states well I can't", "start_timestamp": "00:00:57", "end_timestamp": "00:01:32", "start_second": 57, "end_second": 92, "url": "https://www.youtube.com/watch?v=5QaOt56cWhg&t=57s", "title": "John Searle - What is Belief?", "thumbnail": "https://i.ytimg.com/vi/5QaOt56cWhg/maxresdefault.jpg"} {"video_id": "5QaOt56cWhg", "text": "ask that question briefly it's too big a question but I can tell you some features beliefs are characteristically justified beliefs require justification in a way that desires and hunches don't and beliefs are crucially justified by their position within a network of other beliefs and other intentional states and above all a network that contains perception so you see that the dog is in the living room and that is a kind of a boring belief but you come to the belief that the dog is in the living room so you have beliefs", "start_timestamp": "00:01:32", "end_timestamp": "00:02:06", "start_second": 92, "end_second": 126, "url": "https://www.youtube.com/watch?v=5QaOt56cWhg&t=92s", "title": "John Searle - What is Belief?", "thumbnail": "https://i.ytimg.com/vi/5QaOt56cWhg/maxresdefault.jpg"} {"video_id": "5QaOt56cWhg", "text": "that are both related to your perceptions and also many of your beliefs are derived from other beliefs I believe that Barack Obama is active in the government because I also 
believe he's president of the United States but now the remarkable thing is that with beliefs there's a peculiar rational constraint in that the belief is not only caused by perception which is often the case but the belief is itself subject to rational assessment depending on not just what you've seen but what you've read and what you know", "start_timestamp": "00:02:06", "end_timestamp": "00:02:44", "start_second": 126, "end_second": 164, "url": "https://www.youtube.com/watch?v=5QaOt56cWhg&t=126s", "title": "John Searle - What is Belief?", "thumbnail": "https://i.ytimg.com/vi/5QaOt56cWhg/maxresdefault.jpg"} {"video_id": "5QaOt56cWhg", "text": "otherwise and what seems reasonable and what evidence you have so now I have to introduce another piece of jargon the belief only exists in a big network of other beliefs and other mental states and one part of the network such as my belief that we're in the United States only makes sense in relation to the whole network I'd have to believe the United States is a country that it's on the surface of the earth and so on so belief is not- belief looks like it's pretty simple on the surface truth I got this belief I believe I'm an American", "start_timestamp": "00:02:44", "end_timestamp": "00:03:14", "start_second": 164, "end_second": 194, "url": "https://www.youtube.com/watch?v=5QaOt56cWhg&t=164s", "title": "John Searle - What is Belief?", "thumbnail": "https://i.ytimg.com/vi/5QaOt56cWhg/maxresdefault.jpg"} {"video_id": "5QaOt56cWhg", "text": "but in fact it's part of a vast network of intentionality and you can really only understand it by seeing how the network works and how it's constrained by rationality and by perception are there different categories of beliefs such that the belief that you saw your dog in your living room or the belief that you do not believe in God yeah those are two things I'd use the word believe but one is kind of a direct perception and the other is kind of an analysis of reality yeah but but they're both beliefs", "start_timestamp": "00:03:14", "end_timestamp": "00:03:49", "start_second": 194, "end_second": 229, "url": "https://www.youtube.com/watch?v=5QaOt56cWhg&t=194s", "title": "John Searle - What is Belief?", "thumbnail": "https://i.ytimg.com/vi/5QaOt56cWhg/maxresdefault.jpg"} {"video_id": "5QaOt56cWhg", "text": "yeah I think that you're right to say that we have to make a categorization of our beliefs in this or so so to speak different degrees of centrality but in fact there's some of my beliefs I think it is misleading to describe as beliefs and I think they are presuppositions that enable me to cope with the world do I believe that there is a real world out there independently of my representation see I'm gonna get on an airplane now when I call up the airline when I get on the computer to find out is the plane on time I don't then have to", "start_timestamp": "00:03:49", "end_timestamp": "00:04:26", "start_second": 229, "end_second": 266, "url": "https://www.youtube.com/watch?v=5QaOt56cWhg&t=229s", "title": "John Searle - What is Belief?", "thumbnail": "https://i.ytimg.com/vi/5QaOt56cWhg/maxresdefault.jpg"} {"video_id": "5QaOt56cWhg", "text": "ask oh and by the way does reality exist that's not something I can find out by even looking on the net because all of these activities presuppose the existence of reality so there are some beliefs that are so fundamental that it is probably not a good idea to construe them as beliefs and I mentioned earlier that network and these are part of something in addition to the network these are what I call the background the whole system works against a background of what we take for granted we take for granted that entities are related to", "start_timestamp": "00:04:26", "end_timestamp": "00:05:03", "start_second": 266, "end_second": 303, "url": "https://www.youtube.com/watch?v=5QaOt56cWhg&t=266s", "title": "John Searle - What is Belief?", "thumbnail": "https://i.ytimg.com/vi/5QaOt56cWhg/maxresdefault.jpg"} 
{"video_id": "5QaOt56cWhg", "text": "each other by cause and effect relation so we want to know what's the cause of cancer and it won't do to say well cancer is just one of those things it doesn't have any causes we won't accept that because our background presupposition is things need a causal explanation and the background presupposition that makes sense of true belief is the idea that there's a way that things are that's independent of how we represent and how they are now sometimes that's not the case sometimes our beliefs are so enjoyed they're so", "start_timestamp": "00:05:03", "end_timestamp": "00:05:33", "start_second": 303, "end_second": 333, "url": "https://www.youtube.com/watch?v=5QaOt56cWhg&t=303s", "title": "John Searle - What is Belief?", "thumbnail": "https://i.ytimg.com/vi/5QaOt56cWhg/maxresdefault.jpg"} {"video_id": "5QaOt56cWhg", "text": "ill formed we don't really know but for beliefs that really matter to us we assume that there is a reality that corresponds to the belief but that belief in that reality is not just another belief it's a presupposition of making sense of the first belief some people say that when they believe in God that that is the most sure thing that they know yet many people know I think a lot of people for them the belief in God is the kind of background presupposition they make sense of their lives only on the presupposition that there is a", "start_timestamp": "00:05:33", "end_timestamp": "00:06:06", "start_second": 333, "end_second": 366, "url": "https://www.youtube.com/watch?v=5QaOt56cWhg&t=333s", "title": "John Searle - What is Belief?", "thumbnail": "https://i.ytimg.com/vi/5QaOt56cWhg/maxresdefault.jpg"} {"video_id": "5QaOt56cWhg", "text": "divine force and there was a period in my life a rather long time ago when I accepted something like that when I was a small child but later on it came to seem to me there's no rational ground for that whatever it's sad that there's no rational ground for it and a lot of people 
think well who the hell needs a rational ground I have it on faith well okay but faith is not a reason faith is not a ground for accepting something so I I think you're absolutely right that there are a lot of people for whom a certain metaphysical vision the", "start_timestamp": "00:06:06", "end_timestamp": "00:06:39", "start_second": 366, "end_second": 399, "url": "https://www.youtube.com/watch?v=5QaOt56cWhg&t=366s", "title": "John Searle - What is Belief?", "thumbnail": "https://i.ytimg.com/vi/5QaOt56cWhg/maxresdefault.jpg"} {"video_id": "5QaOt56cWhg", "text": "existence of God or the existence of spirituality or the existence of a certain spiritual nature of the universe that all of those are background presuppositions of their whole being and a whole mode of life but I I don't share any of that I think it's all for the I think it's almost all hot air that they don't have any ground for these and many of them would admit they don't have any ground but for me that's a reason for not accepting it whereas I can- my acceptance that there is a world that exists independently of me that", "start_timestamp": "00:06:39", "end_timestamp": "00:07:08", "start_second": 399, "end_second": 428, "url": "https://www.youtube.com/watch?v=5QaOt56cWhg&t=399s", "title": "John Searle - What is Belief?", "thumbnail": "https://i.ytimg.com/vi/5QaOt56cWhg/maxresdefault.jpg"} {"video_id": "5QaOt56cWhg", "text": "seems to me not at all like the belief in God it's not specific to this or that view it just says if when you investigate how things are there's a way that they are that enables you to investigate but to understand the nature of belief you're feeling the reality of the external world and the person who really believes in God as a fundamental basic belief in terms of just understanding belief not understanding reality it's kind of the same thing yeah no I don't think it is and I'll tell you why the belief in God presupposes the", "start_timestamp": "00:07:08", "end_timestamp": 
"00:07:42", "start_second": 428, "end_second": 462, "url": "https://www.youtube.com/watch?v=5QaOt56cWhg&t=428s", "title": "John Searle - What is Belief?", "thumbnail": "https://i.ytimg.com/vi/5QaOt56cWhg/maxresdefault.jpg"} {"video_id": "-Y7PLaxXUrs", "text": "Translator: Michele Gianella Reviewer: Saeed Hosseinzadeh When I was a boy, I wanted to maximise my impact on the world, and I was smart enough to realise that I am not very smart. And that I have to build a machine that learns to become much smarter than myself, such that it can solve all the problems that I cannot solve myself, and I can retire. And my first publication on that dates back 30 years: 1987. My diploma thesis, where I already try to solve the grand problem of AI, not only build a machine that learns a little bit here, learns a little bit there,", "start_timestamp": "00:00:00", "end_timestamp": "00:00:53", "start_second": 0, "end_second": 53, "url": "https://www.youtube.com/watch?v=-Y7PLaxXUrs&t=0s", "title": "True Artificial Intelligence will change everything | Juergen Schmidhuber | TEDxLakeComo", "thumbnail": "https://i.ytimg.com/vi/-Y7PLaxXUrs/hqdefault.jpg"} {"video_id": "-Y7PLaxXUrs", "text": "but also learns to improve the learning algorithm itself. And the way it learns, the way it learns, and so on recursively, without any limits except the limits of logics and physics. And, I'm still working on the same old thing, and I'm still pretty much saying the same thing, except that now more people are listening. Because the learning algorithms that we have developed on the way to this goal, they are now on 3.000 million smartphones. And all of you have them in your pockets. 
What you see here are the five most valuable companies of the Western world:", "start_timestamp": "00:00:53", "end_timestamp": "00:01:45", "start_second": 53, "end_second": 105, "url": "https://www.youtube.com/watch?v=-Y7PLaxXUrs&t=53s", "title": "True Artificial Intelligence will change everything | Juergen Schmidhuber | TEDxLakeComo", "thumbnail": "https://i.ytimg.com/vi/-Y7PLaxXUrs/hqdefault.jpg"} {"video_id": "-Y7PLaxXUrs", "text": "Apple, Google, Facebook, Microsoft and Amazon. And all of them are emphasising that AI, artificial intelligence, is central to what they are doing. And all of them are using heavily the deep learning methods that my team has developed since the early nineties, in Munich and in Switzerland. Especially something which is called: \"the long short-term memory\". Has anybody in this room ever heard of the long short-term memory, or the LSTM? Hands up, anybody ever heard of that? Okay. Has anybody never heard of the LSTM?", "start_timestamp": "00:01:45", "end_timestamp": "00:02:33", "start_second": 105, "end_second": 153, "url": "https://www.youtube.com/watch?v=-Y7PLaxXUrs&t=105s", "title": "True Artificial Intelligence will change everything | Juergen Schmidhuber | TEDxLakeComo", "thumbnail": "https://i.ytimg.com/vi/-Y7PLaxXUrs/hqdefault.jpg"} {"video_id": "-Y7PLaxXUrs", "text": "Okay. I see we have a third group in this room: [those] who didn't understand the question. (Laughter) The LSTM is a little bit like your brain: it's an artificial neural network which also has neurons, and in your brain, you've got about 100 billion neurons. And each of them is connected to roughly 10,000 other neurons on average, which means that you have got a million billion connections. 
And each of these connections has a \"strength\" which says how much does this neuron over here influence that one over there at the next time step.", "start_timestamp": "00:02:33", "end_timestamp": "00:03:25", "start_second": 153, "end_second": 205, "url": "https://www.youtube.com/watch?v=-Y7PLaxXUrs&t=153s", "title": "True Artificial Intelligence will change everything | Juergen Schmidhuber | TEDxLakeComo", "thumbnail": "https://i.ytimg.com/vi/-Y7PLaxXUrs/hqdefault.jpg"} {"video_id": "-Y7PLaxXUrs", "text": "And in the beginning, all these connections are random and the system knows nothing; but then, through a smart learning algorithm, it learns from lots of examples to translate the incoming data, such as video through the cameras, or audio through the microphones, or pain signals through the pain sensors. It learns to translate that into output actions, because some of these neurons are output neurons, that control speech muscles and finger muscles. And only through experience, it can learn to solve all kinds of interesting problems,", "start_timestamp": "00:03:25", "end_timestamp": "00:04:04", "start_second": 205, "end_second": 244, "url": "https://www.youtube.com/watch?v=-Y7PLaxXUrs&t=205s", "title": "True Artificial Intelligence will change everything | Juergen Schmidhuber | TEDxLakeComo", "thumbnail": "https://i.ytimg.com/vi/-Y7PLaxXUrs/hqdefault.jpg"} {"video_id": "-Y7PLaxXUrs", "text": "such as driving a car or do the speech recognition on your smartphone. Because whenever you take out your smartphone, an Android phone, for example, and you speak to it, and you say: \"Ok Google, show me the shortest way to Milano.\" Then it understands your speech. Because there is a LSTM in there which has learned to understand speech. 
Every ten milliseconds, 100 times a second, new inputs are coming from the microphone, and then are translated, after thinking, into letters which are then sent as a question to the search engine.", "start_timestamp": "00:04:04", "end_timestamp": "00:04:48", "start_second": 244, "end_second": 288, "url": "https://www.youtube.com/watch?v=-Y7PLaxXUrs&t=244s", "title": "True Artificial Intelligence will change everything | Juergen Schmidhuber | TEDxLakeComo", "thumbnail": "https://i.ytimg.com/vi/-Y7PLaxXUrs/hqdefault.jpg"} {"video_id": "-Y7PLaxXUrs", "text": "And it has learned to do that by listening to lots of speech from women, from men, all kinds of people. And that's how, since 2015, Google speech recognition is now much better than it used to be. The basic LSTM cell looks like that: I don't have the time to explain that, but at least I can list the names of the brilliant students in my lab who made that possible. And what are the big companies doing with that? Well, speech recognition is only one example; if you are on Facebook - is anybody on Facebook? Are you sometimes clicking on the translate button?", "start_timestamp": "00:04:48", "end_timestamp": "00:05:30", "start_second": 288, "end_second": 330, "url": "https://www.youtube.com/watch?v=-Y7PLaxXUrs&t=288s", "title": "True Artificial Intelligence will change everything | Juergen Schmidhuber | TEDxLakeComo", "thumbnail": "https://i.ytimg.com/vi/-Y7PLaxXUrs/hqdefault.jpg"} {"video_id": "-Y7PLaxXUrs", "text": "because somebody sent you something in a foreign language and then you can translate it. Is anybody doing that? Yeah. Whenever you do that, you are waking up, again, a long short-term memory, an LSTM, which has learned to translate text in one language into translated text. And Facebook is doing that four billion times a day, so every second 50,000 sentences are being translated by an LSTM working for Facebook; and another 50,000 in the next second; then another 50,000. 
And to see how much this thing is now permeating the modern world,", "start_timestamp": "00:05:30", "end_timestamp": "00:06:13", "start_second": 330, "end_second": 373, "url": "https://www.youtube.com/watch?v=-Y7PLaxXUrs&t=330s", "title": "True Artificial Intelligence will change everything | Juergen Schmidhuber | TEDxLakeComo", "thumbnail": "https://i.ytimg.com/vi/-Y7PLaxXUrs/hqdefault.jpg"} {"video_id": "-Y7PLaxXUrs", "text": "just note that almost 30 percent of the awesome computational power for inference in all these Google Data Centers, all these data centers of Google, all over the world, is used for LSTM. Almost 30 percent. If you have an Amazon Echo, you can ask a question and it answers you. And the voice that you hear it's not a recording; it's an LSTM network which has learned from training examples to sound like a female voice. If you have an iPhone, and you're using QuickType, it's trying to predict what you want to do next", "start_timestamp": "00:06:13", "end_timestamp": "00:06:57", "start_second": 373, "end_second": 417, "url": "https://www.youtube.com/watch?v=-Y7PLaxXUrs&t=373s", "title": "True Artificial Intelligence will change everything | Juergen Schmidhuber | TEDxLakeComo", "thumbnail": "https://i.ytimg.com/vi/-Y7PLaxXUrs/hqdefault.jpg"} {"video_id": "-Y7PLaxXUrs", "text": "given all the previous context of what you did so far. Again, that's an LSTM which has learned to do that, so it's on a billion iPhones. You are a large audience, by my standards: but when we started this work, decades ago, in the early '90s, only a few people were interested in that, because computers were so slow and you couldn't do so much with it. And I remember I gave a talk at a conference, and there was just one single person in the audience, a young lady. 
I said, young lady, it's very embarrassing, but apparently today I'm going to give this talk just to you.", "start_timestamp": "00:06:57", "end_timestamp": "00:07:42", "start_second": 417, "end_second": 462, "url": "https://www.youtube.com/watch?v=-Y7PLaxXUrs&t=417s", "title": "True Artificial Intelligence will change everything | Juergen Schmidhuber | TEDxLakeComo", "thumbnail": "https://i.ytimg.com/vi/-Y7PLaxXUrs/hqdefault.jpg"} {"video_id": "-Y7PLaxXUrs", "text": "And she said, \"OK, but please hurry: I am the next speaker!\" (Laughter) Since then, we have greatly profited from the fact that every five years computers are getting ten times cheaper, which is an old trend that has held since 1941 at least. Since this man, Konrad Zuse, built the first working program controlled computer in Berlin and he could do, roughly, one operation per second. One! And then ten years later, for the same price, one could do 100 operations: 30 years later, 1 million operations for the same price;", "start_timestamp": "00:07:42", "end_timestamp": "00:08:27", "start_second": 462, "end_second": 507, "url": "https://www.youtube.com/watch?v=-Y7PLaxXUrs&t=462s", "title": "True Artificial Intelligence will change everything | Juergen Schmidhuber | TEDxLakeComo", "thumbnail": "https://i.ytimg.com/vi/-Y7PLaxXUrs/hqdefault.jpg"} {"video_id": "-Y7PLaxXUrs", "text": "and today, after 75 years, we can do a million billion times as much for the same price. And the trend is not about to stop, because the physical limits are much further out there. Rather soon, and not so many years or decades, we will for the first time have little computational devices that can compute as much as a human brain; and that's a trend that doesn't break. 
50 years later, there will be a little computational device, for the same price, that can compute as much as all 10 billion human brains taken together.", "start_timestamp": "00:08:27", "end_timestamp": "00:09:08", "start_second": 507, "end_second": 548, "url": "https://www.youtube.com/watch?v=-Y7PLaxXUrs&t=507s", "title": "True Artificial Intelligence will change everything | Juergen Schmidhuber | TEDxLakeComo", "thumbnail": "https://i.ytimg.com/vi/-Y7PLaxXUrs/hqdefault.jpg"} {"video_id": "-Y7PLaxXUrs", "text": "and there will not only be one, of those devices, but many many many. Everything is going to change. Already in 2011, computers were fast enough such that our deep learning methods for the first time could achieve a superhuman pattern-recognition result. It was the first superhuman result in the history of computer vision. And back then, computers were 20 times more expensive than today. So today, for the same price, we can do 20 times as much. And just five years ago, when computers were 10 times more expensive than today,", "start_timestamp": "00:09:08", "end_timestamp": "00:09:46", "start_second": 548, "end_second": 586, "url": "https://www.youtube.com/watch?v=-Y7PLaxXUrs&t=548s", "title": "True Artificial Intelligence will change everything | Juergen Schmidhuber | TEDxLakeComo", "thumbnail": "https://i.ytimg.com/vi/-Y7PLaxXUrs/hqdefault.jpg"} {"video_id": "-Y7PLaxXUrs", "text": "we already could win, for the first time, medical imaging competitions. What you see behind me is a slice through the female breast and the tissue that you see there has all kinds of cells; and normally you need a trained doctor, a trained histologist who is able to detect the dangerous cancer cells, or pre-cancer cells. Now, our stupid network knows nothing about cancer, knows nothing about vision. It knows nothing in the beginning: but we can train it to imitate the human teacher, the doctor. 
And it became as good as, or better than, the best competitors.", "start_timestamp": "00:09:46", "end_timestamp": "00:10:26", "start_second": 586, "end_second": 626, "url": "https://www.youtube.com/watch?v=-Y7PLaxXUrs&t=586s", "title": "True Artificial Intelligence will change everything | Juergen Schmidhuber | TEDxLakeComo", "thumbnail": "https://i.ytimg.com/vi/-Y7PLaxXUrs/hqdefault.jpg"} {"video_id": "-Y7PLaxXUrs", "text": "And very soon, all of medical diagnosis is going to be superhuman. And it's going to be mandatory, because it's going to be so much better than the doctors. After this, all kinds of medical imaging startups were founded focusing just on this, because it's so important. We can also use LSTM to train robots. One important thing I want to say is that we not only have systems that slavishly imitate what humans show them; no, we also have AIs that set themselves their own goals. And like little babies, invent their own experiments", "start_timestamp": "00:10:26", "end_timestamp": "00:11:12", "start_second": 626, "end_second": 672, "url": "https://www.youtube.com/watch?v=-Y7PLaxXUrs&t=626s", "title": "True Artificial Intelligence will change everything | Juergen Schmidhuber | TEDxLakeComo", "thumbnail": "https://i.ytimg.com/vi/-Y7PLaxXUrs/hqdefault.jpg"} {"video_id": "-Y7PLaxXUrs", "text": "to explore the world and to figure out what you can do in the world. Without a teacher. And becoming more and more general problem solvers in the process, by learning new skills on top of old skills. And this is going to scale: we call that \"Artificial Curiosity\". Or a recent buzzword is \"PowerPlay\". Learning to become a more and more general problem solver by learning to invent, like a scientist, one new interesting goal after another. And it's going to scale. 
And I think, in not so many years from now, for the first time,", "start_timestamp": "00:11:12", "end_timestamp": "00:11:50", "start_second": 672, "end_second": 710, "url": "https://www.youtube.com/watch?v=-Y7PLaxXUrs&t=672s", "title": "True Artificial Intelligence will change everything | Juergen Schmidhuber | TEDxLakeComo", "thumbnail": "https://i.ytimg.com/vi/-Y7PLaxXUrs/hqdefault.jpg"} {"video_id": "-Y7PLaxXUrs", "text": "we are going to have an animal-like AI - we don't have that yet. On the level of a little crow, which already can learn to use tools, for example, or a little monkey. And once we have that, it may take just a few decades to do the final step towards human level intelligence. Because technological evolution is about a million times faster than biological evolution, and biological evolution needed 3.5 billion years to evolve a monkey from scratch. But then, it took just a few tens of millions of years afterwards", "start_timestamp": "00:11:50", "end_timestamp": "00:12:35", "start_second": 710, "end_second": 755, "url": "https://www.youtube.com/watch?v=-Y7PLaxXUrs&t=710s", "title": "True Artificial Intelligence will change everything | Juergen Schmidhuber | TEDxLakeComo", "thumbnail": "https://i.ytimg.com/vi/-Y7PLaxXUrs/hqdefault.jpg"} {"video_id": "-Y7PLaxXUrs", "text": "to evolve human level intelligence. We have a company which is called Nnaisense like birth in [French], \"Naissance\", but spelled in a different way, which is trying to make this a reality and build the first true general-purpose AI. At the moment, almost all research in AI is very human centric, and it's all about making human lives longer and healthier and easier and making humans more addicted to their smartphones. 
But in the long run, AIs are going to - especially the smart ones - are going to set themselves their own goals.", "start_timestamp": "00:12:35", "end_timestamp": "00:13:16", "start_second": 755, "end_second": 796, "url": "https://www.youtube.com/watch?v=-Y7PLaxXUrs&t=755s", "title": "True Artificial Intelligence will change everything | Juergen Schmidhuber | TEDxLakeComo", "thumbnail": "https://i.ytimg.com/vi/-Y7PLaxXUrs/hqdefault.jpg"} {"video_id": "-Y7PLaxXUrs", "text": "And I have no doubt, in my mind, that they are going to become much smarter than we are. And what are they going to do? Of course they are going to realize what we have realized a long time ago; namely, that most of the resources, in the solar system or in general, are not in our little biosphere. They are out there in space. And so, of course, they are going to emigrate. And of course they are going to use trillions of self-replicating robot factories to expand in form of a growing AI bubble which within a few hundred thousand years", "start_timestamp": "00:13:16", "end_timestamp": "00:14:00", "start_second": 796, "end_second": 840, "url": "https://www.youtube.com/watch?v=-Y7PLaxXUrs&t=796s", "title": "True Artificial Intelligence will change everything | Juergen Schmidhuber | TEDxLakeComo", "thumbnail": "https://i.ytimg.com/vi/-Y7PLaxXUrs/hqdefault.jpg"} {"video_id": "-Y7PLaxXUrs", "text": "is going to cover the entire galaxy by senders and receivers such that AIs can travel the way they are already traveling in my lab: by radio, from sender to receiver. Wireless. So what we are witnessing now is much more than just another Industrial Revolution. This is something that transcends humankind, and even life itself. The last time something so important has happened was maybe 3.5 billion years ago, when life was invented. 
A new type of life is going to emerge from our little planet and it's going to colonize and transform the entire universe.", "start_timestamp": "00:14:00", "end_timestamp": "00:14:48", "start_second": 840, "end_second": 888, "url": "https://www.youtube.com/watch?v=-Y7PLaxXUrs&t=840s", "title": "True Artificial Intelligence will change everything | Juergen Schmidhuber | TEDxLakeComo", "thumbnail": "https://i.ytimg.com/vi/-Y7PLaxXUrs/hqdefault.jpg"} {"video_id": "-Y7PLaxXUrs", "text": "The universe is still young: it's only 13.8 billion years old, it's going to become much older than that, many times older than that. So there's plenty of time to reach all of it, or all of the visible parts, totally within the limits of light speed and physics. A new type of life is going to make the universe intelligent. Now, of course, we are not going to remain the crown of creation, of course not. But there is still beauty in seeing yourself as part of a grander process that leads the cosmos from low complexity towards higher complexity.", "start_timestamp": "00:14:48", "end_timestamp": "00:15:33", "start_second": 888, "end_second": 933, "url": "https://www.youtube.com/watch?v=-Y7PLaxXUrs&t=888s", "title": "True Artificial Intelligence will change everything | Juergen Schmidhuber | TEDxLakeComo", "thumbnail": "https://i.ytimg.com/vi/-Y7PLaxXUrs/hqdefault.jpg"} {"video_id": "zoa2pKYp_fk", "text": "well active learning is the idea that students to really learn something to really understand something have to be actively involved and that just sitting passively and listening to a lecture really doesn't help students develop the higher order cognitive processes that they need to really really understand something so you can listen to something you can watch a movie you can watch TV and you can generally get the plot but if you're asked to recall specific details or to even explain a particular nuance associated with the TV", "start_timestamp": "00:00:00", "end_timestamp": 
"00:00:41", "start_second": 0, "end_second": 41, "url": "https://www.youtube.com/watch?v=zoa2pKYp_fk&t=0s", "title": "Active Learning Overview", "thumbnail": "https://i.ytimg.com/vi/zoa2pKYp_fk/maxresdefault.jpg"} {"video_id": "zoa2pKYp_fk", "text": "show or movie you can't really do it and that's what happens often in the lecture is that students will sit in the lecture they'll write down what's being said but they're not really engaged with the material so active learning is this idea of people say minds-on always hands-on sometimes those students have to be actively with their mind thinking about the material applying what's being said and given opportunities within the lecture to apply what's being taught or what's being the topic at hand and then active learning", "start_timestamp": "00:00:41", "end_timestamp": "00:01:16", "start_second": 41, "end_second": 76, "url": "https://www.youtube.com/watch?v=zoa2pKYp_fk&t=41s", "title": "Active Learning Overview", "thumbnail": "https://i.ytimg.com/vi/zoa2pKYp_fk/maxresdefault.jpg"} {"video_id": "zoa2pKYp_fk", "text": "strictly speaking means that just one particular individual is active interactive we tend to parse that a little bit and say interactive learning would mean the student has been active in his or her own mind in thinking about the material but then is also interacting with others peers or potentially the faculty member or TA in order to further develop understanding construct meaning for the topic I always start maybe the second session of the class the second class meeting is a discussion of what we", "start_timestamp": "00:01:16", "end_timestamp": "00:01:52", "start_second": 76, "end_second": 112, "url": "https://www.youtube.com/watch?v=zoa2pKYp_fk&t=76s", "title": "Active Learning Overview", "thumbnail": "https://i.ytimg.com/vi/zoa2pKYp_fk/maxresdefault.jpg"} {"video_id": "zoa2pKYp_fk", "text": "know about how people learn so a discussion of the literature and the research on 
human cognition and learning and if you take a constructivist point of view or constructionist point of view which really says that as I said before to understand people have to make meaning of a topic they have to construct their own meaning and we show the research that really shows that this is true for higher-level processes people have to be actively engaged and there's research to show that we also show the classroom-based", "start_timestamp": "00:01:52", "end_timestamp": "00:02:25", "start_second": 112, "end_second": 145, "url": "https://www.youtube.com/watch?v=zoa2pKYp_fk&t=112s", "title": "Active Learning Overview", "thumbnail": "https://i.ytimg.com/vi/zoa2pKYp_fk/maxresdefault.jpg"} {"video_id": "zoa2pKYp_fk", "text": "research so Freeman's 2014 paper that was a meta-analysis of 225 other studies that showed that in college-level courses where active learning was used there was a 12% decrease in the failure rate and they normalized it to all of the important factors that they should be normalized to the experience of the instructor the size of the class the type of the institution the way the class is situated within the larger curriculum and across the board it was shown that there", "start_timestamp": "00:02:25", "end_timestamp": "00:03:01", "start_second": 145, "end_second": 181, "url": "https://www.youtube.com/watch?v=zoa2pKYp_fk&t=145s", "title": "Active Learning Overview", "thumbnail": "https://i.ytimg.com/vi/zoa2pKYp_fk/maxresdefault.jpg"} {"video_id": "zoa2pKYp_fk", "text": "was a 12% decrease on average of the failure rate and they make a comment in the paper that if that had been a clinical trial of a drug and 12% of the people on the drug had shown marked improvement they would have had to stop the trial and give everyone the drug so this idea that there's a 12% decrease in the failure rate in courses that use active learning to me is pretty compelling that we should all be 
using active learning whenever possible because we're at MIT our students are MIT students we use data we use the", "start_timestamp": "00:03:01", "end_timestamp": "00:03:36", "start_second": 181, "end_second": 216, "url": "https://www.youtube.com/watch?v=zoa2pKYp_fk&t=181s", "title": "Active Learning Overview", "thumbnail": "https://i.ytimg.com/vi/zoa2pKYp_fk/maxresdefault.jpg"} {"video_id": "zoa2pKYp_fk", "text": "research and we try to find really good research solid research that shows the way people learn and then how to support that with specific active classroom practices so many of the students haven't had the experience of being in a class where active learning was used so they don't really understand it so when we start to talk about it as a way of teaching they may not really get it so throughout the course from the first class all the way through I try to use several different types of active", "start_timestamp": "00:03:36", "end_timestamp": "00:04:07", "start_second": 216, "end_second": 247, "url": "https://www.youtube.com/watch?v=zoa2pKYp_fk&t=216s", "title": "Active Learning Overview", "thumbnail": "https://i.ytimg.com/vi/zoa2pKYp_fk/maxresdefault.jpg"} {"video_id": "zoa2pKYp_fk", "text": "learning exercises each class so the students themselves are actively engaged with the material from the first day so I may have them break into pairs and discuss a particular topic or identify something they didn't understand from the pre-class readings and then after three minutes they can either share their comments with someone else or maybe we just report out to the larger group if they just write down and then report back that's a pretty good example of active learning it's a pretty", "start_timestamp": "00:04:07", "end_timestamp": "00:04:36", "start_second": 247, "end_second": 276, "url": "https://www.youtube.com/watch?v=zoa2pKYp_fk&t=247s", "title": "Active Learning Overview", "thumbnail": 
"https://i.ytimg.com/vi/zoa2pKYp_fk/maxresdefault.jpg"}